
A Priori Generalization Error Analysis of Two-Layer Neural Networks for Solving High Dimensional Schrödinger Eigenvalue Problems

 Added by Yulong Lu
Publication date: 2021
Language: English





This paper analyzes the generalization error of two-layer neural networks for computing the ground state of the Schrödinger operator on a $d$-dimensional hypercube. We prove that the convergence rate of the generalization error is independent of the dimension $d$, under the a priori assumption that the ground state lies in a spectral Barron space. We verify this assumption by proving a new regularity estimate for the ground state in the spectral Barron space. The latter is achieved by a fixed-point argument based on the Krein-Rutman theorem.
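
For orientation, here is a schematic of the variational setup underlying results of this kind; the exact boundary conditions, network normalization, and form of the spectral Barron norm used in the paper may differ. The ground state minimizes the Rayleigh quotient of the Schrödinger operator $-\Delta + V$ on the hypercube, and the trial function is replaced by a two-layer network:

$$ \lambda_1 \;=\; \min_{u \neq 0} \frac{\int_{[0,1]^d} \big(|\nabla u|^2 + V u^2\big)\,dx}{\int_{[0,1]^d} u^2\,dx}, \qquad u_\theta(x) \;=\; \sum_{i=1}^{m} a_i\,\sigma(w_i \cdot x + b_i). $$

The generalization error then measures the gap between the empirical energy, evaluated on finitely many sampled points, and the exact variational energy above; the dimension-independent rate rests on the assumption that the ground state has finite spectral Barron norm.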



Related research

Brendan Keith (2020)
A number of non-standard finite element methods have been proposed in recent years, each of which derives from a specific class of PDE-constrained norm minimization problems. The most notable examples are $\mathcal{L}\mathcal{L}^*$ methods. In this work, we argue that all high-order methods in this class should be expected to deliver substandard uniform $h$-refinement convergence rates. In fact, one may not even see rates proportional to the polynomial order $p > 1$ when the exact solution is a constant function. We show that the convergence rate is limited by the regularity of an extraneous Lagrange multiplier variable which naturally appears via a saddle-point analysis. In turn, limited convergence rates appear because the regularity of this Lagrange multiplier is determined, in part, by the geometry of the domain. Numerical experiments support our conclusions.
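
To make the role of that Lagrange multiplier concrete, a generic PDE-constrained norm minimization and its saddle-point optimality system look schematically as follows (this is not the specific $\mathcal{L}\mathcal{L}^*$ formulation): minimize $\tfrac12\|u\|_U^2$ subject to $Bu = f$, with Lagrangian $J(u,\lambda) = \tfrac12\|u\|_U^2 + \langle \lambda,\, Bu - f\rangle$ and first-order conditions

$$ (u, v)_U + \langle \lambda,\, Bv\rangle = 0 \quad \forall v, \qquad \langle \mu,\, Bu - f\rangle = 0 \quad \forall \mu. $$

The multiplier $\lambda$ is an auxiliary variable introduced by the analysis; the point of the abstract is that the attainable $h$-refinement rate is capped by the regularity of $\lambda$, which depends in part on the geometry of the domain rather than on the smoothness of $u$ alone.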
Data-assisted reconstruction algorithms, incorporating trained neural networks, are a novel paradigm for solving inverse problems. One approach is to first apply a classical reconstruction method and then apply a neural network to improve its solution. Empirical evidence shows that such two-step methods provide high-quality reconstructions, but they lack a convergence analysis. In this paper we formalize the use of such two-step approaches within classical regularization theory. We propose data-consistent neural networks that we combine with classical regularization methods. This yields a data-driven regularization method for which we provide a full convergence analysis with respect to noise. Numerical simulations show that, compared to standard two-step deep learning methods, our approach provides better stability with respect to structural changes in the test set, while performing similarly on test data similar to the training set. Our method provides a stable solution of inverse problems that exploits both the known nonlinear forward model and the desired solution manifold learned from data.
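
As a schematic illustration of the two-step idea, here is a minimal sketch assuming a linear forward operator A, a placeholder post-processing network net, and a simple gradient-based data-consistency refinement; none of these choices are claimed to match the paper's construction.

import numpy as np

def two_step_reconstruction(A, y, net, alpha=1e-2, dc_steps=50):
    """Schematic two-step reconstruction (illustrative sketch, not the paper's method).
    Step 1: classical Tikhonov-regularized reconstruction.
    Step 2: learned post-processing `net` (any callable mapping R^n -> R^n),
            followed by gradient steps on the data-fit term ||A x - y||^2
            to keep the corrected output consistent with the measured data."""
    n = A.shape[1]
    # Step 1: classical regularization (Tikhonov)
    x0 = np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ y)
    # Step 2: neural-network correction
    x = net(x0)
    # simple data-consistency refinement (step size from the spectral norm of A)
    lr = 1.0 / np.linalg.norm(A, 2) ** 2
    for _ in range(dc_steps):
        x = x - lr * (A.T @ (A @ x - y))
    return x

# toy usage with an identity "network" as a stand-in for a trained model
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 20))
x_true = rng.standard_normal(20)
y = A @ x_true + 0.01 * rng.standard_normal(30)
x_rec = two_step_reconstruction(A, y, net=lambda z: z)
print(np.linalg.norm(x_rec - x_true))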
A novel orthogonalization-free method, together with two specific algorithms, is proposed to solve extreme eigenvalue problems. On top of gradient-based algorithms, the proposed algorithms modify the multi-column gradient such that earlier columns are decoupled from later ones. Global convergence to eigenvectors instead of the eigenspace is guaranteed almost surely. Locally, the algorithms converge linearly, with a convergence rate depending on the eigengaps. Momentum acceleration, exact linesearch, and column locking are incorporated to further accelerate both algorithms and reduce their computational costs. We demonstrate the efficiency of both algorithms on several random matrices with different spectral distributions and on matrices from computational chemistry.
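
For context, below is a minimal baseline of the kind of gradient iteration such methods build on: plain descent on the Rayleigh quotient for a single extreme eigenpair. It is only a sketch; the triangular decoupling of the multi-column gradient, momentum acceleration, exact linesearch, and column locking described above are not reproduced here.

import numpy as np

def rayleigh_gradient_descent(A, steps=500, lr=None, seed=0):
    """Baseline sketch (not the paper's orthogonalization-free method):
    gradient descent on the Rayleigh quotient of a symmetric matrix A,
    converging to the smallest eigenpair for a generic starting vector."""
    n = A.shape[0]
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(n)
    x /= np.linalg.norm(x)
    if lr is None:
        lr = 0.5 / np.linalg.norm(A, 2)   # conservative fixed step size
    for _ in range(steps):
        theta = x @ (A @ x)               # current Rayleigh quotient
        grad = 2.0 * (A @ x - theta * x)  # Rayleigh-quotient gradient on the unit sphere
        x = x - lr * grad
        x /= np.linalg.norm(x)            # renormalize the single column
    return x @ (A @ x), x

# toy check: the smallest eigenvalue of diag(1, ..., 10) is 1
A = np.diag(np.arange(1.0, 11.0))
lam, vec = rayleigh_gradient_descent(A)
print(lam)   # close to 1.0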
This paper provides an a priori error analysis of a localized orthogonal decomposition method (LOD) for the numerical stochastic homogenization of a model random diffusion problem. If the uniformly elliptic and bounded random coefficient field of the model problem is stationary and satisfies a quantitative decorrelation assumption in the form of a spectral gap inequality, then the expected $L^2$ error of the method can be estimated, up to logarithmic factors, by $H+(\varepsilon/H)^{d/2}$; $\varepsilon$ being the small correlation length of the random coefficient and $H$ the width of the coarse finite element mesh that determines the spatial resolution. The proof bridges recent results of numerical homogenization and quantitative stochastic homogenization.
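
Written out, the setting is a linear random diffusion problem with a stationary, uniformly elliptic coefficient $a(x,\omega)$, and the result is an expected coarse-mesh error bound (the notation $u_H$ for the LOD approximation is ours):

$$ -\nabla\cdot\big(a(x,\omega)\,\nabla u(x,\omega)\big) = f(x), \qquad \mathbb{E}\big[\|u - u_H\|_{L^2}\big] \;\lesssim\; H + \Big(\tfrac{\varepsilon}{H}\Big)^{d/2} \quad \text{up to logarithmic factors}, $$

where $\varepsilon$ is the correlation length of $a$ and $H$ the coarse finite element mesh width.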
Estimates of the generalization error are proved for a residual neural network with $L$ random Fourier features layers $\bar z_{\ell+1}=\bar z_\ell + \mathrm{Re}\sum_{k=1}^K\bar b_{\ell k}e^{\mathrm{i}\omega_{\ell k}\bar z_\ell}+ \mathrm{Re}\sum_{k=1}^K\bar c_{\ell k}e^{\mathrm{i}\omega'_{\ell k}\cdot x}$. An optimal distribution for the frequencies $(\omega_{\ell k},\omega'_{\ell k})$ of the random Fourier features $e^{\mathrm{i}\omega_{\ell k}\bar z_\ell}$ and $e^{\mathrm{i}\omega'_{\ell k}\cdot x}$ is derived. This derivation is based on the corresponding generalization error for the approximation of the function values $f(x)$. The generalization error turns out to be smaller than the estimate $\|\hat f\|^2_{L^1(\mathbb{R}^d)}/(KL)$ of the generalization error for random Fourier features with one hidden layer and the same total number of nodes $KL$, in the case the $L^\infty$-norm of $f$ is much less than the $L^1$-norm of its Fourier transform $\hat f$. This understanding of an optimal distribution for random features is used to construct a new training method for a deep residual network. Promising performance of the proposed new algorithm is demonstrated in computational experiments.
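
To make the layer formula concrete, here is a minimal forward pass of such a residual random Fourier features network; the scalar shape of the state $\bar z_\ell$, the initialization, and all variable names are illustrative assumptions, not the authors' implementation.

import numpy as np

def residual_fourier_forward(x, omega_z, omega_x, b, c, z0=0.0):
    """Illustrative forward pass of an L-layer random Fourier features residual network:
      z_{l+1} = z_l + Re sum_k b[l,k] * exp(i * omega_z[l,k] * z_l)
                    + Re sum_k c[l,k] * exp(i * omega_x[l,k] . x)
    x       : (N, d) batch of inputs
    omega_z : (L, K) real frequencies applied to the scalar state z_l
    omega_x : (L, K, d) real frequencies applied to the input x
    b, c    : (L, K) complex amplitudes
    Returns the (N,) final state, interpreted as the network output."""
    N = x.shape[0]
    L, K = b.shape
    z = np.full(N, z0, dtype=float)
    for l in range(L):
        # features of the current state: exp(i * omega_z[l,k] * z_n), shape (N, K)
        feat_z = np.exp(1j * z[:, None] * omega_z[l][None, :])
        # features of the input: exp(i * omega_x[l,k] . x_n), shape (N, K)
        feat_x = np.exp(1j * (x @ omega_x[l].T))
        z = z + np.real(feat_z @ b[l]) + np.real(feat_x @ c[l])
    return z

# toy usage with random frequencies and small random amplitudes
rng = np.random.default_rng(0)
L, K, d, N = 3, 8, 2, 5
x = rng.standard_normal((N, d))
omega_z = rng.standard_normal((L, K))
omega_x = rng.standard_normal((L, K, d))
b = 0.01 * (rng.standard_normal((L, K)) + 1j * rng.standard_normal((L, K)))
c = 0.01 * (rng.standard_normal((L, K)) + 1j * rng.standard_normal((L, K)))
print(residual_fourier_forward(x, omega_z, omega_x, b, c))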