
Adaptive Two-Layer ReLU Neural Network: II. Ritz Approximation to Elliptic PDEs

Added by Min Liu
Publication date: 2021
Language: English





In this paper, we study the adaptive neuron enhancement (ANE) method for solving self-adjoint second-order elliptic partial differential equations (PDEs). The ANE method is a self-adaptive method that generates a two-layer spline NN and a numerical integration mesh such that the approximation accuracy is within the prescribed tolerance. Moreover, the ANE method provides a natural process for obtaining a good initialization, which is crucial for training the underlying nonlinear optimization problem. The PDE is discretized by the Ritz method using a two-layer spline neural network based on either the primal or the dual formulation, which minimizes the respective energy or complementary functional. Essential boundary conditions are imposed weakly through the functionals with proper norms. It is proved that the Ritz approximation is the best approximation in the energy norm; moreover, the effect of numerical integration on the Ritz approximation is analyzed as well. Two estimators for the adaptive neuron enhancement method are introduced: one is the so-called recovery estimator and the other is the least-squares estimator. Finally, numerical results for diffusion problems with either corner or intersecting-interface singularities are presented.
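To make the setup concrete, here is a minimal sketch (not the authors' code) of Ritz energy minimization with a two-layer ReLU network: the energy functional J(v) = int_Omega (|grad v|^2/2 - f v) dx is minimized over the network parameters on a fixed midpoint-rule integration mesh, with the essential boundary condition imposed weakly through a penalty term. The network width, penalty weight, mesh size, and optimizer are all illustrative assumptions, and the plain penalty is a simplification of the properly normed boundary functionals used in the paper.

```python
# Minimal sketch (not the authors' code) of Ritz minimization for
#   -Delta u = f in Omega = (0,1)^2,  u = 0 on the boundary,
# with a two-layer ReLU network and a fixed midpoint integration mesh.
# The width, penalty weight gamma, mesh size, and optimizer are assumptions.
import torch

torch.manual_seed(0)
net = torch.nn.Sequential(                 # two-layer (one hidden layer) ReLU NN
    torch.nn.Linear(2, 32), torch.nn.ReLU(), torch.nn.Linear(32, 1))

n = 32                                     # midpoint rule on an n x n mesh
c = (torch.arange(n) + 0.5) / n
X, Y = torch.meshgrid(c, c, indexing="ij")
xq = torch.stack([X.reshape(-1), Y.reshape(-1)], dim=1).requires_grad_(True)

t = c.unsqueeze(1)                         # boundary quadrature points
z, o = torch.zeros_like(t), torch.ones_like(t)
xb = torch.cat([torch.hstack([t, z]), torch.hstack([t, o]),
                torch.hstack([z, t]), torch.hstack([o, t])])

f = lambda x: (2 * torch.pi**2 * torch.sin(torch.pi * x[:, :1])
               * torch.sin(torch.pi * x[:, 1:]))   # manufactured right-hand side
gamma = 100.0                              # boundary penalty weight (assumption)

opt = torch.optim.Adam(net.parameters(), lr=1e-2)
for _ in range(2000):
    opt.zero_grad()
    u = net(xq)
    grad_u = torch.autograd.grad(u.sum(), xq, create_graph=True)[0]
    # discrete energy functional J(u) = int (|grad u|^2/2 - f u) dx + penalty
    energy = (0.5 * (grad_u**2).sum(1, keepdim=True) - f(xq) * u).mean()
    penalty = 0.5 * gamma * (net(xb)**2).mean()
    (energy + penalty).backward()
    opt.step()
```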



Related research

In this paper, we introduce the adaptive neuron enhancement (ANE) method for the best least-squares approximation using two-layer ReLU neural networks (NNs). For a given function f(x), the ANE method generates a two-layer ReLU NN and a numerical integration mesh such that the approximation accuracy is within the prescribed tolerance. The ANE method provides a natural process for obtaining a good initialization, which is crucial for training nonlinear optimization problems. Numerical results of the ANE method are presented for functions of two variables exhibiting either intersecting-interface singularities or sharp interior layers.
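As a concrete toy version of this setup, the sketch below (an illustration, not the authors' code) fits a two-layer ReLU network to a one-dimensional target with a sharp interior layer by minimizing the discrete least-squares loss on a fixed integration mesh. The target, width, mesh, and optimizer settings are assumptions; the adaptive enhancement loop itself is sketched after the next abstract.

```python
# Toy version of the discrete best least-squares problem (illustrative only):
# fit a two-layer ReLU NN to a 1-D target with a sharp interior layer.
import torch

torch.manual_seed(0)
f = lambda x: torch.tanh(50 * (x - 0.5))           # stand-in target
xq = torch.linspace(0, 1, 256).unsqueeze(1)        # numerical integration points

net = torch.nn.Sequential(
    torch.nn.Linear(1, 16), torch.nn.ReLU(), torch.nn.Linear(16, 1))

opt = torch.optim.Adam(net.parameters(), lr=1e-2)
for _ in range(3000):
    opt.zero_grad()
    loss = ((net(xq) - f(xq))**2).mean()           # discrete L2 least-squares loss
    loss.backward()
    opt.step()
```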
Designing an optimal deep neural network for a given task is important and challenging in many machine learning applications. To address this issue, we introduce a self-adaptive algorithm: the adaptive network enhancement (ANE) method, written as loops of the form train, estimate, and enhance. Starting with a small two-layer neural network (NN), the step train is to solve the optimization problem at the current NN; the step estimate is to compute a posteriori estimators/indicators using the solution at the current NN; the step enhance is to add new neurons to the current NN. Novel network enhancement strategies based on the computed estimators/indicators are developed in this paper to determine how many new neurons to add and when to add a new layer to the current NN. The ANE method provides a natural process for obtaining a good initialization in training the current NN; in addition, we introduce an advanced procedure for initializing newly added neurons for a better approximation. We demonstrate that the ANE method can automatically design a nearly minimal NN for learning functions exhibiting sharp transitional layers as well as discontinuous solutions of hyperbolic partial differential equations.
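The train-estimate-enhance loop can be sketched as follows. This is an illustrative simplification: the global discrete L2 error stands in for the paper's a posteriori estimator, and a zero-initialized warm start stands in for its neuron-initialization procedure.

```python
# Illustrative train -> estimate -> enhance loop (simplified stand-ins for the
# paper's a posteriori estimator and neuron-initialization procedure).
import torch

torch.manual_seed(0)
f = lambda x: torch.tanh(50 * (x - 0.5))
xq = torch.linspace(0, 1, 512).unsqueeze(1)

def train(net, steps=2000):
    """Train the current NN; returns the final discrete L2 loss."""
    opt = torch.optim.Adam(net.parameters(), lr=1e-2)
    for _ in range(steps):
        opt.zero_grad()
        loss = ((net(xq) - f(xq))**2).mean()
        loss.backward()
        opt.step()
    return loss.item()

def enhance(net, k=4):
    """Widen the hidden layer by k neurons, warm-starting from trained weights."""
    old1, old2 = net[0], net[2]
    m = old1.out_features
    new1, new2 = torch.nn.Linear(1, m + k), torch.nn.Linear(m + k, 1)
    with torch.no_grad():
        new1.weight[:m] = old1.weight; new1.bias[:m] = old1.bias
        new2.weight[:, :m] = old2.weight; new2.bias.copy_(old2.bias)
        new2.weight[:, m:] = 0.0           # new neurons start inactive
    return torch.nn.Sequential(new1, torch.nn.ReLU(), new2)

net = torch.nn.Sequential(
    torch.nn.Linear(1, 4), torch.nn.ReLU(), torch.nn.Linear(4, 1))
tol = 1e-4
for _ in range(6):                          # ANE loop
    err = train(net)                        # train at the current width
    if err < tol:                           # estimate: error vs. tolerance
        break
    net = enhance(net)                      # enhance: add neurons
```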
In recent work it has been established that deep neural networks are capable of approximating solutions to a large class of parabolic partial differential equations without incurring the curse of dimension. However, all this work has been restricted to problems formulated on the whole Euclidean domain. On the other hand, most problems in engineering and the sciences are formulated on finite domains and subjected to boundary conditions. The present paper considers an important such model problem, namely the Poisson equation on a domain $D \subset \mathbb{R}^d$ subject to Dirichlet boundary conditions. It is shown that deep neural networks are capable of representing solutions of that problem without incurring the curse of dimension. The proofs are based on a probabilistic representation of the solution to the Poisson equation as well as a suitable sampling method.
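For intuition, the following hedged sketch implements the kind of probabilistic representation such proofs rest on: for -(1/2)*Laplace(u) = f in D with u = g on the boundary, one has u(x) = E[g(B_tau)] + E[integral_0^tau f(B_s) ds], where B is a Brownian motion started at x and tau its exit time from D. The unit-ball domain, Euler-Maruyama step size, and sample count are illustrative choices, not the paper's construction.

```python
# Hedged sketch of the probabilistic representation behind such proofs:
# for -(1/2)*Laplace(u) = f in D, u = g on the boundary,
#   u(x) = E[g(B_tau)] + E[ integral_0^tau f(B_s) ds ],
# with B a Brownian motion from x and tau its exit time from D.
# The unit-ball domain, step size, and sample count are illustrative choices.
import numpy as np

rng = np.random.default_rng(0)

def mc_poisson(x, f, g, n_paths=2000, dt=1e-3):
    d, est = x.size, 0.0
    for _ in range(n_paths):
        b, run = x.copy(), 0.0
        while np.linalg.norm(b) < 1.0:            # D = unit ball
            run += f(b) * dt                      # accumulate integral of f
            b += np.sqrt(dt) * rng.standard_normal(d)
        est += g(b / np.linalg.norm(b)) + run     # project exit point onto dD
    return est / n_paths

# -(1/2)*Laplace(u) = 1 in the unit ball, u = 0 on dD  =>  u(x) = (1 - |x|^2)/d
d = 3
print(mc_poisson(np.zeros(d), f=lambda b: 1.0, g=lambda b: 0.0), "vs exact", 1 / d)
```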
This paper studies numerical methods for the approximation of elliptic PDEs with lognormal coefficients of the form $-\mathrm{div}(a\nabla u)=f$, where $a=\exp(b)$ and $b$ is a Gaussian random field. The approximant of the solution $u$ is an $n$-term polynomial expansion in the scalar Gaussian random variables that parametrize $b$. We present a general convergence analysis of weighted least-squares approximants for smooth and arbitrarily rough random fields, using a suitable random design, for which we prove optimality in the following sense: their convergence rate matches exactly or closely the rate that has been established in \cite{BCDM} for best $n$-term approximation by Hermite polynomials, under the same minimal assumptions on the Gaussian random field. This is in contrast with the current state-of-the-art results for the stochastic Galerkin method, which suffers from a lack of coercivity due to the lognormal nature of the diffusion field. Numerical tests with $b$ as the Brownian bridge confirm our theoretical findings.
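A one-variable toy version of such a fit is sketched below: an n-term expansion in probabilists' Hermite polynomials is computed by least squares from random Gaussian samples. The paper works with many Gaussian variables, an optimally weighted random design, and the actual PDE solution map; the target function, plain Monte Carlo sampling, and degree here are illustrative assumptions only.

```python
# One-variable toy of an n-term Hermite least-squares fit (illustrative only;
# the paper uses many Gaussian variables, an optimally weighted random design,
# and the actual PDE solution map).
import numpy as np
from numpy.polynomial.hermite_e import hermevander

rng = np.random.default_rng(0)
target = lambda y: np.exp(0.5 * y)     # stand-in for y -> u(a), a = exp(b(y))

n, m = 8, 400                          # polynomial degree, sample count
y = rng.standard_normal(m)             # Monte Carlo draws of the Gaussian variable
V = hermevander(y, n)                  # probabilists' Hermite design matrix
coef, *_ = np.linalg.lstsq(V, target(y), rcond=None)

ytest = rng.standard_normal(10_000)
err = np.sqrt(np.mean((hermevander(ytest, n) @ coef - target(ytest))**2))
print("approximate L2(Gaussian) error:", err)
```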
Yifan Chen, Thomas Y. Hou (2020)
There is an intimate connection between numerical upscaling of multiscale PDEs and scattered data approximation of heterogeneous functions: the coarse variables selected for deriving an upscaled equation (in the former) correspond to the sampled information used for approximation (in the latter). As such, both problems can be thought of as recovering a target function based on some coarse data that are either artificially chosen by an upscaling algorithm or determined by some physical measurement process. The purpose of this paper is to study, under such a setup and for a specific elliptic problem, how the lengthscale of the coarse data, which we refer to as the subsampled lengthscale, influences the accuracy of recovery given limited computational budgets. Our analysis and experiments show that reducing the subsampled lengthscale may improve the accuracy, implying a guiding criterion for coarse-graining or data acquisition in this computationally constrained scenario and yielding direct insights for the implementation of the Gamblets method in the numerical homogenization literature. Moreover, reducing the lengthscale to zero may lead to a blow-up of the approximation error if the target function does not have enough regularity, suggesting the need for a stronger prior assumption on the target function to be approximated. We introduce a singular weight function to deal with this, both theoretically and numerically. This work sheds light on the interplay among the lengthscale of the coarse data, the computational costs, the regularity of the target function, and the accuracy of approximations and numerical simulations.
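The following small experiment (an illustration under simplified assumptions, not the paper's Gamblets setup) mimics one aspect of this phenomenon: a target is recovered from point samples at spacing h, and the recovery error decays with h at a rate that depends on the target's regularity, so a rough target benefits far less from a smaller subsampled lengthscale than a smooth one.

```python
# Illustrative experiment (not the paper's Gamblets setup): recover targets of
# different regularity from samples at spacing h and watch the error vs. h.
import numpy as np

rng = np.random.default_rng(0)
xfine = np.linspace(0, 1, 4097)
smooth = np.sin(2 * np.pi * xfine)                   # smooth target
rough = np.cumsum(rng.standard_normal(xfine.size))   # Brownian-like rough target
rough /= np.abs(rough).max()

for h in [1 / 8, 1 / 16, 1 / 32, 1 / 64]:
    xs = np.arange(0.0, 1.0 + 1e-12, h)              # coarse data locations
    for name, fvals in [("smooth", smooth), ("rough", rough)]:
        samples = np.interp(xs, xfine, fvals)        # coarse samples
        rec = np.interp(xfine, xs, samples)          # piecewise-linear recovery
        err = np.sqrt(np.mean((rec - fvals)**2))
        print(f"h = {h:.4f}  {name:6s}  L2 error = {err:.2e}")
```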
