
Deep neural network approximation for high-dimensional elliptic PDEs with boundary conditions

Added by Lukas Herrmann
Publication date: 2020
Language: English





In recent work it has been established that deep neural networks are capable of approximating solutions to a large class of parabolic partial differential equations without incurring the curse of dimension. However, all this work has been restricted to problems formulated on the whole Euclidean domain. On the other hand, most problems in engineering and the sciences are formulated on finite domains and subjected to boundary conditions. The present paper considers an important such model problem, namely the Poisson equation on a domain $D \subset \mathbb{R}^d$ subject to Dirichlet boundary conditions. It is shown that deep neural networks are capable of representing solutions of that problem without incurring the curse of dimension. The proofs are based on a probabilistic representation of the solution to the Poisson equation as well as a suitable sampling method.
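
The standard probabilistic representation of this kind writes the solution value $u(x)$ as an expectation over Brownian paths started at $x$ and stopped on the boundary $\partial D$, which can be sampled without any mesh. For intuition only, below is a minimal walk-on-spheres sketch for the homogeneous case ($f = 0$, so $u$ is harmonic and $u(x) = \mathbb{E}[g(B_\tau)]$); a nonzero source term contributes an additional volume term along the walk, omitted here. Whether this is the exact sampler used in the paper is not stated in the abstract, and all names below are illustrative.

```python
import numpy as np

def walk_on_spheres(x0, dist, g, n_walks=5000, eps=1e-4, seed=0):
    """Monte Carlo estimate of u(x0) for Laplace's equation in D with
    Dirichlet data g. dist(x) must return the distance from x to the
    boundary of D; a walk stops once it is within eps of the boundary."""
    rng = np.random.default_rng(seed)
    d = len(x0)
    total = 0.0
    for _ in range(n_walks):
        x = np.array(x0, dtype=float)
        r = dist(x)
        while r > eps:
            # Brownian motion exits the ball B(x, r) at a point that is
            # uniformly distributed on its sphere, so jump there directly.
            v = rng.standard_normal(d)
            x += r * v / np.linalg.norm(v)
            r = dist(x)
        total += g(x)
    return total / n_walks

# Sanity check: u(x) = x_1 is harmonic on the unit ball in R^10,
# so the estimate at x0 = (0.3, 0, ..., 0) should be close to 0.3.
d = 10
x0 = np.zeros(d); x0[0] = 0.3
print(walk_on_spheres(x0, dist=lambda x: 1.0 - np.linalg.norm(x), g=lambda x: x[0]))
```

Note that the cost per walk grows only mildly with the dimension $d$; it is this mesh-free feature that probabilistic arguments of this type exploit.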



Related research

The approximation of solutions to second order Hamilton--Jacobi--Bellman (HJB) equations by deep neural networks is investigated. It is shown that, for HJB equations that arise in the context of the optimal control of certain Markov processes, the solution can be approximated by deep neural networks without incurring the curse of dimension. The dynamics are assumed to depend affinely on the controls, and the cost to depend quadratically on them. The admissible controls take values in a bounded set.
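The structural assumptions in this abstract (dynamics affine in the control, cost quadratic in the control, controls in a bounded set $U$) pin down the generic shape of the equation. In one common notation, assumed here for illustration rather than taken from the paper, the value function $V$ of such a control problem solves

```latex
% dX_t = (b(X_t) + \sigma(X_t) u_t) dt + \Sigma(X_t) dW_t,  u_t \in U bounded,
% running cost c(x) + u^T R(x) u:
\partial_t V
  + \tfrac{1}{2}\operatorname{Tr}\!\big(\Sigma\Sigma^{\top}(x)\,\nabla^2 V\big)
  + \inf_{u \in U}\Big\{\big(b(x) + \sigma(x)\,u\big)^{\top}\nabla V
      + c(x) + u^{\top} R(x)\,u\Big\} = 0,
```

together with a terminal condition at the horizon; boundedness of $U$ keeps the infimum well defined even where the quadratic control cost degenerates.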
Designing an optimal deep neural network for a given task is important and challenging in many machine learning applications. To address this issue, we introduce a self-adaptive algorithm: the adaptive network enhancement (ANE) method, written as loops of the form train, estimate, and enhance. Starting with a small two-layer neural network (NN), the step train is to solve the optimization problem at the current NN; the step estimate is to compute a posteriori estimators/indicators using the solution at the current NN; the step enhance is to add new neurons to the current NN. Novel network enhancement strategies based on the computed estimators/indicators are developed in this paper to determine how many new neurons and when a new layer should be added to the current NN. The ANE method provides a natural process for obtaining a good initialization when training the current NN; in addition, we introduce an advanced procedure for initializing newly added neurons for a better approximation. We demonstrate that the ANE method can automatically design a nearly minimal NN for learning functions exhibiting sharp transitional layers as well as discontinuous solutions of hyperbolic partial differential equations.
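
A schematic rendering of the train/estimate/enhance loop may help fix ideas. The PyTorch sketch below uses the training loss itself as a crude a posteriori indicator and widens a single hidden layer by a fixed increment; the paper's estimators, enhancement strategies, and neuron-initialization procedure are considerably more refined, and every name and hyperparameter here is an assumption.

```python
import torch
import torch.nn as nn

def ane_sketch(loss_fn, x_train, width0=8, grow=8, tol=1e-4, rounds=10):
    """Schematic train -> estimate -> enhance loop in the spirit of ANE.
    loss_fn(model, x) returns the training objective; its value doubles
    here as a crude error indicator (the paper's estimators are finer)."""
    d = x_train.shape[1]
    model = nn.Sequential(nn.Linear(d, width0), nn.ReLU(), nn.Linear(width0, 1))
    for _ in range(rounds):
        # train: solve the optimization problem at the current NN
        opt = torch.optim.Adam(model.parameters(), lr=1e-3)
        for _ in range(2000):
            opt.zero_grad()
            loss = loss_fn(model, x_train)
            loss.backward()
            opt.step()
        # estimate: stop once the indicator is below tolerance
        if loss.item() < tol:
            break
        # enhance: widen the hidden layer, keeping the trained weights as
        # the initialization (new neurons get the default random init)
        old, w_old = model, model[0].out_features
        model = nn.Sequential(nn.Linear(d, w_old + grow), nn.ReLU(),
                              nn.Linear(w_old + grow, 1))
        with torch.no_grad():
            model[0].weight[:w_old] = old[0].weight
            model[0].bias[:w_old] = old[0].bias
            model[2].weight[:, :w_old] = old[2].weight
            model[2].bias.copy_(old[2].bias)
    return model
```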
Min Liu, Zhiqiang Cai (2021)
In this paper, we study the adaptive network enhancement (ANE) method for solving self-adjoint second-order elliptic partial differential equations (PDEs). The ANE method is a self-adaptive method that generates a two-layer spline NN and a numerical integration mesh such that the approximation accuracy is within the prescribed tolerance. Moreover, the ANE method provides a natural process for obtaining a good initialization, which is crucial for training, itself a nonlinear optimization problem. The underlying PDE is discretized by the Ritz method using a two-layer spline neural network based on either the primal or dual formulation, minimizing the respective energy or complementary functional. Essential boundary conditions are imposed weakly through the functionals with proper norms. It is proved that the Ritz approximation is the best approximation in the energy norm; moreover, the effect of numerical integration on the Ritz approximation is analyzed as well. Two estimators for the adaptive network enhancement method are introduced: one is the so-called recovery estimator, and the other is the least-squares estimator. Finally, numerical results for diffusion problems with either corner or intersecting-interface singularities are presented.
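To make the Ritz discretization concrete, here is a minimal one-dimensional sketch: the energy functional of $-u'' = f$ on $(0,1)$ is minimized over network parameters, with the essential boundary condition $u(0) = u(1) = 0$ imposed weakly through a penalty term. The penalty form, its weight, and the midpoint quadrature are stand-ins chosen for brevity; the paper works with proper norms in the functionals and analyzes the quadrature effect rigorously.

```python
import math
import torch

def ritz_loss(model, f, n_quad=256, beta=100.0):
    """Discrete Ritz energy for -u'' = f on (0,1) with u(0) = u(1) = 0:
    integral of 0.5*u'^2 - f*u (midpoint rule) plus a weak boundary penalty."""
    x = (torch.arange(n_quad, dtype=torch.float32) + 0.5) / n_quad
    x = x.unsqueeze(1).requires_grad_(True)
    u = model(x)
    du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    energy = (0.5 * du**2 - f(x) * u).mean()       # interval has length 1
    xb = torch.tensor([[0.0], [1.0]])
    return energy + beta * (model(xb) ** 2).sum()  # penalized essential BCs

# e.g. f(x) = pi^2 sin(pi x), whose exact minimizer is u(x) = sin(pi x):
# loss = ritz_loss(model, lambda x: math.pi**2 * torch.sin(math.pi * x))
```

A loss of this kind could serve directly as the `loss_fn` in the enhancement loop sketched above.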
In this paper, we extend the class of kernel methods known as diffusion maps (DM), and its local kernel variants, to approximate second-order differential operators defined on smooth manifolds with boundaries that naturally arise in elliptic PDE models. To achieve this goal, we introduce the Ghost Point Diffusion Maps (GPDM) estimator on an extended manifold, identified by the set of point clouds on the unknown original manifold together with a set of ghost points, specified along the estimated tangential direction at the sampled points at the boundary. The resulting GPDM estimator restricts the standard DM matrix to a set of extrapolation equations that estimate the function values at the ghost points. This adjustment is analogous to the classical ghost point method in finite-difference schemes for solving PDEs on a flat domain. As opposed to the classical DM, which diverges near the boundary, the proposed GPDM estimator converges pointwise even near the boundary. Applying the consistent GPDM estimator to solve well-posed elliptic PDEs with classical boundary conditions (Dirichlet, Neumann, and Robin), we establish the convergence of the approximate solution under appropriate smoothness assumptions. We numerically validate the proposed mesh-free PDE solver on various problems defined on simple submanifolds embedded in Euclidean spaces as well as on an unknown manifold. Numerically, we also find that the GPDM is more accurate than DM in solving elliptic eigenvalue problems on bounded smooth manifolds.
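The finite-difference analogy invoked in this abstract is worth spelling out, since it is the heart of the construction: a fictitious node outside the domain is expressed through the boundary condition and eliminated from the stencil, and GPDM performs the analogous extrapolation at ghost points on a manifold. A minimal flat-domain example follows (1-D Poisson problem with a Neumann condition; all names are illustrative, not from the paper).

```python
import numpy as np

def poisson_1d_ghost(f, a, b, n=100):
    """-u'' = f on [0,1] with u'(0) = a (Neumann) and u(1) = b (Dirichlet).
    A centered difference of the Neumann condition gives the ghost value
    u_{-1} = u_1 - 2*h*a, which is eliminated from the stencil at x = 0."""
    h = 1.0 / n
    x = np.linspace(0.0, 1.0, n + 1)
    A = np.zeros((n, n))                 # unknowns u_0 .. u_{n-1}; u_n = b
    rhs = h**2 * f(x[:n])
    A[0, 0], A[0, 1] = 2.0, -2.0         # ghost point eliminated here
    rhs[0] -= 2.0 * h * a
    for i in range(1, n):                # standard three-point stencil
        A[i, i - 1], A[i, i] = -1.0, 2.0
        if i + 1 < n:
            A[i, i + 1] = -1.0
        else:
            rhs[i] += b                  # known Dirichlet value to the rhs
    return x, np.append(np.linalg.solve(A, rhs), b)

# Sanity check: f = 0, a = 1, b = 1 reproduces u(x) = x exactly.
x, u = poisson_1d_ghost(lambda s: np.zeros_like(s), a=1.0, b=1.0)
```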
Various phenomena in biology, physics, and engineering are modeled by differential equations. These differential equations, including partial differential equations and ordinary differential equations, can be converted to and represented as integral equations. In particular, Volterra-Fredholm-Hammerstein integral equations are the main type of these integral equations, and researchers are interested in investigating and solving them. In this paper, we propose the Legendre Deep Neural Network (LDNN) for solving nonlinear Volterra-Fredholm-Hammerstein integral equations (VFHIEs). LDNN uses Legendre orthogonal polynomials as activation functions of the deep structure. We present how LDNN can be used to solve nonlinear VFHIEs, and we show that using the Gaussian quadrature collocation method in combination with LDNN yields a novel numerical solution for nonlinear VFHIEs. Several examples are given to verify the performance and accuracy of LDNN.
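The coupling of Gaussian quadrature with collocation admits a compact sketch. Writing a Hammerstein-type equation as u(x) = g(x) + the integral of K(x,t) psi(t, u(t)) over [-1,1] (this normalized form, and all names below, are assumptions for illustration), a Gauss-Legendre rule turns the integral operator into a matrix, and a network such as LDNN would be trained to drive the collocation residual at the nodes toward zero.

```python
import numpy as np

# 32-point Gauss-Legendre rule on [-1, 1]
nodes, weights = np.polynomial.legendre.leggauss(32)

def collocation_residual(u_vals, g, K, psi):
    """Residual of u(x) = g(x) + integral of K(x,t)*psi(t,u(t)) dt over
    [-1,1], evaluated at the quadrature nodes. u_vals are the candidate
    solution values at the nodes (in LDNN, the network outputs there)."""
    Kmat = K(nodes[:, None], nodes[None, :])   # Kmat[i, j] = K(x_i, t_j)
    return u_vals - g(nodes) - Kmat @ (weights * psi(nodes, u_vals))
```

Training would then minimize, for instance, the squared norm of this residual over the network parameters.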
