
Approximation error analysis of some deep backward schemes for nonlinear PDEs

Added by Maximilien Germain
Publication date: 2020
Language: English





Recently proposed numerical algorithms for solving high-dimensional nonlinear partial differential equations (PDEs) based on neural networks have shown remarkable performance. We review some of them and study their convergence properties. These methods rely on the probabilistic representation of PDEs by backward stochastic differential equations (BSDEs) and their iterated time discretization. Our proposed algorithm, called the deep backward multistep scheme (MDBDP), is a machine learning version of the LSMDP scheme of Gobet, Turkedjiev (Math. Comp. 85, 2016). It estimates simultaneously, by backward induction, the solution and its gradient by neural networks, through sequential minimizations of suitable quadratic loss functions performed by stochastic gradient descent. Our main theoretical contribution is an approximation error analysis of the MDBDP scheme, as well as of the deep splitting (DS) scheme for semilinear PDEs designed in Beck, Becker, Cheridito, Jentzen, Neufeld (2019). We also supplement the error analysis of the DBDP scheme of Huré, Pham, Warin (Math. Comp. 89, 2020). This notably yields a convergence rate in terms of the number of neurons for a class of deep Lipschitz continuous GroupSort neural networks, when the PDE is linear in the gradient of the solution for the MDBDP scheme, and in the semilinear case for the DBDP scheme. We illustrate our results with numerical tests, which we compare with other machine learning algorithms from the literature.
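To make the backward step concrete, here is a minimal sketch of one MDBDP-style minimization in PyTorch. This is not the authors' implementation: the networks u_net and z_net (current step), the frozen later-step networks u_hat and z_hat, the driver f, the terminal condition g, and the simulated paths X, dW are all placeholder names assumed to be supplied by the caller.

    import torch

    def mdbdp_step(u_net, z_net, u_hat, z_hat, f, g, X, dW, dt, i, n_iter=2000):
        # X: (N+1, batch, d) forward paths; dW: (N, batch, d) Brownian increments
        N = dW.shape[0]
        with torch.no_grad():  # the target only involves frozen later-step nets
            target = g(X[N])
            for j in range(i + 1, N):
                uj, zj = u_hat[j](X[j]), z_hat[j](X[j])
                target = target + f(X[j], uj, zj) * dt \
                                - (zj * dW[j]).sum(-1, keepdim=True)
        opt = torch.optim.Adam(list(u_net.parameters()) + list(z_net.parameters()))
        for _ in range(n_iter):
            u, z = u_net(X[i]), z_net(X[i])
            residual = target + f(X[i], u, z) * dt - u \
                              - (z * dW[i]).sum(-1, keepdim=True)
            opt.zero_grad()
            residual.pow(2).mean().backward()  # quadratic loss, one Adam step
            opt.step()
        return u_net, z_net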



Related research

Côme Huré (2019)
We propose new machine learning schemes for solving high-dimensional nonlinear partial differential equations (PDEs). Relying on the classical backward stochastic differential equation (BSDE) representation of PDEs, our algorithms estimate simultaneously the solution and its gradient by deep neural networks. These approximations are performed at each time step by minimizing loss functions defined recursively by backward induction. The methodology is extended to variational inequalities arising in optimal stopping problems. We analyze the convergence of the deep learning schemes and provide error estimates in terms of the universal approximation capability of neural networks. Numerical results show that our algorithms give very good results up to dimension 50 (and likely above), for both PDE and variational inequality problems. For PDEs, our results are very similar to those obtained by the recent method of \cite{weinan2017deep} when the latter converges to the correct solution or does not diverge. Numerical tests indicate that the proposed methods do not get stuck in poor local minima, as can happen with the algorithm designed in \cite{weinan2017deep}, and no divergence is experienced. The only limitation seems to be the inability of the considered deep neural networks to represent a solution with too complex a structure in high dimension.
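For the optimal-stopping extension mentioned above, one common way to realize the variational inequality numerically is to reflect the trained estimate on the obstacle at each backward step. A hypothetical one-function sketch (u_net and g are placeholders for the fitted network and the payoff/obstacle):

    import torch

    # Sketch of the optimal-stopping (variational inequality) variant: after
    # fitting u_net at step i, the estimate is reflected on the obstacle g
    # (e.g. the payoff of an American-style option) before being used as the
    # target at step i-1.
    def reflect_at_obstacle(u_net, g):
        return lambda x: torch.maximum(u_net(x), g(x))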
Huyên Pham (2019)
We propose a numerical method for solving high-dimensional fully nonlinear partial differential equations (PDEs). Our algorithm estimates simultaneously, by backward time induction, the solution and its gradient by multi-layer neural networks, while the Hessian is approximated by automatic differentiation of the gradient computed at the previous step. This methodology extends to the fully nonlinear case the approach recently proposed in \cite{HPW19} for semilinear PDEs. Numerical tests illustrate the performance and accuracy of our method on several high-dimensional examples with nonlinearity in the Hessian term, including a linear-quadratic control problem with control on the diffusion coefficient, the Monge-Ampère equation, and a Hamilton-Jacobi-Bellman equation in portfolio optimization.
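The Hessian-by-automatic-differentiation step can be sketched in a few lines (illustrative only; z_net stands for the gradient network trained at the previous time step, mapping R^d to R^d):

    import torch

    # Approximate the Hessian D^2 u as the Jacobian of the gradient network,
    # obtained by automatic differentiation.
    def hessian_from_gradient_net(z_net, x):
        # x: tensor of shape (d,); returns the (d, d) Jacobian of z_net at x
        return torch.autograd.functional.jacobian(z_net, x)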
Many physical systems are formulated on domains which are relatively large in some directions but relatively thin in others. We expect such systems to have emergent structures that vary slowly over the large dimensions. Common mathematical approximations for determining the emergent dynamics often rely on self-consistency arguments or on limits as the aspect ratio of the 'large' and 'thin' dimensions becomes nonphysically infinite. Here we extend to nonlinear dynamics a new approach [IMA J. Appl. Maths, DOI: 10.1093/imamat/hxx021] which analyses the dynamics at each cross-section of the domain via a rigorous multivariate Taylor series. Centre manifold theory then supports the global modelling of the system's emergent dynamics in the large but finite domain. Interactions between the cross-section coupling and both the fast and slow dynamics determine quantitative error bounds for the nonlinear modelling. We illustrate the methodology by deriving the large-scale dynamics of a thin liquid film subject to a Coriolis force induced by a rotating substrate. The approach developed here quantifies the accuracy of known approximations, extends such approximations to mixed-order modelling, and may open previously intractable modelling issues to new tools and insights.
In this work, we study the numerical approximation of a class of singular fully coupled forward-backward stochastic differential equations (FBSDEs). These equations have a degenerate forward component and a non-smooth terminal condition. They are used, for example, in the modeling of carbon markets [9], and are linked to a scalar conservation law perturbed by a diffusion. Classical FBSDE methods fail to capture the correct entropy solution to the associated quasi-linear PDE. We introduce a splitting approach that circumvents this difficulty by treating differently the numerical approximation of the diffusion part and of the nonlinear transport part. Under the structural condition guaranteeing the well-posedness of the singular FBSDEs [8], we show that the splitting method converges with rate $1/2$. We implement the splitting scheme by combining nonlinear regression based on deep neural networks with conservative finite difference schemes. The numerical tests show very good results in a possibly high-dimensional framework.
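A deterministic toy version of the splitting idea, in one dimension, with an explicit finite-difference diffusion step standing in for the paper's neural-network regression (all names, grids, and step sizes are illustrative):

    import numpy as np

    # Lie splitting for u_t + H(u)_x = eps * u_xx on a periodic grid: a
    # conservative Lax-Friedrichs step for the transport part, then an
    # explicit centered step for the diffusion part.  Requires the usual
    # restrictions dt <= dx / max|H'(u)| and dt <= dx**2 / (2 * eps).
    def split_step(u, H, eps, dx, dt):
        up, um = np.roll(u, -1), np.roll(u, 1)                   # neighbours
        u = 0.5 * (up + um) - dt / (2 * dx) * (H(up) - H(um))    # transport
        up, um = np.roll(u, -1), np.roll(u, 1)
        return u + eps * dt / dx**2 * (up - 2.0 * u + um)        # diffusion

For instance, H = lambda v: 0.5 * v**2 gives a viscous Burgers-type test case where the conservative transport step selects the entropy solution as eps shrinks.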
Diffusion approximation provides a weak approximation of stochastic gradient descent (SGD) algorithms over a finite time horizon. In this paper, we introduce new tools, motivated by the backward error analysis of numerical stochastic differential equations, into the theoretical framework of diffusion approximation, extending the validity of the weak approximation from a finite to an infinite time horizon. The new techniques developed here enable us to characterize the asymptotic behavior of constant-step-size SGD algorithms for strongly convex objective functions, a goal previously unreachable within the diffusion approximation framework. Our analysis builds upon a truncated formal power expansion of the solution of a stochastic modified equation arising from diffusion approximation, where the main technical ingredient is a uniform-in-time weak error bound controlling the long-term behavior of the expansion coefficient functions near the global minimum. We expect these new techniques to greatly expand the range of applicability of diffusion approximation, covering wider and deeper aspects of stochastic optimization algorithms in data science.
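The long-time behaviour described here can be checked on a toy strongly convex quadratic, where the stationary variance of constant-step-size SGD matches, to leading order in the step size, the prediction of the limiting SDE (a sketch under simplified assumptions, not the paper's derivation):

    import numpy as np

    # SGD on F(x) = 0.5 * lam * x**2 with additive gradient noise of variance
    # sigma**2 and constant step eta.  The limiting SDE
    #     dX = -lam * X dt + sqrt(eta) * sigma dW
    # predicts a stationary variance of eta * sigma**2 / (2 * lam).
    rng = np.random.default_rng(0)
    lam, sigma, eta, n = 1.0, 1.0, 0.01, 200_000
    x, xs = 1.0, []
    for _ in range(n):
        x -= eta * (lam * x + sigma * rng.standard_normal())  # noisy gradient step
        xs.append(x)
    print("empirical variance:", np.var(xs[n // 2:]))          # discard transient
    print("SDE prediction    :", eta * sigma**2 / (2 * lam))

For this linear recursion the exact stationary variance is eta * sigma**2 / (2 * lam - eta * lam**2), which agrees with the SDE prediction up to O(eta), consistent with the finite-horizon weak approximation the paper extends.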