Machine learning methods for solving nonlinear partial differential equations (PDEs) are an active research topic, and several algorithms proposed in the literature achieve efficient numerical approximation in high dimension. In this paper, we introduce a class of PDEs that are invariant to permutations, called symmetric PDEs. Such problems are widespread, ranging from cosmology to quantum mechanics, and include option pricing/hedging in multi-asset markets with exchangeable payoffs. Our main application actually comes from the particle approximation of mean-field control problems. We design deep learning algorithms based on certain types of neural networks, named PointNet and DeepSet (and their associated derivative networks), for simultaneously computing an approximation of the solution and of its gradient for symmetric PDEs. We illustrate the performance and accuracy of the PointNet/DeepSet networks compared to classical feedforward ones, and provide several numerical results of our algorithm for the examples of a mean-field systemic risk model, a mean-variance problem, and a min/max linear-quadratic McKean-Vlasov control problem.
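A DeepSet architecture of the kind mentioned above can be written in a few lines. The following is a minimal sketch, assuming PyTorch; the layer widths, activations, and mean-pooling are illustrative choices, not the exact architecture of the paper:

```python
import torch
import torch.nn as nn

class DeepSet(nn.Module):
    """Permutation-invariant network: phi encodes each particle with shared
    weights, the features are mean-pooled, and rho maps the pooled
    (symmetric) representation to the output."""
    def __init__(self, dim_in, dim_hidden=64, dim_out=1):
        super().__init__()
        # phi: per-particle encoder, shared across all particles
        self.phi = nn.Sequential(
            nn.Linear(dim_in, dim_hidden), nn.Tanh(),
            nn.Linear(dim_hidden, dim_hidden), nn.Tanh(),
        )
        # rho: decoder acting on the pooled feature
        self.rho = nn.Sequential(
            nn.Linear(dim_hidden, dim_hidden), nn.Tanh(),
            nn.Linear(dim_hidden, dim_out),
        )

    def forward(self, x):
        # x has shape (batch, N particles, dim_in)
        features = self.phi(x)          # (batch, N, dim_hidden)
        pooled = features.mean(dim=1)   # pooling makes the map permutation-invariant
        return self.rho(pooled)         # (batch, dim_out)
```

Because the pooled representation is unchanged by any reordering of the particle axis, the network respects the symmetry of the PDE by construction; a derivative network approximating the gradient can then be obtained from it by automatic differentiation.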
We prove a rate of convergence for the $N$-particle approximation of a second-order partial differential equation in the space of probability measures, such as the Master equation or the Bellman equation of a mean-field control problem under common noise. The rate is of order $1/N$ for the pathwise error on the solution $v$ and of order $1/\sqrt{N}$ for the $L^2$-error on its $L$-derivative $\partial_\mu v$. The proof relies on backward stochastic differential equation techniques.
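Written schematically in display form (with $v^N$ denoting the $N$-particle approximation and $C$ a constant independent of $N$; the precise norms are the ones defined in the paper), these two rates read

$\big\| v^N - v \big\|_{\mathrm{pathwise}} \;\le\; \frac{C}{N}, \qquad \big\| \partial_\mu v^N - \partial_\mu v \big\|_{L^2} \;\le\; \frac{C}{\sqrt{N}}.$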
This paper presents machine learning techniques and deep reinforcement learning-based algorithms for the efficient resolution of nonlinear partial differential equations and dynamic optimization problems arising in investment decisions and derivative pricing in financial engineering. We survey recent results in the literature, present new developments, notably in the fully nonlinear case, and compare the different schemes, illustrated by numerical tests on various financial applications. We conclude by highlighting some future research directions.
Recently proposed numerical algorithms for solving high-dimensional nonlinear partial differential equations (PDEs) based on neural networks have shown remarkable performance. We review some of them and study their convergence properties. The methods rely on a probabilistic representation of PDEs by backward stochastic differential equations (BSDEs) and their iterated time discretization. Our proposed algorithm, called deep backward multistep scheme (MDBDP), is a machine learning version of the LSMDP scheme of Gobet, Turkedjiev (Math. Comp. 85, 2016). It estimates simultaneously, by backward induction, the solution and its gradient by neural networks through sequential minimizations of suitable quadratic loss functions performed by stochastic gradient descent. Our main theoretical contribution is an approximation error analysis of the MDBDP scheme, as well as of the deep splitting (DS) scheme for semilinear PDEs designed in Beck, Becker, Cheridito, Jentzen, Neufeld (2019). We also supplement the error analysis of the DBDP scheme of Huré, Pham, Warin (Math. Comp. 89, 2020). This notably yields a convergence rate in terms of the number of neurons for a class of deep Lipschitz continuous GroupSort neural networks, when the PDE is linear in the gradient of the solution for the MDBDP scheme, and in the semilinear case for the DBDP scheme. We illustrate our results with numerical tests that are compared with other machine learning algorithms from the literature.
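To make the backward induction concrete, here is a schematic version of the quadratic loss minimized at one backward step of an MDBDP-type scheme, assuming PyTorch; the names (U_i, Z_i, f, y_target) and the exact construction of the multistep target are illustrative, not the paper's implementation:

```python
import torch

def mdbdp_step_loss(U_i, Z_i, x_i, dw_i, y_target, f, t_i, dt):
    """Quadratic loss for one backward step (schematic).

    U_i, Z_i : neural nets approximating the solution u(t_i, .) and its gradient
    x_i      : simulated forward diffusion at time t_i, shape (batch, d)
    dw_i     : Brownian increments W_{t_{i+1}} - W_{t_i}, shape (batch, d)
    y_target : multistep target of shape (batch, 1), built from the
               already-trained later-step networks (terminal condition plus
               accumulated driver and martingale terms)
    f        : driver of the semilinear PDE / BSDE, returning shape (batch, 1)
    """
    u = U_i(x_i)                                   # (batch, 1)
    z = Z_i(x_i)                                   # (batch, d)
    # Euler discretization of the BSDE between t_i and t_{i+1}:
    # Y_{i+1} ~ Y_i - f dt + Z . dW
    pred = u - f(t_i, x_i, u, z) * dt + (z * dw_i).sum(dim=1, keepdim=True)
    return ((y_target - pred) ** 2).mean()         # minimized by SGD over U_i, Z_i
```

Minimizing this loss over the parameters of U_i and Z_i at each step, going backward in time, is what is meant above by "sequential minimizations of suitable quadratic loss functions".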
We propose several algorithms to solve McKean-Vlasov forward-backward stochastic differential equations. Our schemes rely on the approximating power of neural networks to estimate the solution or its gradient through minimization problems. As a consequence, we obtain methods able to tackle both mean-field games and mean-field control problems in moderate dimension. We analyze the numerical behavior of our algorithms on several examples, including nonlinear quadratic models.
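As an illustration of the forward, particle-based side of such schemes, here is a minimal sketch of an Euler discretization of a McKean-Vlasov SDE in which the law is replaced by the empirical distribution of the simulated particles, assuming PyTorch; the interaction through the empirical mean alone is a simplifying assumption made for illustration:

```python
import torch

def mckean_vlasov_euler(b, sigma, x0, n_steps, dt, n_particles):
    """Euler scheme for dX = b(X, law(X)) dt + sigma(X, law(X)) dW,
    with law(X) approximated by the empirical measure of the particles.
    Here b and sigma take (particles, empirical mean), i.e. a mean-field
    interaction through the first moment only, purely for illustration."""
    x = x0.repeat(n_particles, 1)               # (n_particles, d)
    for _ in range(n_steps):
        m = x.mean(dim=0, keepdim=True)         # empirical-measure statistic
        dw = torch.randn_like(x) * dt ** 0.5    # Brownian increments
        x = x + b(x, m) * dt + sigma(x, m) * dw
    return x
```

In the schemes described above, a forward simulation of this kind is coupled with neural networks estimating the backward components of the FBSDE.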
In this paper we present a scheme for the numerical solution of one-dimensional stochastic differential equations (SDEs) whose drift belongs to a fractional Sobolev space of negative regularity (a subspace of Schwartz distributions). We obtain a rate of convergence in a suitable $L^1$-norm and we implement the scheme numerically. To the best of our knowledge, this is the first paper to study (and implement) numerical solutions of SDEs whose drift lives in a space of distributions. As a byproduct we also obtain an estimate of the convergence rate for a numerical scheme applied to SDEs with drift in $L^p$-spaces with $p\in(1,\infty)$.
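One way to picture such a scheme is to replace the distributional drift by a smooth approximation and then run a standard Euler-Maruyama discretization. The sketch below, in plain NumPy, follows this idea; the mollified drift b_eps, and how its regularization parameter should be tied to the step size, are assumptions made for illustration and do not reproduce the paper's exact scheme or its $L^1$ rate analysis:

```python
import numpy as np

def euler_mollified_drift(b_eps, x0, T, n_steps, n_paths, rng=None):
    """Euler-Maruyama scheme for dX = b(X) dt + dW when the drift b is only
    a distribution: b is first replaced by a smooth approximation b_eps
    (e.g. a mollification of b by a Gaussian kernel), then the standard
    scheme is run on the regularized equation."""
    if rng is None:
        rng = np.random.default_rng(0)
    dt = T / n_steps
    x = np.full(n_paths, x0, dtype=float)       # all paths start at x0
    for _ in range(n_steps):
        dw = rng.normal(scale=np.sqrt(dt), size=n_paths)
        x = x + b_eps(x) * dt + dw              # Euler step with smoothed drift
    return x
```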