We propose a numerical method for solving high-dimensional fully nonlinear partial differential equations (PDEs). Our algorithm estimates, by backward time induction, the solution and its gradient simultaneously using multi-layer neural networks, while the Hessian is approximated by automatic differentiation of the gradient computed at the previous step. This methodology extends to the fully nonlinear case the approach recently proposed in \cite{HPW19} for semi-linear PDEs. Numerical tests illustrate the performance and accuracy of our method on several high-dimensional examples with nonlinearity in the Hessian term, including a linear-quadratic control problem with control on the diffusion coefficient, the Monge-Ampère equation, and a Hamilton-Jacobi-Bellman equation arising in portfolio optimization.
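The Hessian-by-automatic-differentiation step can be made concrete with a short sketch. The following is a minimal illustration in PyTorch and not the authors' implementation: the networks u_net, z_net, z_net_prev, the generator f, the diffusion sigma, and the samples x, dw are hypothetical placeholders, and the drift term of the forward process is omitted for brevity.

import torch

def hessian_from_prev_gradient(z_net_prev, x):
    # Approximate D^2_x u by automatic differentiation of the gradient
    # network z_net_prev trained at the previous (later-in-time) step.
    x = x.detach().requires_grad_(True)
    z = z_net_prev(x)                                   # (batch, d), ~ D_x u
    rows = [torch.autograd.grad(z[:, i].sum(), x, retain_graph=True)[0]
            for i in range(z.shape[1])]
    return torch.stack(rows, dim=1)                     # (batch, d, d), ~ D^2_x u

def one_step_loss(u_net, z_net, u_next, z_net_prev, f, sigma, x, dw, dt):
    # Regression loss at one time step of the backward induction:
    # (u_net, z_net) are fitted so that the discretized dynamics at time t_n
    # reproduce the value u_next already learned at time t_{n+1}.
    gamma = hessian_from_prev_gradient(z_net_prev, x)   # frozen Hessian estimate
    u_n = u_net(x).squeeze(-1)                          # (batch,)
    z_n = z_net(x)                                      # (batch, d)
    sig = sigma(x)                                      # (batch, d, d)
    x_next = x + torch.einsum('bij,bj->bi', sig, dw)    # Euler step, drift omitted
    target = u_next(x_next).squeeze(-1).detach()
    pred = u_n - f(x, u_n, z_n, gamma) * dt \
           + torch.einsum('bi,bi->b', z_n, torch.einsum('bij,bj->bi', sig, dw))
    return ((target - pred) ** 2).mean()

In a full backward pass one would loop over the time grid from maturity down to 0, freezing the networks trained at step n+1 and minimizing this loss over the parameters of u_net and z_net at step n, consistent with the backward induction described above.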
This paper presents machine learning techniques and deep reinforcement learning-based algorithms for the efficient resolution of nonlinear partial differential equations and dynamic optimization problems arising in investment decisions and derivative
We propose new machine learning schemes for solving high-dimensional nonlinear partial differential equations (PDEs). Relying on the classical backward stochastic differential equation (BSDE) representation of PDEs, our algorithms estimate simultaneously
Conventional research attributes improvements in the generalization ability of deep neural networks either to powerful optimizers or to new network designs. In contrast, in this paper we aim to link the generalization ability of a deep network
Nonlinear observers based on the well-known concept of minimum energy estimation are discussed. The approach relies on an output injection operator determined by a Hamilton-Jacobi-Bellman equation and is subsequently approximated by a neural network.
We derive backward and forward nonlinear PDEs that govern the implied volatility of a contingent claim whenever the latter is well-defined. This would include at least any contingent claim written on a positive stock price whose payoff at a possibl