In this work, we study the numerical approximation of a class of singular fully coupled forward-backward stochastic differential equations (FBSDEs). These equations have a degenerate forward component and a non-smooth terminal condition. They are used, for example, in the modeling of carbon markets [9] and are linked to a scalar conservation law perturbed by a diffusion. Classical FBSDE methods fail to capture the correct entropy solution to the associated quasi-linear PDE. We introduce a splitting approach that circumvents this difficulty by treating the numerical approximation of the diffusion part and the non-linear transport part differently. Under the structural condition guaranteeing the well-posedness of the singular FBSDEs [8], we show that the splitting method converges with rate $1/2$. We implement the splitting scheme by combining non-linear regression based on deep neural networks with conservative finite difference schemes. The numerical tests show very good results in a possibly high-dimensional framework.
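To make the splitting idea concrete, the sketch below performs one split time step in one space dimension: the non-linear transport part is advanced with a conservative Lax-Friedrichs finite-difference update, and the diffusion part is handled by regressing a Monte Carlo estimate of the conditional expectation onto a small neural network. The flux, grid, network size and the helper names `transport_step` and `diffusion_step` are illustrative assumptions, not the paper's exact scheme.

```python
# Minimal sketch of one split time step (assumed discretisation, not the paper's scheme).
import numpy as np
import torch
import torch.nn as nn

def transport_step(v, dt, dx, flux):
    """One conservative Lax-Friedrichs update for v_t + flux(v)_x = 0."""
    f = flux(v)
    # numerical flux at the cell interfaces i+1/2
    f_half = 0.5 * (f[1:] + f[:-1]) - 0.5 * (dx / dt) * (v[1:] - v[:-1])
    v_new = v.copy()
    v_new[1:-1] = v[1:-1] - (dt / dx) * (f_half[1:] - f_half[:-1])
    return v_new

def diffusion_step(v, x, dt, sigma, n_mc=64, epochs=200):
    """Regress the Monte Carlo estimate of E[v(x + sigma*sqrt(dt)*G)] on x with a small network."""
    xt = torch.tensor(x, dtype=torch.float32).unsqueeze(1)
    noise = torch.randn(len(x), n_mc)
    x_next = xt + sigma * np.sqrt(dt) * noise              # simulated forward diffusion
    v_next = torch.tensor(                                  # value at diffused points
        np.interp(x_next.numpy(), x, v), dtype=torch.float32
    ).mean(dim=1, keepdim=True)                             # Monte Carlo conditional mean
    net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))
    opt = torch.optim.Adam(net.parameters(), lr=1e-2)
    for _ in range(epochs):
        opt.zero_grad()
        loss = ((net(xt) - v_next) ** 2).mean()             # non-linear least-squares regression
        loss.backward()
        opt.step()
    return net(xt).detach().squeeze(1).numpy()

# toy usage: one backward step started from a non-smooth (indicator-like) terminal condition
x = np.linspace(-2.0, 2.0, 201)
dx, dt, sigma = x[1] - x[0], 1e-2, 0.5
v_T = (x > 0.0).astype(float)
v_half = transport_step(v_T, dt, dx, flux=lambda u: 0.5 * u**2)
v_t = diffusion_step(v_half, x, dt, sigma)
```

The conservative form of the transport update is what selects the entropy solution in this sketch; the ordering of the two sub-steps and the choice of flux are placeholders.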
Relying on the classical connection between backward stochastic differential equations (BSDEs) and non-linear parabolic partial differential equations (PDEs), we propose a new probabilistic learning scheme for solving high-dimensional semi-linear parabolic PDEs. This scheme is inspired by the machine-learning approach developed using deep neural networks in Han et al. [32]. Our algorithm is based on a Picard iteration scheme in which a sequence of linear-quadratic optimisation problems is solved by means of a stochastic gradient descent (SGD) algorithm. In the framework of a linear specification of the approximation space, we manage to prove a convergence result for our scheme under a smallness condition. In practice, in order to treat high-dimensional examples, we employ sparse grid approximation spaces. In the case of periodic coefficients and using pre-wavelet basis functions, we obtain an upper bound on the global complexity of our method. It shows in particular that the curse of dimensionality is tamed, in the sense that, in order to achieve a root mean squared error of order $\epsilon$ for a prescribed precision $\epsilon$, the complexity of the Picard algorithm grows polynomially in $\epsilon^{-1}$ up to some logarithmic factor $|\log(\epsilon)|$ which grows linearly with respect to the PDE dimension. Various numerical results are presented to validate the performance of our method and to compare it with some recent machine learning schemes proposed in Han et al. [20] and Huré et al. [37].
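As an illustration of the Picard structure, the sketch below freezes the BSDE driver at the previous iterate, so that refitting the basis coefficients of $Y(t_i,\cdot)$ and $Z(t_i,\cdot)$ at each date becomes a linear-quadratic least-squares problem, minimised here by mini-batch SGD. A plain polynomial basis stands in for the pre-wavelet sparse-grid basis, and all names and parameters (g, f, phi, sigma, step sizes) are assumptions made for the example, not the paper's exact formulation.

```python
# Illustrative 1D sketch of Picard iterations for Y_t = g(X_T) + int f(Y,Z) ds - int Z dW.
import numpy as np

rng = np.random.default_rng(0)
T, N, K = 1.0, 20, 3                      # horizon, number of time steps, basis size
dt, sigma, x0 = T / N, 0.4, 1.0
g = lambda x: np.maximum(x - 1.0, 0.0)    # terminal condition
f = lambda y, z: -0.05 * y                # driver (kept simple for illustration)
phi = lambda x: np.stack([np.ones_like(x), x, x * x], axis=-1)   # linear approximation space

def simulate(M):
    dW = np.sqrt(dt) * rng.standard_normal((M, N))
    X = x0 + sigma * np.concatenate([np.zeros((M, 1)), np.cumsum(dW, axis=1)], axis=1)
    return X, dW

def picard_iteration(ay, az, n_sgd=500, batch=512, lr=0.05):
    """Refit the coefficients of Y(t_i,.) and Z(t_i,.) with the driver frozen at (ay, az)."""
    ay_new, az_new = ay.copy(), az.copy()
    for _ in range(n_sgd):
        X, dW = simulate(batch)
        B = phi(X[:, :-1])                                # (batch, N, K) basis values
        y_prev = np.einsum('mnk,nk->mn', B, ay)           # previous Picard iterate Y^m
        z_prev = np.einsum('mnk,nk->mn', B, az)           # previous Picard iterate Z^m
        fdt = f(y_prev, z_prev) * dt
        tail = np.cumsum(fdt[:, ::-1], axis=1)[:, ::-1]   # sum over j >= i of f_j * dt
        gT = g(X[:, -1])[:, None]
        tgt_y = gT + tail                                 # regression target for Y(t_i)
        tgt_z = (gT + tail - fdt) * dW / dt               # regression target for Z(t_i)
        # one SGD step on the date-wise linear least-squares residuals
        res_y = np.einsum('mnk,nk->mn', B, ay_new) - tgt_y
        res_z = np.einsum('mnk,nk->mn', B, az_new) - tgt_z
        ay_new -= lr * 2 * np.einsum('mn,mnk->nk', res_y, B) / batch
        az_new -= lr * 2 * np.einsum('mn,mnk->nk', res_z, B) / batch
    return ay_new, az_new

ay = np.zeros((N, K))                     # coefficients of Y(t_i, .) in the basis
az = np.zeros((N, K))                     # coefficients of Z(t_i, .) in the basis
for _ in range(5):                        # outer Picard loop
    ay, az = picard_iteration(ay, az)
print("Y_0(x0) ~", phi(np.array(x0)) @ ay[0])
```

Because the driver is evaluated at the previous iterate, each inner problem is quadratic in the coefficients, which is what makes the SGD analysis and the complexity bound of the abstract possible; replacing the polynomial basis by a sparse-grid pre-wavelet basis is what allows high-dimensional examples.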