
Convergence of the Deep BSDE Method for Coupled FBSDEs

Published by: Jiequn Han
Publication date: 2018
Research field: Informatics Engineering
Paper language: English





The recently proposed numerical algorithm, deep BSDE method, has shown remarkable performance in solving high-dimensional forward-backward stochastic differential equations (FBSDEs) and parabolic partial differential equations (PDEs). This article lays a theoretical foundation for the deep BSDE method in the general case of coupled FBSDEs. In particular, a posteriori error estimation of the solution is provided and it is proved that the error converges to zero given the universal approximation capability of neural networks. Numerical results are presented to demonstrate the accuracy of the analyzed algorithm in solving high-dimensional coupled FBSDEs.
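To make the scheme concrete, here is a minimal sketch (in PyTorch) of the deep BSDE method for a coupled FBSDE: the initial value Y_0 is a trainable parameter, each Z_{t_k} is represented by a small subnetwork, the forward and backward components are advanced by an Euler scheme driven by the same Brownian increments, and training minimizes the mismatch with the terminal condition Y_T = g(X_T). The coefficients b, sigma, f, g, the network sizes, and all constants below are illustrative assumptions, not the setting analyzed in the paper.

    # Minimal deep BSDE sketch for a coupled FBSDE (illustrative coefficients only).
    import torch
    import torch.nn as nn

    dim, batch, n_steps, T = 10, 256, 20, 1.0
    dt = T / n_steps

    def b(t, x, y, z):      # forward drift; depends on (Y, Z) in the coupled case (assumed form)
        return 0.1 * y * torch.ones_like(x)

    def sigma(t, x, y):     # forward diffusion, here a scalar multiple of the identity (assumed)
        return 0.5

    def f(t, x, y, z):      # driver of the backward equation (assumed form)
        return -y + z.sum(dim=1, keepdim=True)

    def g(x):               # terminal condition Y_T = g(X_T) (assumed form)
        return torch.log(0.5 * (1.0 + (x ** 2).sum(dim=1, keepdim=True)))

    class DeepBSDE(nn.Module):
        def __init__(self):
            super().__init__()
            self.y0 = nn.Parameter(torch.rand(1))      # Y_0 for a deterministic X_0
            self.z_nets = nn.ModuleList(               # one subnetwork Z_{t_k} = psi_k(X_{t_k}) per step
                [nn.Sequential(nn.Linear(dim, 32), nn.ReLU(), nn.Linear(32, dim))
                 for _ in range(n_steps)])

        def forward(self, x0):
            x = x0
            y = torch.ones(x0.shape[0], 1) * self.y0
            for k in range(n_steps):
                t = k * dt
                z = self.z_nets[k](x)
                dw = torch.randn_like(x) * dt ** 0.5
                # Euler steps for both components on the same Brownian increment
                x_next = x + b(t, x, y, z) * dt + sigma(t, x, y) * dw
                y_next = y - f(t, x, y, z) * dt + (z * dw).sum(dim=1, keepdim=True)
                x, y = x_next, y_next
            return x, y

    model = DeepBSDE()
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    for it in range(500):
        x0 = torch.zeros(batch, dim)
        xT, yT = model(x0)
        loss = ((yT - g(xT)) ** 2).mean()   # penalize mismatch with the terminal condition
        opt.zero_grad(); loss.backward(); opt.step()
    print("approximate Y_0:", model.y0.item())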




Read also

Recently, deep learning methods have been used for solving forward-backward stochastic differential equations (FBSDEs) and parabolic partial differential equations (PDEs), with good accuracy and performance for high-dimensional problems. In this paper, we mainly solve fully coupled FBSDEs through deep learning and provide three algorithms. Several numerical results show remarkable performance, especially for high-dimensional cases.
E. H. Essaky, M. Hassani (2010)
In this paper, we are concerned with the problem of existence of solutions for generalized reflected backward stochastic differential equations (GRBSDEs for short) and generalized backward stochastic differential equations (GBSDEs for short) when the generator $f\,ds + g\,dA_s$ is continuous with general growth with respect to the variable $y$ and stochastic quadratic growth with respect to the variable $z$. We deal with the case of a bounded terminal condition $\xi$ and a bounded barrier $L$ as well as the case of unbounded ones. This is done by using the notion of generalized BSDEs with two reflecting barriers studied in \cite{EH}. The work is motivated by the interest the results might have in finance, control, and game theory.
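For context, the generalized BSDE referred to above is usually written in the following standard form, with generator $f\,ds + g\,dA_s$, terminal condition $\xi$, and a continuous increasing process $A$; the reflected variant (GRBSDE) additionally keeps $Y$ above the barrier $L$ by means of an increasing pushing process. This display is the common formulation from the literature, not text from the paper:

    \[
      Y_t = \xi + \int_t^T f(s, Y_s, Z_s)\,ds + \int_t^T g(s, Y_s)\,dA_s
            - \int_t^T Z_s\,dW_s, \qquad 0 \le t \le T .
    \]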
In this paper, a deep collocation method (DCM) for thin plate bending problems is proposed. This method takes advantage of the computational graphs and backpropagation algorithms involved in deep learning. The proposed DCM is based on a feedforward deep neural network (DNN) and differs from most previous applications of deep learning to mechanical problems. First, batches of randomly distributed collocation points are generated inside the domain and along the boundaries. A loss function is built so that the residuals of the governing partial differential equations (PDEs) of Kirchhoff plate bending problems and of the boundary/initial conditions are minimised at those collocation points. A combination of optimizers is adopted in the backpropagation process to minimize the loss function and obtain the optimal network parameters. In Kirchhoff plate bending problems, the C1 continuity requirement poses significant difficulties for traditional mesh-based methods. This difficulty is avoided by the proposed DCM, which uses a deep neural network to approximate the continuous transversal deflection and is shown to be suitable for the bending analysis of Kirchhoff plates of various geometries.
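As a rough illustration of the collocation idea, the sketch below trains a small network to represent the transversal deflection w(x, y) of a unit square plate under a uniform load, penalizing the biharmonic residual D*Δ(Δw) - q at random interior points and a simplified w = 0 condition at random boundary points. The geometry, load, stiffness, network, optimizer settings, and boundary treatment are assumptions for illustration only; a full Kirchhoff model would also enforce rotation or moment boundary conditions.

    # Collocation-style loss for Kirchhoff plate bending on the unit square (illustrative).
    import torch
    import torch.nn as nn

    D, q = 1.0, 1.0                                   # flexural rigidity and uniform load (assumed)
    net = nn.Sequential(nn.Linear(2, 64), nn.Tanh(),
                        nn.Linear(64, 64), nn.Tanh(),
                        nn.Linear(64, 1))             # w(x, y): transversal deflection

    def laplacian(u, xy):
        # Laplacian of u with respect to the coordinates xy, computed by nested autograd
        grad = torch.autograd.grad(u, xy, torch.ones_like(u), create_graph=True)[0]
        lap = 0.0
        for i in range(2):
            gi = grad[:, i:i + 1]
            lap = lap + torch.autograd.grad(gi, xy, torch.ones_like(gi),
                                            create_graph=True)[0][:, i:i + 1]
        return lap

    opt = torch.optim.Adam(net.parameters(), lr=1e-3)
    for it in range(2000):
        # interior and boundary collocation points, resampled every iteration
        xy = torch.rand(512, 2, requires_grad=True)
        xb = torch.rand(256, 2)
        xb[:128, 0] = torch.randint(0, 2, (128,)).float()   # edges x = 0 and x = 1
        xb[128:, 1] = torch.randint(0, 2, (128,)).float()   # edges y = 0 and y = 1

        w = net(xy)
        pde_residual = D * laplacian(laplacian(w, xy), xy) - q   # plate equation: D * lap(lap w) = q
        bc_residual = net(xb)                                    # simplified boundary condition w = 0
        loss = (pde_residual ** 2).mean() + (bc_residual ** 2).mean()
        opt.zero_grad(); loss.backward(); opt.step()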
Tuyen Trung Truong (2021)
In a recent joint work, the author has developed a modification of Newton's method, named New Q-Newton's method, which can avoid saddle points and has a quadratic rate of convergence. While a good theoretical convergence guarantee had not been established for this method, experiments on small-scale problems show that the method works very competitively against other well-known modifications of Newton's method, such as Adaptive Cubic Regularization and BFGS, as well as first-order methods such as Unbounded Two-way Backtracking Gradient Descent. In this paper, we resolve the convergence guarantee issue by proposing a modification of New Q-Newton's method, named New Q-Newton's method Backtracking, which incorporates a more sophisticated use of hyperparameters and a backtracking line search. This new method has very good theoretical guarantees, which for a {\bf Morse function} yield the following (which is unknown for New Q-Newton's method): {\bf Theorem.} Let $f:\mathbb{R}^m\rightarrow \mathbb{R}$ be a Morse function, that is, all its critical points have invertible Hessian. Then for a sequence $\{x_n\}$ constructed by New Q-Newton's method Backtracking from a random initial point $x_0$, we have the following two alternatives: i) $\lim_{n\rightarrow\infty}\|x_n\|=\infty$, or ii) $\{x_n\}$ converges to a point $x_{\infty}$ which is a {\bf local minimum} of $f$, and the rate of convergence is {\bf quadratic}. Moreover, if $f$ has compact sublevels, then only case ii) happens. As far as we know, for Morse functions, this is the best theoretical guarantee for iterative optimization algorithms so far in the literature. We have tested in experiments on small scale, with some further simplifie…
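The sketch below illustrates only the generic Armijo backtracking line-search ingredient, applied to a crudely regularized Newton direction on a toy function; it is not New Q-Newton's method Backtracking itself, whose search direction is built from an eigenvalue-based modification of the Hessian. The test function, the regularization heuristic, and the constants are assumptions.

    # Armijo backtracking line search on a regularized Newton direction (generic illustration).
    import numpy as np

    def f(x):                      # separable toy function with local minima at components +/- 1
        return 0.25 * np.sum(x ** 4) - 0.5 * np.sum(x ** 2)

    def grad(x):
        return x ** 3 - x

    def hess(x):
        return np.diag(3.0 * x ** 2 - 1.0)

    def newton_backtracking(x, tol=1e-10, max_iter=200, beta=0.5, c=1e-4):
        for _ in range(max_iter):
            g = grad(x)
            if np.linalg.norm(g) < tol:
                break
            H = hess(x)
            # crude regularization so the direction is a descent direction
            # (not the paper's eigenvalue-based construction)
            lam_min = np.min(np.linalg.eigvalsh(H))
            if lam_min <= 0:
                H = H + (abs(lam_min) + np.linalg.norm(g)) * np.eye(len(x))
            d = -np.linalg.solve(H, g)
            # Armijo backtracking: shrink the step until sufficient decrease holds
            t = 1.0
            while f(x + t * d) > f(x) + c * t * g.dot(d):
                t *= beta
            x = x + t * d
        return x

    print(newton_backtracking(np.array([0.3, -1.7, 0.8])))   # each component tends to +1 or -1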
Monotone systems of polynomial equations (MSPEs) are systems of fixed-point equations $X_1 = f_1(X_1, ..., X_n),$ $..., X_n = f_n(X_1, ..., X_n)$ where each $f_i$ is a polynomial with positive real coefficients. The question of computing the least non-negative solution of a given MSPE $\vec X = \vec f(\vec X)$ arises naturally in the analysis of stochastic models such as stochastic context-free grammars, probabilistic pushdown automata, and back-button processes. Etessami and Yannakakis have recently adapted Newton's iterative method to MSPEs. In a previous paper we have proved the existence of a threshold $k_{\vec f}$ for strongly connected MSPEs, such that after $k_{\vec f}$ iterations of Newton's method each new iteration computes at least 1 new bit of the solution. However, the proof was purely existential. In this paper we give an upper bound for $k_{\vec f}$ as a function of the minimal component of the least fixed point $\mu\vec f$ of $\vec f(\vec X)$. Using this result we show that $k_{\vec f}$ is at most single exponential resp. linear for strongly connected MSPEs derived from probabilistic pushdown automata resp. from back-button processes. Further, we prove the existence of a threshold for arbitrary MSPEs after which each new iteration computes at least $1/w2^h$ new bits of the solution, where $w$ and $h$ are the width and height of the DAG of strongly connected components.
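As a small numerical illustration of the Newton iteration for MSPEs, the sketch below runs the decomposed update $x_{k+1} = x_k + (I - f'(x_k))^{-1}(f(x_k) - x_k)$ from the zero vector on an invented two-variable system whose equations have positive coefficients summing to one (as for a stochastic context-free grammar); the system and all numbers are a toy example, not taken from the paper.

    # Newton's method for a toy monotone system of polynomial equations (MSPE).
    import numpy as np

    def f(x):                          # X1 = 0.4*X1*X2 + 0.6,  X2 = 0.3*X1^2 + 0.4*X2 + 0.3
        x1, x2 = x
        return np.array([0.4 * x1 * x2 + 0.6,
                         0.3 * x1 ** 2 + 0.4 * x2 + 0.3])

    def jac(x):                        # Jacobian f'(x)
        x1, x2 = x
        return np.array([[0.4 * x2, 0.4 * x1],
                         [0.6 * x1, 0.4]])

    x = np.zeros(2)                    # the iteration is started at the zero vector
    for k in range(20):
        x = x + np.linalg.solve(np.eye(2) - jac(x), f(x) - x)
        print(k, x)                    # iterates increase monotonically toward the least fixed point (1, 1)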
