
Optimal controls of stochastic differential equations with jumps and random coefficients: Stochastic Hamilton-Jacobi-Bellman equations with jumps

Published by: Yuchao Dong
Publication date: 2020
Paper language: English





In this paper, we study the following nonlinear backward stochastic integro-partial differential equation with jumps:
\begin{equation*}
\left\{
\begin{split}
-dV(t,x) =\;& \inf_{u\in U}\bigg\{ H\Big(t,x,u, DV(t,x), D\Phi(t,x), D^2 V(t,x),\\
&\qquad \int_E \big(\mathcal{I} V(t,e,x,u)+\Psi(t,x+g(t,e,x,u))\big)\, l(t,e)\, \nu(de)\Big)\\
&+\int_{E}\big[\mathcal{I} V(t,e,x,u)-\big(g(t,e,x,u), D V(t,x)\big)\big]\,\nu(de)
+\int_{E}\big[\mathcal{I} \Psi(t,e,x,u)\big]\,\nu(de)\bigg\}\,dt\\
&-\Phi(t,x)\,dW(t)-\int_{E} \Psi(t,e,x)\,\tilde{\mu}(de,dt),\\
V(T,x) =\;& h(x),
\end{split}
\right.
\end{equation*}
where $\tilde{\mu}$ is a Poisson random martingale measure, $W$ is a Brownian motion, and $\mathcal{I}$ is a non-local operator to be specified later. The function $H$ is a given random mapping arising from a corresponding non-Markovian optimal control problem. This equation is the stochastic Hamilton-Jacobi-Bellman (HJB) equation, which characterizes the value function of the optimal control problem with a recursive utility cost functional. The solution to the equation is a predictable triplet of random fields $(V,\Phi,\Psi)$. We show that the value function, under some regularity assumptions, solves the stochastic HJB equation, and that a classical solution to this equation is the value function and yields the optimal control. Under additional assumptions on the coefficients, an existence and uniqueness result in the Sobolev sense is obtained by recasting the backward stochastic integro-partial differential equation with jumps as a backward stochastic evolution equation in Hilbert spaces with Poisson jumps.
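The abstract leaves the non-local operator $\mathcal{I}$ to be specified later. For orientation only, a standard choice in jump-diffusion control (an assumption here, not taken from the paper) is the first-order difference operator
\[
\mathcal{I}V(t,e,x,u) := V\big(t,\, x + g(t,e,x,u)\big) - V(t,x),
\]
under which the combination $\mathcal{I}V(t,e,x,u) - \big(g(t,e,x,u), DV(t,x)\big)$ appearing in the equation is the usual compensated jump term for jumps of size $g(t,e,x,u)$.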




Read also

Stochastic symmetries and related invariance properties of finite-dimensional SDEs driven by general càdlàg semimartingales taking values in Lie groups are defined and investigated. The considered set of SDEs, first introduced by S. Cohen, includes affine and Marcus type SDEs as well as smooth SDEs driven by Lévy processes and iterated random maps. A natural extension to this general setting of reduction and reconstruction theory for symmetric SDEs is provided. Our theorems imply, as special cases, nontrivial invariance results concerning a class of affine iterated random maps as well as symmetries for numerical schemes (of Euler and Milstein type) for Brownian-motion-driven SDEs.
Sudeep Kundu, Karl Kunisch (2020)
Policy iteration is a widely used technique to solve the Hamilton-Jacobi-Bellman (HJB) equation, which arises from nonlinear optimal feedback control theory. Its convergence analysis has attracted much attention in the unconstrained case. Here we analyze the case with control constraints, both for the HJB equations which arise in deterministic and in stochastic control cases. The linear equations in each iteration step are solved by an implicit upwind scheme. Numerical examples are conducted to solve the HJB equation with control constraints, and comparisons are shown with the unconstrained cases.
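Policy iteration alternates between evaluating the current feedback law and taking a greedy improvement step over the controls. The discrete analogue of this HJB iteration can be sketched on a toy finite Markov decision process; the transition table `P`, reward table `R`, and the function names below are illustrative assumptions, not taken from the paper, which works with the continuous HJB PDE and an upwind discretization.

```python
# Policy iteration on a tiny discounted MDP (illustrative analogue of the
# HJB policy iteration; the toy problem below is an assumption for the sketch).
# States 0..2, actions 0..1; P[s][a] = [(next_state, prob), ...], R[s][a] = reward.
P = {0: {0: [(0, 1.0)], 1: [(1, 1.0)]},
     1: {0: [(0, 1.0)], 1: [(2, 1.0)]},
     2: {0: [(2, 1.0)], 1: [(2, 1.0)]}}
R = {0: {0: 0.0, 1: 0.0}, 1: {0: 0.0, 1: 1.0}, 2: {0: 0.0, 1: 0.0}}
gamma = 0.9  # discount factor

def policy_iteration(P, R, gamma, tol=1e-12):
    policy = {s: 0 for s in P}
    while True:
        # Policy evaluation: solve V = R_pi + gamma * P_pi V by fixed-point sweeps
        # (the analogue of solving the linear equation in each iteration step).
        V = {s: 0.0 for s in P}
        while True:
            delta = 0.0
            for s in P:
                a = policy[s]
                v = R[s][a] + gamma * sum(p * V[s2] for s2, p in P[s][a])
                delta = max(delta, abs(v - V[s]))
                V[s] = v
            if delta < tol:
                break
        # Policy improvement: greedy step over the controls.
        stable = True
        for s in P:
            best = max(P[s], key=lambda a: R[s][a]
                       + gamma * sum(p * V[s2] for s2, p in P[s][a]))
            if best != policy[s]:
                policy[s] = best
                stable = False
        if stable:
            return policy, V

policy, V = policy_iteration(P, R, gamma)
```

On this toy problem the optimal policy moves 0 → 1 → 2 to collect the single unit reward, giving V(1) = 1 and V(0) = 0.9. Control constraints, the subject of the abstract above, would correspond to restricting the action set over which the improvement step maximizes.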
Shanjian Tang, Zhou Yang (2011)
A Dynkin game is considered for stochastic differential equations with random coefficients. We first apply Qiu and Tang's maximum principle for backward stochastic partial differential equations to generalize the Krylov estimate for the distribution of a Markov process to that of a non-Markov process, and establish a generalized Itô-Kunita-Wentzell formula allowing the test function to be a random field of Itô type which takes values in a suitable Sobolev space. We then prove the verification theorem that the Nash equilibrium point and the value of the Dynkin game are characterized by the strong solution of the associated Hamilton-Jacobi-Bellman-Isaacs equation, which here is a backward stochastic partial differential variational inequality (BSPDVI, for short) with two obstacles. We obtain an existence and uniqueness result and a comparison theorem for strong solutions of the BSPDVI. Moreover, we study the monotonicity of the strong solution of the BSPDVI via the comparison theorem and define the free boundaries. Finally, we identify the counterparts for an optimal stopping time problem as a special Dynkin game.
We derive sufficient conditions for the differentiability of all orders for the flow of stochastic differential equations with jumps, and prove related $L^p$-integrability results for all orders. Our results extend similar results obtained in [Kun04] for first-order differentiability and rely on the Burkholder-Davis-Gundy inequality for time-inhomogeneous Poisson random measures on $\mathbb{R}_+\times\mathbb{R}$, for which we provide a new proof.
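For reference, the Burkholder-Davis-Gundy inequality invoked above reads, in its standard form (the abstract proves a version adapted to time-inhomogeneous Poisson random measures):
\[
c_p\, \mathbb{E}\big[[M]_T^{p/2}\big] \;\le\; \mathbb{E}\Big[\sup_{0\le t\le T} |M_t|^p\Big] \;\le\; C_p\, \mathbb{E}\big[[M]_T^{p/2}\big],
\]
for every local martingale $M$ with $M_0=0$ and every $p\ge 1$, where $[M]$ is the quadratic variation and the constants $c_p, C_p$ depend only on $p$.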
A tensor decomposition approach for the solution of high-dimensional, fully nonlinear Hamilton-Jacobi-Bellman equations arising in optimal feedback control of nonlinear dynamics is presented. The method combines a tensor train approximation for the value function together with a Newton-like iterative method for the solution of the resulting nonlinear system. The tensor approximation leads to a polynomial scaling with respect to the dimension, partially circumventing the curse of dimensionality. A convergence analysis for the linear-quadratic case is presented. For nonlinear dynamics, the effectiveness of the high-dimensional control synthesis method is assessed in the optimal feedback stabilization of the Allen-Cahn and Fokker-Planck equations in a hundred variables.
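The tensor train idea behind the abstract above can be sketched with the classical TT-SVD: sequential SVDs of unfoldings compress a multiway array into a chain of small cores, so storage grows polynomially rather than exponentially in the dimension. The helpers `tt_svd` and `tt_reconstruct` below are an illustrative sketch of that decomposition step only, not the paper's Newton-like HJB solver.

```python
import numpy as np

def tt_svd(T, eps=1e-8):
    """Decompose a d-way array into tensor-train cores via sequential SVDs."""
    shape = T.shape
    d = len(shape)
    cores = []
    r = 1  # current TT rank
    M = T.reshape(r * shape[0], -1)
    for k in range(d - 1):
        U, S, Vt = np.linalg.svd(M, full_matrices=False)
        rank = max(1, int(np.sum(S > eps)))  # truncate to numerical rank
        cores.append(U[:, :rank].reshape(r, shape[k], rank))
        r = rank
        # carry the remaining factor to the next unfolding
        M = (np.diag(S[:rank]) @ Vt[:rank]).reshape(r * shape[k + 1], -1)
    cores.append(M.reshape(r, shape[-1], 1))
    return cores

def tt_reconstruct(cores):
    """Contract TT cores left to right back into a full array."""
    out = cores[0].reshape(-1, cores[0].shape[-1])
    for C in cores[1:]:
        out = (out @ C.reshape(C.shape[0], -1)).reshape(-1, C.shape[-1])
    return out.reshape([c.shape[1] for c in cores])

# A rank-2 value-function surrogate on a 4 x 5 x 6 grid (toy data, an assumption).
a, b, c = np.arange(4.0), np.arange(5.0), np.arange(6.0)
T = np.einsum('i,j,k->ijk', a, b, c) + np.einsum('i,j,k->ijk', a**2, b + 1, c - 2)

cores = tt_svd(T)
storage = sum(C.size for C in cores)  # much smaller than T.size for low TT rank
```

Here the three cores store 40 numbers versus 120 for the full array; in high dimension this gap is exponential, which is exactly the "partial circumvention of the curse of dimensionality" the abstract refers to.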