
Dynkin Game of Stochastic Differential Equations with Random Coefficients, and Associated Backward Stochastic Partial Differential Variational Inequality

Posted by: Zhou Yang
Publication date: 2011
Research field: Informatics Engineering
Paper language: English





A Dynkin game is considered for stochastic differential equations with random coefficients. We first apply Qiu and Tang's maximum principle for backward stochastic partial differential equations to generalize the Krylov estimate for the distribution of a Markov process to that of a non-Markov process, and establish a generalized Itô-Kunita-Wentzell formula allowing the test function to be a random field of Itô's type taking values in a suitable Sobolev space. We then prove the verification theorem: the Nash equilibrium point and the value of the Dynkin game are characterized by the strong solution of the associated Hamilton-Jacobi-Bellman-Isaacs equation, which here is a backward stochastic partial differential variational inequality (BSPDVI, for short) with two obstacles. We obtain an existence and uniqueness result and a comparison theorem for the strong solution of the BSPDVI. Moreover, we study the monotonicity of the strong solution of the BSPDVI via the comparison theorem and use it to define the free boundaries. Finally, we identify the counterparts for an optimal stopping time problem, viewed as a special Dynkin game.
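For orientation only (the authors' exact formulation may differ), a BSPDVI with two obstacles can be written schematically as

\begin{equation*}
\left\{
\begin{aligned}
-du(t,x) &= \big[\mathcal{L}u(t,x) + \mathcal{M}\psi(t,x) + f(t,x,u,\psi)\big]\,dt
            - \psi(t,x)\,dW_t + d\mu(t,x) - d\nu(t,x),\\
\underline{\xi}(t,x) &\le u(t,x) \le \overline{\xi}(t,x),\\
0 &= \int \big(u-\underline{\xi}\big)\,d\mu = \int \big(\overline{\xi}-u\big)\,d\nu
   \qquad\text{(minimality of the reflections)},\\
u(T,x) &= g(x),
\end{aligned}
\right.
\end{equation*}

where the second-order operator $\mathcal{L}$ with random coefficients, the first-order operator $\mathcal{M}$, the driver $f$, the obstacles $\underline{\xi}\le\overline{\xi}$, the reflecting measures $\mu,\nu$ and the terminal datum $g$ are illustrative placeholders rather than the paper's notation. In this generic picture, the free boundaries separate the region where neither obstacle binds from the two regions where $u$ touches the lower or the upper obstacle.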




Read also

Ying Hu, 2013
This paper is concerned with the switching game of a one-dimensional backward stochastic differential equation (BSDE). The associated Bellman-Isaacs equation is a system of matrix-valued BSDEs living in a special unbounded convex domain, with reflection on the boundary along an oblique direction. We show the existence of an adapted solution to this system of BSDEs with oblique reflection by the penalization method, monotone convergence, and a priori estimates.
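As a rough illustration of the penalization method in a simpler, vector-valued switching setting (not the matrix-valued game above; the drivers $f_i$, switching costs $c_{ij}$ and terminal data $\xi_i$ are placeholders), the $n$-th penalized system reads

\begin{equation*}
Y^{i,n}_t = \xi_i + \int_t^T f_i\big(s, Y^{i,n}_s, Z^{i,n}_s\big)\,ds
 + n\int_t^T \Big(\max_{j\neq i}\big(Y^{j,n}_s - c_{ij}\big) - Y^{i,n}_s\Big)^{+}\,ds
 - \int_t^T Z^{i,n}_s\,dW_s, \qquad i=1,\dots,m.
\end{equation*}

Letting $n\to\infty$, monotone convergence combined with a priori estimates is the standard route to an adapted solution of the reflected system, which is the strategy described in the abstract above.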
In [5] the authors obtained mean-field backward stochastic differential equations (BSDEs) associated with a mean-field stochastic differential equation (SDE) in a natural way, as the limit of a high-dimensional system of forward and backward SDEs corresponding to a large number of "particles" (or "agents"). The objective of the present paper is to deepen the investigation of such mean-field BSDEs by studying them in a more general framework, with a general driver, and to discuss comparison results for them. In a second step we are interested in partial differential equations (PDEs) whose solutions can be stochastically interpreted in terms of mean-field BSDEs. For this we study a mean-field BSDE in a Markovian framework, associated with a mean-field forward equation. By combining classical BSDE methods, in particular that of "backward semigroups" introduced by Peng [14], with specific arguments for mean-field BSDEs, we prove that this mean-field BSDE describes the viscosity solution of a nonlocal PDE. The uniqueness of this viscosity solution is obtained for the space of continuous functions with polynomial growth. With the help of an example it is shown that for the nonlocal PDEs associated with mean-field BSDEs one cannot expect uniqueness in a larger space of continuous functions.
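For readers unfamiliar with the objects involved, a mean-field BSDE of the type discussed above can be sketched as follows (the notation is illustrative and not that of [5]); here $(Y',Z')$ denotes an independent copy of the solution $(Y,Z)$ and $\mathbb{E}'$ the expectation acting on that copy only:

\begin{equation*}
Y_t = \xi + \int_t^T \mathbb{E}'\big[f\big(s, Y'_s, Z'_s, Y_s, Z_s\big)\big]\,ds
      - \int_t^T Z_s\,dW_s, \qquad 0 \le t \le T.
\end{equation*}

The driver thus depends on the law of the solution through the extra expectation, and it is this nonlocal dependence that reappears in the associated PDE.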
In this paper, we study the following nonlinear backward stochastic integral partial differential equation with jumps:
\begin{equation*}
\left\{
\begin{split}
-dV(t,x) =\ & \inf_{u\in U}\bigg\{H\Big(t,x,u, DV(t,x), D\Phi(t,x), D^2 V(t,x),\int_E \big(\mathcal{I} V(t,e,x,u)+\Psi(t,x+g(t,e,x,u))\big)\,l(t,e)\,\nu(de)\Big)\\
& +\int_{E}\big[\mathcal{I} V(t,e,x,u)-\big(g(t,e,x,u), DV(t,x)\big)\big]\,\nu(de)+\int_{E}\big[\mathcal{I} \Psi(t,e,x,u)\big]\,\nu(de)\bigg\}\,dt\\
& -\Phi(t,x)\,dW(t)-\int_{E} \Psi(t,e,x)\,\tilde{\mu}(de,dt),\\
V(T,x) =\ & h(x),
\end{split}
\right.
\end{equation*}
where $\tilde{\mu}$ is a Poisson random martingale measure, $W$ is a Brownian motion, and $\mathcal{I}$ is a non-local operator to be specified later. The function $H$ is a given random mapping, which arises from a corresponding non-Markovian optimal control problem. This equation appears as the stochastic Hamilton-Jacobi-Bellman equation, which characterizes the value function of the optimal control problem with a recursive utility cost functional. The solution to the equation is a predictable triplet of random fields $(V,\Phi,\Psi)$. We show that the value function, under some regularity assumptions, is the solution to the stochastic HJB equation, and that a classical solution to this equation is the value function and gives the optimal control. With some additional assumptions on the coefficients, an existence and uniqueness result in the sense of Sobolev spaces is shown by recasting the backward stochastic partial integral differential equation with jumps as a backward stochastic evolution equation in Hilbert spaces with Poisson jumps.
Wenning Wei, 2013
In this paper we are concerned with a new type of backward equation with anticipation, which we call neutral backward stochastic functional differential equations. We obtain an existence and uniqueness result and prove a comparison theorem. As an application, we discuss the optimal control of neutral stochastic functional differential equations, establish a Pontryagin maximum principle, and give an explicit optimal value for the linear optimal control problem.
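As a purely schematic indication of the neutral structure (not necessarily the paper's exact formulation), such an equation prescribes the Itô differential of a perturbed state $Y(t) - G(t, Y_t)$ rather than of $Y(t)$ itself:

\begin{equation*}
d\big[Y(t) - G(t, Y_t)\big] = f\big(t, Y_t, Z(t)\big)\,dt + Z(t)\,dW(t), \quad t \in [0,T],
\qquad Y(t) = \eta(t), \quad t \in [T, T+K],
\end{equation*}

where $Y_t$ stands for a path segment of the solution (the source of the anticipation), and $G$, $f$, $\eta$ and $K$ are placeholder data.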
We study the problem of optimal inside control of an SPDE (a stochastic evolution equation) driven by a Brownian motion and a Poisson random measure. Our optimal control problem is new in two ways: (i) the controller has access to inside information, i.e. access to information about a future state of the system; (ii) the integro-differential operator of the SPDE might depend on the control. In the first part of the paper, we formulate a sufficient and a necessary maximum principle for this type of control problem in two cases: (1) when the control is allowed to depend both on time t and on the space variable x; (2) when the control is not allowed to depend on x. In the second part of the paper, we apply the results above to the problem of optimal control of an SDE system when the inside controller has only noisy observations of the state of the system. Using results from nonlinear filtering, we transform this noisy-observation SDE inside control problem into a full-observation SPDE insider control problem. The results are illustrated by explicit examples.
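To fix ideas about the maximum-principle formulation (a generic sketch under assumed notation, not the paper's exact setting), for a controlled SPDE of the form $dX(t,x) = [A_u X(t,x) + b(t,x,X,u)]\,dt + \sigma(t,x,X,u)\,dB_t + \int_{\mathbb{R}_0}\gamma(t,x,X,u,\zeta)\,\tilde N(dt,d\zeta)$ one introduces a Hamiltonian

\begin{equation*}
\mathcal{H}(t,x,X,u,p,q,r) = F(t,x,X,u) + \big(A_u X + b(t,x,X,u)\big)\,p
 + \sigma(t,x,X,u)\,q + \int_{\mathbb{R}_0} \gamma(t,x,X,u,\zeta)\,r(\zeta)\,\nu(d\zeta),
\end{equation*}

where $F$ is a running profit rate, $(p,q,r)$ solves an adjoint backward SPDE, and optimality is expressed through the conditional expectation of $\mathcal{H}$ given the insider's enlarged information flow. All symbols here are placeholders; the dependence of $A_u$ on the control reflects point (ii) above.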