
Path-Dependent Optimal Stochastic Control and Viscosity Solution of Associated Bellman Equations

 Added by Fu Zhang
 Publication date 2012
Language: English





In this paper we study the optimal stochastic control problem for a path-dependent stochastic system under a recursive path-dependent cost functional. The associated Bellman equation, derived from the dynamic programming principle, is a fully nonlinear second-order path-dependent partial differential equation. A novel notion of viscosity solutions is introduced. Using Dupire's functional Itô calculus, we characterize the value functional of the optimal stochastic control problem as the unique viscosity solution to the associated path-dependent Bellman equation.
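For orientation, a path-dependent Bellman equation of the kind studied here can be written, in Dupire's functional notation with horizontal derivative $\partial_t$ and vertical derivatives $\partial_\omega$, $\partial^2_{\omega\omega}$, roughly as follows; this is a generic sketch with assumed coefficients $b$, $\sigma$, generator $f$, and terminal cost $\xi$, not the paper's exact formulation:
\begin{equation*}
-\partial_t V(t,\omega) - \inf_{u\in U}\Big\{ \tfrac{1}{2}\,\mathrm{tr}\big(\sigma\sigma^{\top}(t,\omega,u)\,\partial^2_{\omega\omega}V(t,\omega)\big) + b(t,\omega,u)\cdot\partial_\omega V(t,\omega) + f\big(t,\omega,u,V(t,\omega),\partial_\omega V(t,\omega)\,\sigma(t,\omega,u)\big)\Big\} = 0,
\qquad V(T,\omega)=\xi(\omega),
\end{equation*}
where the recursive structure of the cost functional enters through the dependence of $f$ on $V$ and $\partial_\omega V\,\sigma$.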



Related Research

In this paper, we study the following nonlinear backward stochastic integro-partial differential equation with jumps
\begin{equation*}
\left\{
\begin{split}
-dV(t,x) =\; & \inf_{u\in U}\bigg\{ H\Big(t,x,u,\, DV(t,x),\, D\Phi(t,x),\, D^2V(t,x),\, \int_E \big(\mathcal{I}V(t,e,x,u)+\Psi\big(t,x+g(t,e,x,u)\big)\big)\, l(t,e)\,\nu(de)\Big) \\
& + \int_{E}\big[\mathcal{I}V(t,e,x,u) - \big(g(t,e,x,u),\, DV(t,x)\big)\big]\,\nu(de) + \int_{E}\big[\mathcal{I}\Psi(t,e,x,u)\big]\,\nu(de) \bigg\}\,dt \\
& - \Phi(t,x)\,dW(t) - \int_{E}\Psi(t,e,x)\,\tilde{\mu}(de,dt), \\
V(T,x) =\; & h(x),
\end{split}
\right.
\end{equation*}
where $\tilde{\mu}$ is a Poisson random martingale measure, $W$ is a Brownian motion, and $\mathcal{I}$ is a non-local operator to be specified later. The function $H$ is a given random mapping arising from a corresponding non-Markovian optimal control problem. This equation is the stochastic Hamilton-Jacobi-Bellman equation characterizing the value function of the optimal control problem with a recursive utility cost functional. The solution to the equation is a predictable triplet of random fields $(V,\Phi,\Psi)$. We show that, under some regularity assumptions, the value function solves the stochastic HJB equation, and that a classical solution to this equation is the value function and yields the optimal control. Under some additional assumptions on the coefficients, an existence and uniqueness result in the sense of Sobolev spaces is obtained by recasting the backward stochastic integro-partial differential equation with jumps as a backward stochastic evolution equation in a Hilbert space with Poisson jumps.
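The abstract leaves the non-local operator $\mathcal{I}$ unspecified. For readers unfamiliar with the notation, in this jump-diffusion literature such an operator is typically a difference operator of the following type; this is an assumption for illustration, not the paper's definition:
\begin{equation*}
\mathcal{I}V(t,e,x,u) \;=\; V\big(t,\,x+g(t,e,x,u)\big) - V(t,x),
\end{equation*}
so that the integrals against $\nu(de)$ above compensate the jumps of size $g(t,e,x,u)$ in the state.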
We study the problem of optimal inside control of an SPDE (a stochastic evolution equation) driven by a Brownian motion and a Poisson random measure. Our optimal control problem is new in two ways: (i) the controller has access to inside information, i.e. access to information about a future state of the system; (ii) the integro-differential operator of the SPDE may depend on the control. In the first part of the paper, we formulate a sufficient and a necessary maximum principle for this type of control problem in two cases: (1) when the control is allowed to depend both on time $t$ and on the space variable $x$; (2) when the control is not allowed to depend on $x$. In the second part of the paper, we apply the results above to the problem of optimal control of an SDE system when the inside controller has only noisy observations of the state of the system. Using results from nonlinear filtering, we transform this noisy-observation SDE inside control problem into a full-observation SPDE insider control problem. The results are illustrated by explicit examples.
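Schematically, maximum principles of this kind single out an optimal control $\hat{u}$ by pointwise maximization of a Hamiltonian against adjoint processes; the display below is a generic finite-dimensional sketch with assumed coefficients $b$, $\sigma$, $f$ and adjoint pair $(p,q)$, whereas the paper's SPDE and insider-information setting involves additional jump terms and an enlarged filtration:
\begin{equation*}
\mathcal{H}(t,x,u,p,q) = b(t,x,u)\,p + \sigma(t,x,u)\,q + f(t,x,u), \qquad \mathcal{H}\big(t,\hat{X}_t,\hat{u}_t,p_t,q_t\big) = \sup_{u\in U}\mathcal{H}\big(t,\hat{X}_t,u,p_t,q_t\big),
\end{equation*}
where $(p,q)$ solves the backward adjoint equation associated with the state dynamics.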
This paper studies a class of non-Markovian singular stochastic control problems, for which we provide a novel probabilistic representation. The solution of such a control problem is proved to identify with the solution of a $Z$-constrained BSDE, with dynamics associated to a non-singular underlying forward process. Due to the non-Markovian environment, our main argument relies on the use of comparison arguments for path-dependent PDEs. Our representation allows us, in particular, to quantify the regularity of the solution to the singular stochastic control problem in terms of the space and time initial data. Our framework also extends to the consideration of degenerate diffusions, leading to the representation of the solution as the infimum of solutions to $Z$-constrained BSDEs. As an application, we study the utility maximisation problem with transaction costs for non-Markovian dynamics.
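To fix ideas, a $Z$-constrained BSDE is, in generic notation (an illustrative sketch, not the paper's exact setup), a backward equation whose martingale-integrand component is forced into a set $\mathcal{K}$ at the price of a nondecreasing pushing process $K$:
\begin{equation*}
Y_t = \xi + \int_t^T f(s,Y_s,Z_s)\,ds - \int_t^T Z_s\,dW_s + K_T - K_t, \qquad Z_s \in \mathcal{K} \quad ds\otimes d\mathbb{P}\text{-a.e.},
\end{equation*}
and one is typically interested in the minimal solution satisfying the constraint.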
Andrea Cosso (2021)
We prove existence and uniqueness of Crandall-Lions viscosity solutions of Hamilton-Jacobi-Bellman equations in the space of continuous paths, associated to the optimal control of path-dependent SDEs. This appears to be the first uniqueness result in such a context. More precisely, similarly to the seminal paper of P.L. Lions, the proof of our core result, the comparison theorem, is based on the fact that the value function is bigger than any viscosity subsolution and smaller than any viscosity supersolution. Such a result, coupled with the proof that the value function is a viscosity solution (based on the dynamic programming principle, which we prove), implies that the value function is the unique viscosity solution to the Hamilton-Jacobi-Bellman equation. The proof of the comparison theorem in P.L. Lions' paper relies on regularity results which are missing in the present infinite-dimensional context, as well as on the local compactness of the finite-dimensional underlying space. We overcome such non-trivial technical difficulties by introducing a suitable approximating procedure and a smooth gauge-type function, which allows us to generate maxima and minima through an appropriate version of the Borwein-Preiss generalization of Ekeland's variational principle on the space of continuous paths.
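The uniqueness mechanism described above can be compressed into one line: if every viscosity subsolution $u$ satisfies $u \le V$ and every viscosity supersolution $v$ satisfies $V \le v$, then any viscosity solution $w$, being both a subsolution and a supersolution, is squeezed onto the value function:
\begin{equation*}
w \;\le\; V \;\le\; w \quad\Longrightarrow\quad w = V.
\end{equation*}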
We study the optimal control of path-dependent McKean-Vlasov equations valued in Hilbert spaces, motivated by non-Markovian mean-field models driven by stochastic PDEs. We first establish the well-posedness of the state equation, and then we prove the dynamic programming principle (DPP) in such a general framework. The crucial law-invariance property of the value function $V$ is rigorously obtained, which means that $V$ can be viewed as a function on the Wasserstein space of probability measures on the set of continuous functions valued in Hilbert space. We then define a notion of pathwise measure derivative, which extends the Wasserstein derivative due to Lions [41], and prove a related functional Itô formula in the spirit of Dupire [24] and Wu and Zhang [51]. The Master Bellman equation is derived from the DPP by means of a suitable notion of viscosity solution. We provide different formulations and simplifications of such a Bellman equation, notably in the special case when there is no dependence on the law of the control.
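The law-invariance property can be displayed schematically (notation assumed): whenever two initial path-valued random variables share the same law, the value coincides, so $V$ descends to a function $\widehat{V}$ on the Wasserstein space described above:
\begin{equation*}
\mathcal{L}(\xi) = \mathcal{L}(\xi') \;\Longrightarrow\; V(t,\xi) = V(t,\xi'), \qquad\text{hence}\qquad V(t,\xi) = \widehat{V}\big(t,\mathcal{L}(\xi)\big).
\end{equation*}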
