In this paper, we study a stochastic recursive optimal control problem in which the value functional is defined by the solution of a backward stochastic differential equation (BSDE) under $\tilde{G}$-expectation. Under standard assumptions, we establish the comparison theorem for this kind of BSDE and give a novel and simple method to obtain the dynamic programming principle. Finally, we prove that the value function is the unique viscosity solution of a type of fully nonlinear HJB equation.
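As a point of reference only, a recursive value functional of this type is usually written through a controlled forward-backward system; the sketch below uses the classical (linear-expectation) formulation with generic coefficients $b$, $\sigma$, $f$, $\Phi$, which are notational assumptions rather than the paper's exact $\tilde{G}$-expectation setting.
\begin{align*}
  dX_s^{t,x;u} &= b\bigl(s, X_s^{t,x;u}, u_s\bigr)\,ds + \sigma\bigl(s, X_s^{t,x;u}, u_s\bigr)\,dB_s, & X_t^{t,x;u} &= x,\\
  -dY_s^{t,x;u} &= f\bigl(s, X_s^{t,x;u}, Y_s^{t,x;u}, Z_s^{t,x;u}, u_s\bigr)\,ds - Z_s^{t,x;u}\,dB_s, & Y_T^{t,x;u} &= \Phi\bigl(X_T^{t,x;u}\bigr),
\end{align*}
and the value function is obtained by optimizing the backward component at the initial time,
\[
  V(t,x) = \sup_{u} Y_t^{t,x;u}.
\]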
A tensor decomposition approach for the solution of high-dimensional, fully nonlinear Hamilton-Jacobi-Bellman equations arising in optimal feedback control of nonlinear dynamics is presented. The method combines a tensor train approximation for the v
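For orientation, the tensor train (TT) format referenced here stores a discretized value function through a chain of low-rank cores; the cores $G_k$ and TT ranks $r_k$ below are generic notation, assumed for illustration rather than taken from the paper.
\[
  V(x_{i_1}, x_{i_2}, \dots, x_{i_d})
  \approx \sum_{\alpha_1=1}^{r_1} \cdots \sum_{\alpha_{d-1}=1}^{r_{d-1}}
  G_1(i_1,\alpha_1)\, G_2(\alpha_1, i_2, \alpha_2) \cdots G_d(\alpha_{d-1}, i_d),
\]
so that storage scales like $\mathcal{O}(d\,n\,r^2)$ for $n$ grid points per dimension and maximal rank $r$, instead of the $\mathcal{O}(n^d)$ of a full tensor-product grid.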
Computing optimal feedback controls for nonlinear systems generally requires solving Hamilton-Jacobi-Bellman (HJB) equations, which are notoriously difficult when the state dimension is large. Existing strategies for high-dimensional problems often r
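For context, a generic stationary HJB equation and the feedback law it induces read as follows; the dynamics $f$, running cost $\ell$, and control set $U$ are placeholder notation, and an infinite-horizon, undiscounted problem is assumed for the sketch.
\[
  \min_{u \in U} \bigl\{ \nabla V(x) \cdot f(x,u) + \ell(x,u) \bigr\} = 0,
  \qquad
  u^*(x) = \operatorname*{arg\,min}_{u \in U} \bigl\{ \nabla V(x) \cdot f(x,u) + \ell(x,u) \bigr\},
\]
so that evaluating the feedback $u^*$ only requires the gradient of $V$ at the current state, which is what makes solving the HJB equation attractive despite the curse of dimensionality.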
Policy iteration is a widely used technique to solve the Hamilton-Jacobi-Bellman (HJB) equation, which arises from nonlinear optimal feedback control theory. Its convergence analysis has attracted much attention in the unconstrained case. Here we ana
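As a reminder of the basic unconstrained scheme, policy iteration alternates a policy-evaluation step and a policy-improvement step; the notation matches the generic HJB sketch above and is an assumption, not taken from the paper.
\begin{align*}
  &\text{(evaluation)}  && \nabla V_k(x) \cdot f\bigl(x, u_k(x)\bigr) + \ell\bigl(x, u_k(x)\bigr) = 0
     \quad \text{(linear in } V_k\text{)},\\
  &\text{(improvement)} && u_{k+1}(x) = \operatorname*{arg\,min}_{u \in U}
     \bigl\{ \nabla V_k(x) \cdot f(x,u) + \ell(x,u) \bigr\},
\end{align*}
with the iteration stopped once $V_k$ (equivalently $u_k$) no longer changes appreciably.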
We prove existence and uniqueness of Crandall-Lions viscosity solutions of Hamilton-Jacobi-Bellman equations in the space of continuous paths, associated to the optimal control of path-dependent SDEs. This seems to be the first uniqueness result in such a
A novel method for computing reachable sets is proposed in this paper. In the proposed method, a Hamilton-Jacobi-Bellman equation with a running cost function is numerically solved and the reachable sets of different time horizons are characterized by a
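For orientation, characterizing reachable sets through an HJB value function typically amounts to a sub-level-set description; the value function $V$ and the zero threshold below are illustrative assumptions, not the paper's exact construction.
\[
  \mathcal{R}(t) = \bigl\{ x \in \mathbb{R}^n : V(t,x) \le 0 \bigr\},
\]
where $V$ solves an HJB equation whose running cost is chosen so that the sign of $V(t,x)$ encodes whether $x$ can be reached within the horizon $t$.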