Sparse optimal stochastic control


Abstract in English

In this paper, we investigate sparse optimal control of continuous-time stochastic systems. We adopt the dynamic programming approach and analyze the optimal control via the value function. Due to the non-smoothness of the $L^0$ cost functional, the value function is, in general, not differentiable on its domain. We therefore characterize the value function as a viscosity solution of the associated Hamilton-Jacobi-Bellman (HJB) equation. Based on this result, we derive a necessary and sufficient condition for $L^0$ optimality, which immediately yields the optimal feedback map. In particular, for control-affine systems, we examine the relationship with the $L^1$ optimal control problem and prove an equivalence theorem.
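
For orientation, the following is a minimal sketch of the type of problem described above, written in standard notation that is not taken from the paper itself: the drift $f$, diffusion $\sigma$, terminal cost $g$, horizon $T$, and control set $U$ are illustrative placeholders.
\begin{align*}
  dX_t &= f(X_t, u_t)\,dt + \sigma(X_t)\,dW_t, \qquad X_0 = x,\\
  \|u\|_0 &:= \#\{\, i : u_i \neq 0 \,\},\\
  V(t,x) &:= \inf_{u}\, \mathbb{E}\!\left[\int_t^T \|u_s\|_0\,ds + g(X_T)\,\middle|\, X_t = x\right].
\end{align*}
The associated HJB equation, of which the value function is characterized as a viscosity solution, then takes the schematic form
\[
  -\partial_t V(t,x) = \inf_{u \in U}\Big\{\, \|u\|_0 + f(x,u)^\top \nabla_x V(t,x) + \tfrac{1}{2}\operatorname{tr}\!\big(\sigma\sigma^\top(x)\,\nabla_x^2 V(t,x)\big) \Big\},
  \qquad V(T,x) = g(x).
\]
In this reading, the equivalence result for control-affine systems relates the $L^0$ problem above to the corresponding problem with $\|u_s\|_1$ replacing $\|u_s\|_0$ in the running cost.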
