We study the regularity properties of the value function associated with an affine optimal control problem with quadratic cost plus a potential, for a fixed final time and initial point. Without assuming any condition on singular minimizers, we prove that the value function is continuous on an open and dense subset of the interior of the attainable set. As a byproduct, we obtain that it is in fact smooth on a possibly smaller set, which is still open and dense.
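For orientation, a minimal model of the class of problems considered would read as follows (a generic sketch in standard notation; the specific vector fields, potential, and regularity assumptions are not given in the abstract):
\[
\dot q(t) = f_0(q(t)) + \sum_{i=1}^m u_i(t)\, f_i(q(t)), \qquad q(0) = q_0,
\]
\[
\text{minimize } \int_0^T \Big( \tfrac{1}{2} \sum_{i=1}^m u_i(t)^2 + V(q(t)) \Big)\, dt,
\]
with the value function at the fixed final time T given by the infimum of the cost among admissible trajectories reaching a prescribed endpoint, viewed as a function of that endpoint on the attainable set.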
An abstract framework is established that guarantees the local continuous differentiability of the value function associated with optimal stabilization problems governed by abstract semilinear parabolic equations under a norm constraint on the controls. The framework ensures that the value function satisfies the associated Hamilton-Jacobi-Bellman equation in the classical sense. Its applicability is demonstrated for specific semilinear parabolic equations.
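As a point of reference, for a stabilization problem governed by a semilinear parabolic equation with norm-constrained controls, a Hamilton-Jacobi-Bellman equation satisfied in the classical sense would typically take the form (a generic sketch; the precise state space, operators, cost, and constraint set of the abstract framework are not specified in the abstract):
\[
\min_{\|u\| \le \gamma} \big\{\, DV(y)\big(Ay + F(y) + Bu\big) + \ell(y, u) \,\big\} = 0,
\]
where A generates an analytic semigroup, F is the semilinear nonlinearity, B is the control operator, \ell is the running cost, and V is the value function.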
In this paper we introduce a new method for solving fixed-delay optimal control problems which exploits numerical homotopy procedures. Solving this kind of problem via indirect methods is known to be complex and computationally demanding, because their implementation faces two difficulties: the extremal equations are of mixed type and, moreover, the shooting method has to be carefully initialized. Here, starting from the solution of the non-delayed version of the optimal control problem, the delay is introduced by numerical homotopy methods. Convergence results, which ensure the effectiveness of the whole procedure, are provided, and the numerical efficiency is illustrated on an example.
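A schematic version of the continuation on the delay described above might look as follows (a hypothetical sketch: the discretization of the homotopy parameter and the user-supplied shooting residual are illustrative and not taken from the paper):

    import numpy as np
    from scipy.optimize import fsolve

    def solve_by_homotopy(shooting_residual, p0_undelayed, target_delay, steps=20):
        """Continuation on the delay, starting from the non-delayed solution.

        shooting_residual(p0, delay) must return the mismatch in the terminal
        conditions obtained by integrating the extremal equations of the
        delayed problem from the initial costate guess p0; writing that
        integrator is problem-specific and not shown here.
        """
        p0 = np.asarray(p0_undelayed, dtype=float)
        for lam in np.linspace(0.0, 1.0, steps + 1)[1:]:
            delay = lam * target_delay
            # Warm-start the shooting solver with the solution obtained for
            # the previous (smaller) value of the delay.
            p0 = fsolve(shooting_residual, p0, args=(delay,))
        return p0

Each step reuses the previous zero of the shooting function as an initial guess, which is how the non-delayed solution propagates along the homotopy path to the target delay.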
We consider in this paper the regularity problem for time-optimal trajectories of a single-input control-affine system on an n-dimensional manifold. We prove that, under generic conditions on the drift and the controlled vector field, any control u associated with an optimal trajectory is smooth outside a countable set of times. More precisely, there exists an integer K, depending only on the dimension n, such that the non-smoothness set of u is made of isolated points, accumulations of isolated points, and so on up to K-th order iterated accumulations.
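In this setting the system has the single-input control-affine form
\[
\dot q(t) = f(q(t)) + u(t)\, g(q(t)), \qquad q(t) \in M, \quad \dim M = n,
\]
with f the drift and g the controlled vector field; a scalar control bound such as |u(t)| \le 1 is the standard normalization in time-optimal problems of this kind, although the abstract does not state the control constraint explicitly.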
In this paper, we obtain several structural results for the value function associated with a mean-field optimal control problem of Bolza type in the space of measures. After establishing the sensitivity relations linking the costates of the maximum principle to metric superdifferentials of the value function, we investigate semiconcavity properties of the latter with respect to both variables. We then characterise optimal trajectories using set-valued feedback mappings defined in terms of suitable directional derivatives of the value function.
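Schematically, a Bolza problem in the space of measures of the kind considered here can be written as (a generic sketch; the precise dynamics, cost functionals, and function spaces are not specified in the abstract):
\[
\text{minimize } \int_0^T L\big(t, \mu(t), u(t)\big)\, dt + g\big(\mu(T)\big)
\quad \text{subject to} \quad \partial_t \mu(t) + \operatorname{div}\big( v(t, \cdot, \mu(t), u(t))\, \mu(t) \big) = 0, \quad \mu(0) = \mu_0,
\]
where the state \mu(t) is a probability measure transported by the controlled non-local continuity equation, and the value function is the infimum of the cost viewed as a function of the initial time and initial measure.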
Optimal control problems with a very large time horizon can be tackled with the Receding Horizon Control (RHC) method, which consists in solving a sequence of optimal control problems with a small prediction horizon. The main result of this article is a proof of the exponential convergence (with respect to the prediction horizon) of the control generated by the RHC method towards the exact solution of the problem. The result is established for a class of infinite-dimensional linear-quadratic optimal control problems with time-independent dynamics and integral cost. Such problems satisfy the turnpike property: the optimal trajectory remains, for most of the time, very close to the solution of the associated static optimization problem. Specific terminal cost functions, derived from the Lagrange multiplier associated with the static optimization problem, are employed in the implementation of the RHC method.
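A schematic version of the receding-horizon loop described above could read as follows (a hypothetical sketch: solve_finite_horizon stands for a solver of the small-horizon linear-quadratic problem, including the terminal cost built from the Lagrange multiplier of the static problem; the names and the sampling scheme are illustrative):

    def receding_horizon_control(y0, T, prediction_horizon, sampling_time,
                                 solve_finite_horizon):
        """Concatenate small-horizon solutions over the long horizon [0, T].

        solve_finite_horizon(y, tau) must return the optimal control and the
        optimal state trajectory (as a function of time) on [0, tau] for the
        initial state y.
        """
        t, y = 0.0, y0
        control_pieces = []
        while t < T:
            u, traj = solve_finite_horizon(y, prediction_horizon)
            # Apply only the initial portion of the computed control, then
            # re-solve from the state reached at the end of that portion.
            control_pieces.append((t, u, sampling_time))
            y = traj(sampling_time)
            t += sampling_time
        return control_pieces

The loop only ever solves problems on the short prediction horizon; the article's convergence result quantifies how close the resulting control is to the exact solution of the full long-horizon problem.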