An optimal control problem with a time parameter is considered. The functional to be optimized includes the maximum over the time horizon reached by a function of the state variable, and hence an $L^\infty$-term. In addition to the classical control function, the time at which this maximum is reached is treated as a free parameter, so that the problem couples the behavior of the state and the control with this time parameter. A change of variable is introduced to derive first- and second-order optimality conditions, which allows the implementation of a Newton method. Numerical simulations are developed for selected ordinary differential equations and a partial differential equation; they illustrate the influence of the additional parameter and the original motivation.
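As an illustration only, a prototypical problem of this kind, with notation assumed here rather than taken from the paper, reads
\[
\min_{u,\,\tau\in[0,T]}\ J(u,\tau)=g\big(y_u(\tau)\big)+\int_0^T \ell\big(y_u(t),u(t)\big)\,dt
\quad\text{subject to}\quad g\big(y_u(\tau)\big)=\max_{t\in[0,T]} g\big(y_u(t)\big),
\]
where $y_u$ solves $\dot y_u=f(y_u,u)$, $y_u(0)=y_0$; a time rescaling such as $s=t/\tau$ then turns the free instant $\tau$ into an ordinary optimization parameter over a fixed horizon.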
We consider a stochastic control problem which is composed of a controlled stochastic differential equation, and whose associated cost functional is defined through a controlled backward stochastic differential equation. Under appropriate convexity assumptions on the coefficients of the forward and the backward equations we prove the existence of an optimal control on a suitable reference stochastic system. The proof is based on an approximation of the stochastic control problem by a sequence of control problems with smooth coefficients, each admitting an optimal feedback control. The quadruplet formed by this optimal feedback control and the associated solution of the forward and the backward equations is shown to converge in law, at least along a subsequence. The convexity assumptions on the coefficients then allow us to construct from this limit an admissible control process which, on an appropriate reference stochastic system, is optimal for our stochastic control problem.
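For orientation, a standard setup of this type, with notation assumed here rather than taken from the paper, couples
\[
dX_t=b(X_t,u_t)\,dt+\sigma(X_t,u_t)\,dW_t,\qquad
Y_t=\Phi(X_T)+\int_t^T f(X_s,Y_s,Z_s,u_s)\,ds-\int_t^T Z_s\,dW_s,
\]
and the cost to be minimized is $J(u)=Y_0$, so that optimality is expressed through the backward component of the forward-backward system.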
We study a class of infinite-dimensional singular stochastic control problems with applications in economic theory and finance. The control process linearly affects an abstract evolution equation on a suitable partially ordered infinite-dimensional space X, takes values in the positive cone of X, and has right-continuous, nondecreasing paths. We first provide a rigorous formulation of the problem by properly defining the controlled dynamics and integrals with respect to the control process. We then exploit the concave structure of our problem and derive necessary and sufficient first-order conditions for optimality. The latter are finally exploited in a specification of the model where we find an explicit expression of the optimal control. The techniques used are those of semigroup theory, vector-valued integration, convex analysis, and the general theory of stochastic processes.
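Schematically, and with notation assumed here rather than drawn from the paper, a singular control problem of this kind can be written as
\[
dx_t=\big(Ax_t+b(x_t)\big)\,dt+\sigma(x_t)\,dW_t+d\nu_t,\qquad
\sup_{\nu}\ \mathbb{E}\int_0^\infty e^{-\rho t}\big[\pi(x_t)\,dt-\langle \kappa, d\nu_t\rangle\big],
\]
where the state $x$ evolves in X, $A$ generates a strongly continuous semigroup, and $\nu$ is a right-continuous, nondecreasing control with values in the positive cone of X; concavity of the objective is what makes the first-order conditions both necessary and sufficient.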
In this paper we study, by probabilistic techniques, the convergence of the value function for a two-scale, infinite-dimensional, stochastic controlled system as the ratio between the two evolution speeds diverges. The value function is represented as the solution of a backward stochastic differential equation (BSDE), which is shown to converge towards a reduced BSDE. The noise is assumed to be additive in both the slow and the fast equations for the state, and a nondegeneracy condition on the slow equation is required. The limit BSDE involves the solution of an ergodic BSDE and is itself interpreted as the value function of an auxiliary stochastic control problem on a reduced state space.
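A schematic two-scale system of this kind, with assumed notation, is
\[
dX_t^\varepsilon=b(X_t^\varepsilon,Q_t^\varepsilon,u_t)\,dt+dW_t,\qquad
dQ_t^\varepsilon=\tfrac{1}{\varepsilon}F(X_t^\varepsilon,Q_t^\varepsilon)\,dt+\tfrac{1}{\sqrt{\varepsilon}}\,dW_t^Q,
\]
with value function $v^\varepsilon=Y_0^\varepsilon$ given by an associated BSDE; as $\varepsilon\to 0$ the fast component averages out and $Y^\varepsilon$ converges to the solution of a reduced BSDE whose driver is built from an ergodic BSDE in the fast variable.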
In this paper, the optimal control problem for a neutral stochastic functional differential equation (NSFDE) is discussed. A class of so-called neutral backward stochastic functional equations of Volterra type (VNBSFEs) is introduced as the adjoint equation. The existence and uniqueness of solutions to VNBSFEs are established, and a Pontryagin maximum principle is derived for the controlled NSFDE with a Lagrange-type cost functional.
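In rough outline, and with notation assumed here, such a maximum principle takes the familiar Hamiltonian form: for an optimal pair $(\bar x,\bar u)$ and an adjoint process $(p,q)$ solving the Volterra-type backward adjoint equation,
\[
H\big(t,\bar x_t,\bar u_t,p_t,q_t\big)=\max_{v\in U} H\big(t,\bar x_t,v,p_t,q_t\big)\quad\text{a.e., a.s.},
\]
where $H$ collects the drift, diffusion, and running cost of the Lagrange functional; the neutral and path-dependent features of the state equation are reflected in the structure of the adjoint equation.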
In this paper we introduce a new method to solve fixed-delay optimal control problems which exploits numerical homotopy procedures. It is known that solving problems of this kind via indirect methods is complex and computationally demanding because their implementation faces two difficulties: the extremal equations are of mixed type, and, moreover, the shooting method has to be carefully initialized. Here, starting from the solution of the non-delayed version of the optimal control problem, the delay is introduced by numerical homotopy methods. Convergence results, which ensure the effectiveness of the whole procedure, are provided. The numerical efficiency is illustrated on an example.
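For illustration, with assumed notation, the continuation can be parametrized by $\lambda\in[0,1]$ through the family of dynamics
\[
\dot x(t)=f\big(x(t),\,x(t-\lambda\tau),\,u(t)\big),
\]
so that $\lambda=0$ is the delay-free problem, whose shooting equations are comparatively easy to solve, and $\lambda=1$ is the original fixed-delay problem; the zero of the shooting function is then followed numerically along the path in $\lambda$.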