In this paper we introduce a new method for solving fixed-delay optimal control problems that exploits numerical homotopy procedures. Solving such problems via indirect methods is known to be complex and computationally demanding, since their implementation faces two difficulties: the extremal equations are of mixed type, and the shooting method has to be carefully initialized. Here, starting from the solution of the non-delayed version of the optimal control problem, the delay is introduced by numerical homotopy methods. Convergence results, which ensure the effectiveness of the whole procedure, are provided. The numerical efficiency is illustrated on an example.
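To make the continuation idea concrete, the following minimal Python sketch warm-starts each shooting solve with the root found for the previous value of the delay; the residual function is a made-up placeholder, not the paper's extremal equations.

```python
# Sketch of numerical homotopy in the delay: each shooting solve is warm-started
# with the previous solution. `shooting_residual` is a hypothetical stand-in for
# the shooting function of the delayed maximum principle.
import numpy as np
from scipy.optimize import fsolve

def shooting_residual(p, tau):
    # Placeholder: a smooth residual whose root drifts as the delay tau grows.
    return np.array([p[0] ** 3 + p[0] - np.cos(tau)])

p = fsolve(shooting_residual, np.array([0.5]), args=(0.0,))  # solve the non-delayed problem first
for tau in np.linspace(0.0, 0.5, 11)[1:]:                    # gradually introduce the delay
    p = fsolve(shooting_residual, p, args=(tau,))            # warm start from the previous root
```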
We investigate symmetry reduction of optimal control problems for left-invariant control systems on Lie groups, with partial symmetry breaking cost functions. Our approach emphasizes the role of variational principles and considers a discrete-time setting as well as the standard continuous-time formulation. Specifically, we recast the optimal control problem as a constrained variational problem with a partial symmetry breaking Lagrangian and obtain the Euler-Poincaré equations from a variational principle. By applying a Legendre transformation, we recover the Lie-Poisson equations obtained by A. D. Borum [Master's Thesis, University of Illinois at Urbana-Champaign, 2015] in the same context. We also discretize the variational principle in time and obtain the discrete-time Lie-Poisson equations. We illustrate the theory with practical examples, including a motion planning problem in the presence of an obstacle.
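For reference, the fully symmetric (non-broken) versions of the equations mentioned above read as follows in one common convention; the partial-symmetry-breaking case studied in the paper adds further terms that are not reproduced here.

```latex
% Left-invariant case; sign conventions for ad^* vary across references.
\begin{align}
  \frac{d}{dt}\frac{\delta \ell}{\delta \xi}
      &= \operatorname{ad}^{*}_{\xi}\,\frac{\delta \ell}{\delta \xi}
      && \text{(Euler--Poincar\'e)} \\
  \mu = \frac{\delta \ell}{\delta \xi}, \quad
  h(\mu) &= \langle \mu, \xi \rangle - \ell(\xi)
      && \text{(Legendre transformation)} \\
  \dot{\mu} &= \operatorname{ad}^{*}_{\delta h/\delta \mu}\,\mu
      && \text{(Lie--Poisson)}
\end{align}
```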
We consider the linear quadratic Gaussian control problem with a discounted cost functional for descriptor systems on the infinite time horizon. Based on recent results from the deterministic framework, we characterize the feasibility of this problem using a linear matrix inequality. In particular, conditions for the existence and uniqueness of optimal controls are derived, which are weaker than those required by standard approaches in the literature. We further show that, also in the stochastic setting, the optimal control is given in terms of the stabilizing solution of the Lur'e equation, which generalizes the algebraic Riccati equation.
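For orientation, one common form of the Lur'e equations for a standard (non-descriptor, undiscounted) system with cost weights (Q, S, R) is shown below; the discounted descriptor version used in the paper modifies these through the matrix E and the discount rate. When R is invertible, eliminating (K, L) recovers the algebraic Riccati equation.

```latex
% Classical Lur'e equations: seek (X, K, L) with X symmetric such that
\begin{align}
  A^{\top}X + XA + Q &= K^{\top}K, \\
  XB + S             &= K^{\top}L, \\
  R                  &= L^{\top}L.
\end{align}
```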
This paper is concerned with a backward stochastic linear-quadratic (LQ, for short) optimal control problem with deterministic coefficients. The weighting matrices are allowed to be indefinite, and cross-product terms in the control and state processes are present in the cost functional. Based on a Hilbert space method, necessary and sufficient conditions are derived for the solvability of the problem, and a general approach for constructing optimal controls is developed. The crucial step in this construction is to establish the solvability of a Riccati-type equation, which is accomplished under a fairly weak condition by investigating the connection with forward stochastic LQ optimal control problems.
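For comparison, the Riccati equation of the associated forward stochastic LQ problem, in standard notation for a state equation dX = (AX + Bu)dt + (CX + Du)dW with weights (Q, S, R, G), is recalled below; the Riccati-type equation for the backward problem treated in the paper has a different form and is not reproduced here.

```latex
% Riccati equation of the forward stochastic LQ problem (standard notation).
\begin{equation}
  \dot{P} + PA + A^{\top}P + C^{\top}PC + Q
  - (PB + C^{\top}PD + S^{\top})(R + D^{\top}PD)^{-1}(B^{\top}P + D^{\top}PC + S) = 0,
  \qquad P(T) = G.
\end{equation}
```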
This manuscript presents an algorithm for approximating nonlinear high-order control-affine dynamical systems that uses the controlled trajectories as the central unit of information. The fundamental basis elements of the approximation are higher-order control occupation kernels, which represent, in a vector-valued reproducing kernel Hilbert space, iterated integration after multiplication by a given controller. In a regularized regression setting, the representer theorem expresses the unique optimizer of the resulting optimization problem as a linear combination of these occupation kernels, converting an infinite-dimensional optimization problem into a finite-dimensional one. Interestingly, the vector-valued structure of the Hilbert space allows for the simultaneous approximation of the drift and control-effectiveness components of the control-affine system. Several experiments are performed to demonstrate the effectiveness of the approach.
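The representer-theorem step can be illustrated with the much simpler scalar-valued kernel ridge regression below; the paper itself works in a vector-valued RKHS whose basis elements are higher-order control occupation kernels built from the controlled trajectories, and none of the names in this sketch come from the paper.

```python
# Simplified, scalar-valued analogue of the representer-theorem step:
# ordinary kernel ridge regression with a Gaussian kernel.
import numpy as np

def gaussian_kernel(X, Y, width=1.0):
    # Gram matrix of the Gaussian kernel between two point sets.
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * width ** 2))

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(50, 2))                  # sample points (stand-in for trajectory data)
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(50)   # observed values

lam = 1e-3
G = gaussian_kernel(X, X)
alpha = np.linalg.solve(G + lam * np.eye(len(X)), y)  # finite-dimensional problem via the representer theorem

predict = lambda Xq: gaussian_kernel(Xq, X) @ alpha   # minimizer is a linear combination of kernel sections
```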
This paper presents a new fast and robust algorithm that provides fuel-optimal impulsive control input sequences that drive a linear time-varying system to a desired state at a specified time. The algorithm is applicable to a broad class of problems in which the cost is expressed as a time-varying norm-like function of the control input, enabling the inclusion of complex operational constraints in the control planning problem. First, it is shown that the reachable sets for this problem have the same properties as those in prior works using constant cost functions, enabling the use of existing algorithms in conjunction with newly derived contact and support functions. By reformulating the optimal control problem as a semi-infinite convex program, it is also demonstrated that the time-invariant component of the commonly studied primer vector is an outward normal vector to the reachable set at the target state. Using this formulation, a fast and robust algorithm that provides globally optimal impulsive control input sequences is proposed. The algorithm iteratively refines estimates of an outward normal vector to the reachable set at the target state and a minimal set of control input times until the optimality criteria are satisfied to within a user-specified tolerance. Next, optimal control inputs are computed by solving a quadratic program. The algorithm is validated through simulations of challenging example problems based on the recently proposed Miniaturized Distributed Occulter/Telescope small satellite mission, which demonstrate that the proposed algorithm converges several times faster than comparable algorithms in the literature.
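As a rough illustration of the final step only, the sketch below fixes a set of candidate impulse times for a toy one-dimensional double integrator and minimizes total delta-v subject to reaching the target state; the reachable-set machinery, the time-varying cost, and the primer-vector iteration of the actual algorithm are not reproduced, and every name in the code is hypothetical.

```python
# Toy fixed-time impulsive control problem: minimize total |delta-v| subject to
# hitting a target state, posed as a small convex program.
import numpy as np
import cvxpy as cp

def stm(dt):
    # State-transition matrix of a 1-D double integrator [position; velocity].
    return np.array([[1.0, dt], [0.0, 1.0]])

t0, tf = 0.0, 10.0
times = [0.0, 4.0, 8.0]                       # candidate impulse times, assumed already chosen
B = np.array([0.0, 1.0])                      # impulses change velocity only
x0 = np.array([0.0, 0.0])
x_target = np.array([5.0, 0.0])

M = np.column_stack([stm(tf - ti) @ B for ti in times])  # maps impulses to the final state
dv = cp.Variable(len(times))                  # signed impulse magnitudes
prob = cp.Problem(cp.Minimize(cp.norm(dv, 1)),           # fuel = total |delta-v|
                  [stm(tf - t0) @ x0 + M @ dv == x_target])
prob.solve()
```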