We develop a discrete-time optimal control framework for systems evolving on Lie groups. Our work generalizes the original Differential Dynamic Programming method by employing a coordinate-free, Lie-theoretic approach in its derivation. A key element is the use of quadratic expansion schemes for cost functions and dynamics defined on manifolds. The obtained algorithm iteratively optimizes local approximations of the control problem until it reaches a (sub)optimal solution. On the theoretical side, we also study the conditions under which convergence is attained. Details about the behavior and implementation of our method are provided through a simulated example on T SO(3).
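For orientation only, here is a minimal Euclidean sketch of the classical DDP backward pass (iLQR-style, i.e. dropping second-order dynamics terms) that the Lie-group construction above generalizes; the array names (fx, fu, lx, lu, ...) and the use of NumPy are illustrative assumptions, not the paper's implementation.

import numpy as np

def ddp_backward_pass(fx, fu, lx, lu, lxx, luu, lux, Vx_T, Vxx_T):
    # fx[t], fu[t]: dynamics Jacobians; lx, lu, lxx, luu, lux: cost derivatives;
    # Vx_T, Vxx_T: gradient and Hessian of the terminal cost.
    T = len(fx)
    Vx, Vxx = Vx_T, Vxx_T
    k, K = [None] * T, [None] * T
    for t in reversed(range(T)):
        # Quadratic model of the stage Q-function around the current trajectory
        Qx  = lx[t]  + fx[t].T @ Vx
        Qu  = lu[t]  + fu[t].T @ Vx
        Qxx = lxx[t] + fx[t].T @ Vxx @ fx[t]
        Quu = luu[t] + fu[t].T @ Vxx @ fu[t]
        Qux = lux[t] + fu[t].T @ Vxx @ fx[t]
        Quu_inv = np.linalg.inv(Quu)      # assumes Quu > 0; regularize otherwise
        k[t] = -Quu_inv @ Qu              # feedforward correction
        K[t] = -Quu_inv @ Qux             # feedback gain
        # Propagate the value-function expansion to the previous stage
        Vx  = Qx  + K[t].T @ Quu @ k[t] + K[t].T @ Qu  + Qux.T @ k[t]
        Vxx = Qxx + K[t].T @ Quu @ K[t] + K[t].T @ Qux + Qux.T @ K[t]
    return k, K

A forward pass then rolls out the updated controls and the two passes alternate until the local models stop improving the cost; the contribution above is to carry out the analogous quadratic expansions intrinsically on the group rather than in local coordinates.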
We consider the optimal control problem of a general nonlinear spatio-temporal system described by Partial Differential Equations (PDEs). Theory and algorithms for the control of spatio-temporal systems are of rising interest in the automatic control community, and such systems exhibit numerous challenging characteristics from a control standpoint. Recent methods focus on finite-dimensional optimization techniques applied to a discretized, finite-dimensional ODE approximation of the infinite-dimensional PDE system. In this paper, we derive a differential dynamic programming (DDP) framework for distributed and boundary control of spatio-temporal systems in infinite dimensions that is shown to generalize both the spatio-temporal LQR solution and modern finite-dimensional DDP frameworks. We analyze the convergence behavior and provide a proof of global convergence for the resulting system of continuous-time forward-backward equations. We explore and develop numerical approaches to handle the sensitivities that arise during implementation, and apply the resulting STDDP algorithm to linear and nonlinear spatio-temporal PDE systems. Our framework is derived in infinite-dimensional Hilbert spaces and represents a discretization-agnostic framework for the control of nonlinear spatio-temporal PDE systems.
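To make the discretization-based baseline mentioned above concrete, the following is a small, hedged sketch (illustrative sizes and coefficients, not taken from the paper) of the method-of-lines ODE surrogate of a 1-D heat equation that finite-dimensional approaches would then optimize, and that a discretization-agnostic Hilbert-space formulation avoids committing to a priori.

import numpy as np

# u_t = alpha * u_xx + v on (0, 1), zero Dirichlet boundary values,
# discretized on N interior grid points: du/dt = A u + v.
N, alpha = 50, 0.1
dx = 1.0 / (N + 1)
A = (alpha / dx**2) * (np.diag(-2.0 * np.ones(N))
                       + np.diag(np.ones(N - 1), 1)
                       + np.diag(np.ones(N - 1), -1))

def rhs(u, v):
    # v is the discretized distributed control acting in the interior
    return A @ u + v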
This paper discusses the odds problem, proposed by Bruss in 2000, and its variants. A recurrence relation called a dynamic programming (DP) equation is used to find an optimal stopping policy for the odds problem and its variants. In 2013, Buchbinder, Jain, and Singh proposed a linear programming (LP) formulation for finding an optimal stopping policy for the classical secretary problem, which is a special case of the odds problem. Their linear program, which maximizes the probability of a win, differs from the DP equations that have long been known. This paper shows that an ordinary DP equation is a modification of the dual of a linear programming problem that includes the LP formulation proposed by Buchbinder, Jain, and Singh.
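As background (a sketch of the classical result, not of the LP/DP duality argument above): Bruss's odds algorithm solves the odds problem directly by summing the odds of the independent events from the last one backwards and stopping on the first success from the index where that sum reaches one.

def odds_algorithm(p):
    # p[j]: success probability of the j-th independent event (p[j] < 1 assumed).
    # Returns the threshold index s (0-based) and the win probability of the
    # rule "stop at the first success with index >= s", which maximizes the
    # probability of stopping on the last success.
    n = len(p)
    q = [1.0 - pj for pj in p]
    r = [pj / qj for pj, qj in zip(p, q)]   # odds of each event
    R, s = 0.0, 0
    for j in range(n - 1, -1, -1):          # accumulate odds from the back
        R += r[j]
        if R >= 1.0:
            s = j
            break
    Q = 1.0
    for j in range(s, n):
        Q *= q[j]
    return s, Q * sum(r[s:])                # win probability Q_s * R_s

Applied to the secretary problem, where the j-th candidate is the best seen so far with probability 1/j, the odds sum reaches one around index n/e, recovering the classical threshold rule.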
In contrast to the Euler-Poincaré reduction of geodesic flows of left- or right-invariant metrics on Lie groups to the corresponding Lie algebra (or its dual), one can consider the reduction of the geodesic flows to the group itself. The reduced vector field has a remarkable hydrodynamic interpretation: it is a velocity field for a stationary flow of an ideal fluid. Right- or left-invariant symmetry fields of the reduced field define vortex manifolds for such flows. Consider now a mechanical system whose configuration space is a Lie group and whose Lagrangian is invariant under left translations on that group, and assume that the mass geometry of the system may change under the action of internal control forces. Such a system can also be reduced to the Lie group. With no controls, this mechanical system describes a geodesic flow of the left-invariant metric given by the Lagrangian, and thus its reduced flow is a stationary ideal fluid flow on the Lie group. The standard control problem for such a system is to find the conditions under which the system can be brought from any initial position in the configuration space to another preassigned position by changing its mass geometry. We show that under these conditions, by changing the mass geometry, one can also bring one vortex manifold to any other preassigned vortex manifold.
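A standard worked instance of the reduction mentioned above (stated for orientation only, not as part of this paper's setting) is the free rigid body: Euler-Poincaré reduction of the left-invariant geodesic flow on SO(3) to the dual Lie algebra so(3)* ≅ R^3 gives Euler's equations
\[
\dot{\mathbf M} = \mathbf M \times \boldsymbol{\Omega}, \qquad \mathbf M = \mathbb{I}\,\boldsymbol{\Omega},
\]
where \(\mathbb{I}\) is the inertia tensor defined by the left-invariant metric (Lagrangian \(L = \tfrac12\,\boldsymbol{\Omega}^{\top}\mathbb{I}\,\boldsymbol{\Omega}\)), \(\boldsymbol{\Omega}\) the body angular velocity and \(\mathbf M\) the body angular momentum; the construction above instead reduces such geodesic flows to the group itself, where they acquire the stationary ideal-fluid interpretation.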
In this paper we give a geometrical framework for the design of observers on finite-dimensional Lie groups for systems which possess some specific symmetries. The design and the error equation (between the true and the estimated state) are explicit and intrinsic. We also consider a particular case: left-invariant systems on Lie groups with right-equivariant output. The theory yields a class of observers such that the error equation is autonomous. The observers converge locally around any trajectory, and the global behavior is independent of the trajectory, which is reminiscent of the linear stationary case.
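As a hedged illustration of the intrinsic error such designs work with (conventions vary across references, and this is not a statement of the paper's exact construction): for a left-invariant system \(\dot X = X\,\upsilon(t)\) on a matrix Lie group, the state error between the estimate \(\hat X\) and the true state \(X\) is taken group-valued,
\[
\eta = \hat X^{-1} X \quad \text{(or } \eta = X \hat X^{-1} \text{, depending on convention)},
\]
and the output-based correction term of the observer is chosen so that the evolution of \(\eta\) depends on \(\eta\) alone rather than on the particular trajectory \(t \mapsto X(t)\); this is the sense in which the error equation is autonomous and the global behavior trajectory-independent.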
We consider the revenue management problem of finding profit-maximising prices for delivery time slots in the context of attended home delivery. This multi-stage optimal control problem admits a dynamic programming formulation that is intractable for realistic problem sizes due to the so-called curse of dimensionality. Therefore, we study three approximate dynamic programming algorithms both from a control-theoretical perspective and in a parametric numerical case study. Our numerical analysis is based on real-world data, from which we generate multiple scenarios to stress-test the robustness of the pricing policies to errors in model parameter estimates. Our theoretical analysis and numerical benchmark tests show that one of these algorithms, namely gradient-bounded dynamic programming, dominates the others with respect to computation time and profit-generation capabilities of the delivery slot pricing policies that it generates. Finally, we show that uncertainty in the estimates of the model parameters further increases the profit-generation dominance of this approach.
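For orientation, the following toy backward-induction DP prices a single delivery slot with finite capacity; it is not the gradient-bounded algorithm studied above, and the horizon, capacity, price grid and demand model are made-up placeholders, but it shows the exact formulation whose state space explodes once many slots must be priced jointly.

import numpy as np

T, C = 30, 10                    # booking horizon, slot capacity (hypothetical)
prices = [5.0, 8.0, 12.0]        # admissible price levels (hypothetical)

def purchase_prob(price):        # hypothetical demand model
    return 0.5 * np.exp(-price / 10.0)

V = np.zeros((T + 1, C + 1))     # V[t, c]: expected revenue-to-go
policy = np.zeros((T, C + 1))    # price to post at time t with c units left
for t in range(T - 1, -1, -1):
    for c in range(C + 1):
        if c == 0:
            V[t, c] = V[t + 1, c]            # sold out: nothing more to earn
            continue
        vals = [purchase_prob(p) * (p + V[t + 1, c - 1])
                + (1.0 - purchase_prob(p)) * V[t + 1, c] for p in prices]
        best = int(np.argmax(vals))
        V[t, c], policy[t, c] = vals[best], prices[best]

Approximate dynamic programming methods such as those benchmarked above replace the exact value table V with a parametric or bounded approximation so that the recursion stays tractable when the state is a whole vector of remaining slot capacities.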