This paper introduces a framework for solving time-autonomous nonlinear infinite-horizon optimal control problems under the assumption that all minimizers satisfy Pontryagin's necessary optimality conditions. Specifically, we use methods from symplectic geometry to analyze the eigenvalues of a Koopman operator that lifts Pontryagin's differential equation into a suitably defined infinite-dimensional symplectic space. This has the advantage that methods from spectral analysis can be used to characterize globally optimal control laws. A numerical method for constructing optimal feedback laws for nonlinear systems is then obtained by computing the eigenvalues and eigenvectors of a matrix obtained by projecting the Pontryagin-Koopman operator onto a finite-dimensional space. We illustrate the effectiveness of this approach by computing accurate approximations of the optimal nonlinear feedback law for a Van der Pol control system, which cannot be stabilized by a linear control law.
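As a minimal illustration of the spectral viewpoint, the sketch below uses a generic EDMD-style projection (not the paper's Pontryagin-Koopman construction) to recover Koopman eigenvalues from snapshot data. The system matrix, dictionary, and sample sizes are assumed for the example; for linear dynamics with linear observables, the projected Koopman matrix reproduces the system matrix, so its eigenvalues are recovered exactly.

```python
import numpy as np

# Illustrative EDMD sketch on an assumed linear system x -> A x, where the
# Koopman eigenvalues on linear observables coincide with the eigenvalues of A.
rng = np.random.default_rng(0)
A = np.array([[0.9, 0.1],
              [0.0, 0.8]])                  # stable linear dynamics (assumed example)

X = rng.standard_normal((2, 200))           # snapshot states
Y = A @ X                                   # successor states

def psi(x):
    """Dictionary of observables: here just the linear monomials x1, x2."""
    return x                                # identity dictionary suffices for a linear system

PX, PY = psi(X), psi(Y)
K = PY @ np.linalg.pinv(PX)                 # Koopman operator projected onto the dictionary
eigvals = np.sort(np.linalg.eigvals(K).real)
print(eigvals)                              # approximately the eigenvalues of A: 0.8 and 0.9
```

For nonlinear systems one would enlarge the dictionary (e.g., with polynomial observables); the projection step stays the same.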
In many applications, and in systems/synthetic biology in particular, it is desirable to compute control policies that force the trajectory of a bistable system from one equilibrium (the initial point) to another equilibrium (the target point), that is, to solve the switching problem. It was recently shown that, for monotone bistable systems, this problem admits easy-to-implement open-loop solutions in terms of temporal pulses (i.e., step functions of fixed length and fixed magnitude). In this paper, we develop this idea further and formulate the problem of convergence to an equilibrium from an arbitrary initial point. We show that, for monotone systems, this problem can be reduced to a static optimization problem. Allowing an arbitrary initial state makes it possible to build closed-loop, event-based, or open-loop policies for the switching/convergence problems. In our derivations we exploit the Koopman operator, which offers a linear infinite-dimensional representation of an autonomous nonlinear system. One of the main advantages of the Koopman operator framework is the powerful computational machinery developed for it. Besides admitting numerical solutions, the switching/convergence problem can also serve as a building block for solving more complicated control problems and can potentially be applied to non-monotone systems. We illustrate this point on the problem of synchronizing cardiac cells by defibrillation. Our approach can potentially be extended to other parametrizations of control signals, since the only fundamental limitation is that the control signal is applied over a finite time.
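A minimal sketch of the temporal-pulse idea on an assumed toy model (not the paper's example): the scalar system dx/dt = x - x^3 + u is monotone in u and bistable with stable equilibria at -1 and +1. A single pulse of fixed magnitude and length drives the state out of the basin of -1, after which the uncontrolled dynamics converge to +1.

```python
import numpy as np

def simulate(x0, pulse_mag, pulse_len, T=20.0, dt=1e-3):
    """Forward-Euler simulation of dx/dt = x - x^3 + u under a temporal pulse.

    The pulse is the open-loop control: u = pulse_mag for t < pulse_len, else 0.
    All parameter values below are illustrative choices.
    """
    x, t = x0, 0.0
    while t < T:
        u = pulse_mag if t < pulse_len else 0.0
        x += dt * (x - x**3 + u)
        t += dt
    return x

# Start at the equilibrium -1; the pulse switches the system to the basin of +1.
x_final = simulate(x0=-1.0, pulse_mag=1.5, pulse_len=2.0)
print(x_final)   # converges near the target equilibrium +1
```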
This paper addresses the problem of control synthesis for nonlinear optimal control problems in the presence of state and input constraints. The presented approach relies upon transforming the given problem into an infinite-dimensional linear program over the space of measures. To approximate this infinite-dimensional program, a sequence of semidefinite programs (SDPs) is formulated in the case of polynomial cost and dynamics with semi-algebraic state constraints and bounded input constraints. A method to extract a polynomial control function from each SDP is also given. This paper proves that the controllers synthesized from these SDPs generate a sequence of values that converges from below to the value of the original optimal control problem. In contrast to existing approaches, the presented method does not assume that the optimal control is continuous while still proving that the sequence of approximations is optimal. Moreover, the sequence of controllers synthesized using the presented approach is proven to converge to the true optimal control. The performance of the presented method is demonstrated on three examples.
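The infinite-dimensional linear program referred to here is typically stated in the weak occupation-measure form. A generic sketch, with notation assumed rather than taken from the paper (running cost $\ell$, dynamics $f$, initial state $x_0$, test functions $v$):

```latex
\min_{\mu,\,\mu_T \ge 0} \; \int \ell \,\mathrm{d}\mu
\quad \text{s.t.} \quad
\int v(T,x)\,\mathrm{d}\mu_T(x) - v(0,x_0)
 = \int \Big( \frac{\partial v}{\partial t}
 + \nabla_x v \cdot f(t,x,u) \Big)\,\mathrm{d}\mu(t,x,u)
\qquad \forall\, v \in C^1,
```

where $\mu$ is an occupation measure on time-state-input space and $\mu_T$ a terminal measure. Truncating the corresponding moment conditions at increasing polynomial degrees yields the hierarchy of SDPs described in the abstract.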
Control of complex systems involves both system identification and controller design. Deep neural networks have proven successful in many identification tasks; from a model-based control perspective, however, these networks are difficult to work with because they are typically nonlinear and nonconvex. Many systems are therefore still identified and controlled with simple linear models despite their poor representational capability. In this paper we bridge the gap between model accuracy and control tractability faced by neural networks by explicitly constructing networks that are convex with respect to their inputs. We show that these input convex networks can be trained to obtain accurate models of complex physical systems. In particular, we design input convex recurrent neural networks to capture the temporal behavior of dynamical systems. Optimal controllers can then be obtained by solving a convex model predictive control problem. Experimental results demonstrate the potential of the proposed input convex neural network based approach in a variety of control applications. In particular, we show that in the MuJoCo locomotion tasks we achieve over 10% higher performance using 5x less computation time compared with a state-of-the-art model-based reinforcement learning method, and in a building HVAC control example our method achieves up to 20% energy reduction compared with classic linear models.
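A minimal sketch of the input-convexity construction (all layer sizes and weights below are illustrative, not the paper's architecture): hidden-to-hidden weights are constrained to be non-negative and the activation is convex and non-decreasing, so the scalar output is a convex function of the input. The convexity can be checked numerically via the midpoint inequality.

```python
import numpy as np

rng = np.random.default_rng(1)
relu = lambda z: np.maximum(z, 0.0)   # convex and non-decreasing

# Non-negative hidden-to-hidden and output weights preserve convexity;
# direct input passthrough weights D0, D1 may have arbitrary sign.
W1 = np.abs(rng.standard_normal((8, 8)))
D0 = rng.standard_normal((8, 2))
D1 = rng.standard_normal((8, 2))
w_out = np.abs(rng.standard_normal(8))

def f(x):
    """Two-layer input convex network: scalar output, convex in x."""
    z1 = relu(D0 @ x)
    z2 = relu(W1 @ z1 + D1 @ x)
    return w_out @ z2

# Numerical convexity check: f((x+y)/2) <= (f(x) + f(y)) / 2
x, y = rng.standard_normal(2), rng.standard_normal(2)
assert f(0.5 * (x + y)) <= 0.5 * (f(x) + f(y)) + 1e-9
```

Because the network is convex in its input, a model predictive control problem that optimizes over the input (with a convex cost) remains a convex program.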
A mean-field selective optimal control problem of multipopulation dynamics via transient leadership is considered. The agents in the system are described by their spatial position and their probability of belonging to a certain population. The dynamics in the control problem are characterized by the presence of an activation function which tunes the control on each agent according to its population membership, which, in turn, evolves according to a Markov-type jump process. In this way, a hypothetical policy maker can select a restricted pool of agents to act upon based, for instance, on their time-dependent influence on the rest of the population. A finite-particle control problem is studied and its mean-field limit is identified via $\Gamma$-convergence, ensuring convergence of optimal controls. The dynamics of the mean-field optimal control are governed by a continuity-type equation without diffusion. Specific applications in the context of opinion dynamics are discussed with some numerical experiments.
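Schematically, a continuity-type equation without diffusion of the kind mentioned here transports the agent distribution $\mu_t$ along a velocity field; in this generic sketch (notation assumed, not the paper's), the field $v$ depends on the distribution itself and on the control $u_t$:

```latex
\partial_t \mu_t + \operatorname{div}\!\big( v(\cdot, \mu_t, u_t)\, \mu_t \big) = 0 .
```

The absence of a diffusion (Laplacian) term reflects the purely deterministic transport of the spatial component of the agents' state.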
In recent years, the success of the Koopman operator in dynamical systems analysis has also fueled the development of Koopman operator-based control frameworks. To preserve the relatively low data requirements of an approximation via Dynamic Mode Decomposition, a quantization approach was recently proposed in [Peitz & Klus, Automatica 106, 2019]. In this way, control of nonlinear dynamical systems can be realized by means of switched-systems techniques, using only a finite set of autonomous Koopman operator-based reduced models. These individual systems can be approximated very efficiently from data. The main idea is to transform a control system into a set of autonomous systems for which the optimal switching sequence has to be computed. In this article, we extend these results to continuous control inputs using relaxation. In this way, we combine the data efficiency of approximating a finite set of autonomous systems with the flexibility of continuous controls. We show that when using the Koopman generator, this relaxation, realized by linear interpolation between two operators, does not introduce any error for control-affine systems. This allows us to control high-dimensional nonlinear systems using bilinear, low-dimensional surrogate models. The efficiency of the proposed approach is demonstrated using several examples of increasing complexity, from the Duffing oscillator to the chaotic fluidic pinball.
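A minimal sketch of why the interpolation is exact for control-affine systems, on an assumed toy example (dx/dt = -x + u, dictionary {1, x, x^2}; none of these choices are taken from the article). The Koopman generator acts on an observable as (L_u psi)(x) = psi'(x)(-x + u), which is affine in u, so the generator matrix for an intermediate input is exactly the linear interpolation of the generator matrices for two fixed inputs.

```python
import numpy as np

def generator(u):
    """Koopman generator of dx/dt = -x + u on the dictionary {1, x, x^2}.

    Columns are the images of 1, x, x^2 expressed in that dictionary:
      L 1   = 0
      L x   = -x + u
      L x^2 = 2x(-x + u) = 2u x - 2 x^2
    """
    return np.array([[0.0,  u,   0.0],
                     [0.0, -1.0, 2 * u],
                     [0.0,  0.0, -2.0]])

u1, u2, alpha = 0.0, 1.0, 0.3
L_interp = (1 - alpha) * generator(u1) + alpha * generator(u2)

# Exact for control-affine dynamics: interpolated generator equals the
# generator at the interpolated input.
assert np.allclose(L_interp, generator((1 - alpha) * u1 + alpha * u2))
```

The resulting surrogate is bilinear: the generator depends affinely on u while acting linearly on the lifted state, which is what makes low-dimensional optimal control tractable.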