In many applications, and in systems/synthetic biology in particular, it is desirable to compute control policies that force the trajectory of a bistable system from one equilibrium (the initial point) to another equilibrium (the target point), or in other words to solve the switching problem. It was recently shown that, for monotone bistable systems, this problem admits easy-to-implement open-loop solutions in terms of temporal pulses (i.e., step functions of fixed length and fixed magnitude). In this paper, we develop this idea further and formulate a problem of convergence to an equilibrium from an arbitrary initial point. We show that, for monotone systems, this problem can be reduced to a static optimization problem. Allowing an arbitrary state as the initial point makes it possible to build closed-loop, event-based, or open-loop policies for the switching/convergence problems. In our derivations we exploit the Koopman operator, which offers a linear infinite-dimensional representation of an autonomous nonlinear system. One of the main advantages of using the Koopman operator is the powerful computational toolbox developed for this framework. Besides admitting numerical solutions, the switching/convergence problem can also serve as a building block for solving more complicated control problems and can potentially be applied to non-monotone systems. We illustrate this point with the problem of synchronizing cardiac cells by defibrillation. Our approach can potentially be extended to problems with different parametrizations of the control signals, since the only fundamental limitation is the finite-time application of the control signal.
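As a minimal numerical sketch of the pulse-based switching idea, consider the generic scalar bistable system $\dot{x} = x - x^3 + u$ with stable equilibria near $-1$ and $+1$ (an assumed toy model, not the system studied in the paper); the pulse magnitude mu and length tau below are placeholder values:

from scipy.integrate import solve_ivp

def pulsed_dynamics(t, x, mu, tau):
    # open-loop temporal pulse: constant magnitude mu for t < tau, zero afterwards
    u = mu if t < tau else 0.0
    return x - x**3 + u

mu, tau = 1.0, 2.0  # assumed pulse parameters
sol = solve_ivp(pulsed_dynamics, (0.0, 20.0), [-1.0], args=(mu, tau), max_step=0.05)
print("final state:", sol.y[0, -1])  # near +1 if the pulse forced a switch of equilibrium

Starting in the basin of $-1$, the pulse drives the state across the unstable equilibrium at $0$, after which the uncontrolled dynamics converge to $+1$.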
This paper introduces a framework for solving time-autonomous nonlinear infinite-horizon optimal control problems under the assumption that all minimizers satisfy Pontryagin's necessary optimality conditions. In detail, we use methods from the field of symplectic geometry to analyze the eigenvalues of a Koopman operator that lifts Pontryagin's differential equation into a suitably defined infinite-dimensional symplectic space. This has the advantage that methods from the field of spectral analysis can be used to characterize globally optimal control laws. A numerical method for constructing optimal feedback laws for nonlinear systems is then obtained by computing the eigenvalues and eigenvectors of a matrix that results from projecting the Pontryagin-Koopman operator onto a finite-dimensional space. We illustrate the effectiveness of this approach by computing accurate approximations of the optimal nonlinear feedback law for a Van der Pol control system, which cannot be stabilized by a linear control law.
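To illustrate the projection-plus-spectral-analysis ingredient only, here is a generic EDMD-style sketch (not the authors' Pontryagin-Koopman lifting): a Koopman matrix is identified from snapshot pairs of the uncontrolled Van der Pol dynamics over an assumed monomial dictionary, and its eigenvalues are computed.

import numpy as np

def dictionary(X):
    # monomial observables up to degree two (an assumed dictionary choice)
    x1, x2 = X
    return np.vstack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])

rng = np.random.default_rng(0)
dt = 0.01
X = rng.uniform(-1.0, 1.0, (2, 500))                          # sampled states
Y = X + dt * np.vstack([X[1], (1 - X[0]**2) * X[1] - X[0]])   # one Euler step of the Van der Pol dynamics
K = dictionary(Y) @ np.linalg.pinv(dictionary(X))             # finite-dimensional Koopman approximation
eigvals, eigvecs = np.linalg.eig(K)
print(np.sort(np.abs(eigvals))[::-1])                         # dominant spectrum of the projected operator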
In the development of model predictive controllers for PDE-constrained problems, the use of reduced-order models is essential to enable real-time applicability. Besides local linearization approaches, Proper Orthogonal Decomposition (POD) has in the past been the most widely used technique for deriving such models. Owing to major advances in both the theory and the numerical approximation, a very promising alternative based on the Koopman operator has recently emerged. In this chapter, we present two control strategies for model predictive control of nonlinear PDEs using data-efficient approximations of the Koopman operator. In the first one, the dynamic control system is replaced by a small number of autonomous systems with different yet constant inputs. The control problem is consequently transformed into a switching problem. In the second approach, a bilinear surrogate model is obtained via linear interpolation between two of these autonomous systems. Using a recent convergence result for Extended Dynamic Mode Decomposition (EDMD), convergence to the true optimum can be proved. We study the properties of these two strategies with respect to solution quality, data requirements, and complexity of the resulting optimization problem, using the 1D Burgers equation and the 2D Navier-Stokes equations as examples. Finally, an extension for online adaptivity is presented.
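A minimal sketch of the second strategy (the bilinear surrogate), assuming two Koopman matrices K0 and K1 have already been identified, e.g. via EDMD, for the autonomous systems with constant inputs u = 0 and u = u_max; the random matrices below are placeholders for such data-driven models:

import numpy as np

n, u_max = 10, 1.0
rng = np.random.default_rng(1)
K0 = 0.90 * np.eye(n) + 0.01 * rng.standard_normal((n, n))   # placeholder surrogate for u = 0
K1 = 0.80 * np.eye(n) + 0.01 * rng.standard_normal((n, n))   # placeholder surrogate for u = u_max

def bilinear_step(z, u):
    # linear interpolation between the two autonomous models in the lifted space:
    # z_{k+1} = K0 z_k + (u/u_max) * (K1 - K0) z_k
    return K0 @ z + (u / u_max) * ((K1 - K0) @ z)

z = rng.standard_normal(n)     # lifted state (dictionary evaluated at the current PDE state)
print(bilinear_step(z, 0.5))   # prediction for an intermediate input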
The Koopman operator allows for handling nonlinear systems through a (globally) linear representation. In general, the operator is infinite-dimensional, necessitating finite approximations, for which there is no overarching framework. Although there are principled ways of learning such finite approximations, they are in many instances overlooked in favor of methods that are often ill-posed and unstructured. Moreover, Koopman operator theory has long-standing connections to known system-theoretic and dynamical-system notions that are not universally recognized. Given these realities, this work aims to bridge the gap between various concepts regarding both theory and tractable realizations. Firstly, we review data-driven representations (both unstructured and structured) for Koopman operator dynamical models, categorizing various existing methodologies and highlighting their differences. Furthermore, we provide concise insight into the paradigm's relation to system-theoretic notions and analyze the prospect of using the paradigm for modeling control systems. Additionally, we outline the current challenges and comment on future perspectives.
In this effort, a novel operator-theoretic framework is developed for the data-driven solution of optimal control problems. The developed methods focus on the use of trajectories (i.e., time series) as the fundamental unit of data for the resolution of optimal control problems in dynamical systems. Trajectory information from the dynamical systems is embedded in a reproducing kernel Hilbert space (RKHS) through what are called occupation kernels. The occupation kernels are tied to the dynamics of the system through the densely defined Liouville operator. The pairing of Liouville operators and occupation kernels allows nonlinear finite-dimensional optimal control problems to be lifted into infinite-dimensional linear programs over RKHSs.
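A brief numerical sketch of the occupation-kernel construction, assuming the standard definition $\Gamma_\gamma(x) = \int_0^T K(x,\gamma(t))\,dt$ for a Gaussian RKHS kernel $K$ (the trajectory and kernel width below are illustrative choices):

import numpy as np

def gauss_kernel(x, y, sigma=1.0):
    # Gaussian reproducing kernel on R^2
    return np.exp(-np.sum((x - y)**2) / (2.0 * sigma**2))

t = np.linspace(0.0, 5.0, 200)                     # sample times on [0, T]
gamma = np.stack([np.cos(t), np.sin(t)], axis=1)   # trajectory samples (the time-series data)

def occupation_kernel(x):
    # trapezoidal approximation of \int_0^T K(x, gamma(t)) dt
    vals = np.array([gauss_kernel(x, g) for g in gamma])
    dt = t[1] - t[0]
    return dt * (np.sum(vals) - 0.5 * (vals[0] + vals[-1]))

print(occupation_kernel(np.array([1.0, 0.0])))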
Methods for constructing causal linear models from nonlinear dynamical systems through lifting linearization, underpinned by Koopman operator theory and physical system modeling theory, are presented. Outputs of a nonlinear control system, called observables, may be functions of both state and input, $\phi(x,u)$. These input-dependent observables cannot be used for lifting the system because the state equations in the augmented space contain the time derivatives of the input and are therefore anticausal. Here, the mechanism that creates anticausal observables is examined, and two methods for solving the causality problem in lifting linearization are presented. The first method is to replace anticausal observables by their integral variables $\phi^*$ and lift the dynamics with $\phi^*$, so that the time derivative of $\phi^*$ does not include the time derivative of the input. The other method is to alter the original physical model by adding a small inertial element, or a small capacitive element, so that the system's causal relationship changes. These augmented dynamics alter the signal path from the input to the anticausal observable so that the observables no longer depend on the inputs. Numerical simulations validate the effectiveness of the methods.
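For a concrete (hypothetical) observable, take $\phi(x,u) = xu$. Its time derivative, $\dot{\phi} = \dot{x}u + x\dot{u}$, contains $\dot{u}$, so lifting with $\phi$ yields anticausal state equations. The first method instead lifts with the integral variable $\phi^*(t) = \int_0^t \phi(x(\tau),u(\tau))\,d\tau$, whose derivative $\dot{\phi}^* = \phi(x,u) = xu$ depends only on the current state and input, so no derivative of the input appears.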