As in almost every other branch of science, the major advances in data science and machine learning have also led to significant improvements in the modeling and simulation of nonlinear dynamical systems. It is nowadays possible to make accurate and efficient medium- to long-term predictions of highly complex systems such as the weather, the dynamics within a nuclear fusion reactor, the spread of diseases, or the stock market. In many cases, predictive methods are advertised as ultimately being useful for control, as the control of high-dimensional nonlinear systems is an engineering grand challenge with huge potential in areas such as clean and efficient energy production or the development of advanced medical devices. However, the question of how to use a predictive model for control is often left unanswered due to the associated challenges, namely a significantly higher system complexity, the requirement of much larger data sets, and an increased and often problem-specific modeling effort. To address these issues, we present a universal framework (which we call QuaSiModO: Quantization-Simulation-Modeling-Optimization) to transform arbitrary predictive models into control systems and use them for feedback control. The advantages of our approach are a linear increase in data requirements with the control dimension, performance guarantees that rely exclusively on the accuracy of the predictive model, and the need for only little prior knowledge of control theory to solve complex control problems. In particular, the latter point is of key importance for enabling a large number of researchers and practitioners to exploit the ever-increasing capabilities of predictive models for control in a straightforward and systematic fashion.
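To make the pipeline concrete, the following minimal Python sketch mimics the Quantization-Simulation-Modeling-Optimization steps on a toy scalar system: the control set is quantized, one simple surrogate model is fitted per quantized input, and a short-horizon search over the quantized inputs yields the feedback law. The toy dynamics, the linear surrogates, and the brute-force optimizer are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Illustrative QuaSiModO-style loop on a toy system (not the authors' code):
# 1) quantize the admissible control set, 2) simulate the plant for each fixed input,
# 3) fit one autonomous surrogate model per quantized input, 4) optimize over short
#    input sequences in receding-horizon fashion and apply the first entry as feedback.

def plant(x, u, dt=0.05):
    # toy nonlinear system standing in for an expensive simulator
    return x + dt * (-x**3 + u)

U_quant = np.array([-1.0, 0.0, 1.0])                 # 1) quantized control set

# 2) + 3) fit a linear surrogate x+ ≈ a_u * x + b_u for every fixed quantized input
surrogates = {}
for u in U_quant:
    X = np.random.uniform(-2.0, 2.0, 200)
    Y = np.array([plant(x, u) for x in X])
    A = np.vstack([X, np.ones_like(X)]).T
    surrogates[u] = np.linalg.lstsq(A, Y, rcond=None)[0]   # (a_u, b_u)

def predict(x, u):
    a, b = surrogates[u]
    return a * x + b

# 4) receding-horizon search over quantized input sequences (brute force, short horizon)
def qsmo_controller(x, horizon=3, x_ref=0.5):
    best_cost, best_u = np.inf, U_quant[0]
    for seq in np.array(np.meshgrid(*[U_quant] * horizon)).T.reshape(-1, horizon):
        xp, cost = x, 0.0
        for u in seq:
            xp = predict(xp, u)
            cost += (xp - x_ref) ** 2
        if cost < best_cost:
            best_cost, best_u = cost, seq[0]
    return best_u

x = -1.5
for _ in range(100):
    x = plant(x, qsmo_controller(x))                  # closed-loop feedback
print(f"final state: {x:.3f}")
```

In the framework itself, the modeling step may use arbitrary predictive models and the optimization step can be far more sophisticated; the sketch only reflects the overall structure.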
This manuscript presents an algorithm for approximating nonlinear, high-order, control-affine dynamical systems that leverages controlled trajectories as the central unit of information. The fundamental basis elements used in the approximation are higher-order control occupation kernels, which represent iterated integration after multiplication by a given controller in a vector-valued reproducing kernel Hilbert space. In a regularized regression setting, the unique optimizer is expressed as a linear combination of these occupation kernels, which, by the representer theorem, converts the infinite-dimensional optimization problem into a finite-dimensional one. Interestingly, the vector-valued structure of the Hilbert space allows for simultaneous approximation of the drift and control-effectiveness components of the control-affine system. Several experiments are performed to demonstrate the effectiveness of the approach.
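As a rough illustration of the regression structure (not the paper's vector-valued, control-affine formulation), the scalar sketch below fits only a drift from sampled trajectories: each trajectory contributes one occupation-kernel basis element, and the representer theorem reduces the fit to a linear solve whose size equals the number of trajectories. The Gaussian kernel, toy trajectories, and parameter values are assumptions.

```python
import numpy as np

# Drift-only occupation-kernel regression sketch; the control-effectiveness term and
# the vector-valued RKHS machinery of the paper are omitted here.

def rbf(x, y, mu=1.0):
    return np.exp(-np.sum((x - y) ** 2) / mu)

# trajectories of the toy system xdot = (-x1, 0), sampled on a uniform time grid
dt = 0.05
t = np.arange(0.0, 1.0, dt)
trajs = [np.stack([x0 * np.exp(-t), x0 * np.ones_like(t)], axis=1)
         for x0 in np.linspace(-1.0, 1.0, 5)]
n = len(trajs)

# Gram matrix of occupation kernels: G[i, j] ≈ ∫∫ k(gamma_i(t), gamma_j(s)) dt ds
G = np.array([[sum(rbf(p, q) for p in gi for q in gj) * dt * dt
               for gj in trajs] for gi in trajs])

# regression targets: state increment over each trajectory, gamma_i(T) - gamma_i(0)
Y = np.array([g[-1] - g[0] for g in trajs])

lam = 1e-6
alpha = np.linalg.solve(G + lam * np.eye(n), Y)    # representer-theorem coefficients

def drift_estimate(x):
    # f_hat(x) = sum_i alpha_i * ∫ k(x, gamma_i(t)) dt
    w = np.array([sum(rbf(x, p) for p in g) * dt for g in trajs])
    return w @ alpha

print(drift_estimate(np.array([0.5, 0.5])))        # true drift at (0.5, 0.5) is (-0.5, 0)
```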
Linear time-varying (LTV) systems are widely used for modeling real-world dynamical systems due to their generality and simplicity. Providing stability guarantees for LTV systems is one of the central problems in control theory. However, existing approaches that guarantee stability typically lead to significantly sub-optimal cumulative control cost in online settings where only current or short-term system information is available. In this work, we propose an efficient online control algorithm, COvariance Constrained Online Linear Quadratic (COCO-LQ) control, that guarantees input-to-state stability for a large class of LTV systems while also minimizing the control cost. The proposed method incorporates a state covariance constraint into the semi-definite programming (SDP) formulation of the LQ optimal controller. We empirically demonstrate the performance of COCO-LQ in both synthetic experiments and a power system frequency control example.
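For intuition, here is a small cvxpy sketch of the standard SDP form of the average-cost LQ controller with an added bound on the state covariance block. The specific constraint used by COCO-LQ may differ; the bound Sigma_xx ⪯ nu*I below is only an illustrative stand-in, and the system matrices are made up.

```python
import numpy as np
import cvxpy as cp

# SDP form of the LQ controller over the joint state-input covariance, plus an
# illustrative state covariance bound (a stand-in for COCO-LQ's constraint).

n, m = 2, 1
A = np.array([[1.0, 0.1], [0.0, 1.05]])       # current (possibly time-varying) dynamics
B = np.array([[0.0], [0.1]])
Q, R = np.eye(n), np.eye(m)
W = 0.01 * np.eye(n)                          # process noise covariance
nu = 5.0                                      # covariance bound (tuning parameter)

Sigma = cp.Variable((n + m, n + m), PSD=True) # joint covariance of (x, u)
Sxx = Sigma[:n, :n]
AB = np.hstack([A, B])
C = np.block([[Q, np.zeros((n, m))], [np.zeros((m, n)), R]])

constraints = [
    Sxx == AB @ Sigma @ AB.T + W,             # stationarity of the closed loop
    Sxx << nu * np.eye(n),                    # state covariance constraint
]
cp.Problem(cp.Minimize(cp.trace(C @ Sigma)), constraints).solve()

# recover the feedback gain u = K x from the optimal joint covariance
K = Sigma.value[n:, :n] @ np.linalg.inv(Sigma.value[:n, :n])
print("K =", K)
```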
The widespread adoption of nonlinear Receding Horizon Control (RHC) strategies by industry has led to more than 30 years of intense research efforts to provide stability guarantees for these methods. However, current theoretical guarantees require that each (generally nonconvex) planning problem can be solved to (approximate) global optimality, which is an unrealistic requirement for the derivative-based local optimization methods generally used in practical implementations of RHC. This paper takes the first step towards understanding stability guarantees for nonlinear RHC when the inner planning problem is solved to first-order stationary points, but not necessarily global optima. Special attention is given to feedback linearizable systems, and a mixture of positive and negative results is provided. We establish that, under certain strong conditions, first-order solutions to RHC exponentially stabilize linearizable systems. Crucially, this guarantee requires that the state costs applied to the planning problems are, in a certain sense, 'compatible' with the global geometry of the system, and a simple counterexample demonstrates the necessity of this condition. These results highlight the need to rethink the role of global geometry in the context of optimization-based control.
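The following Python sketch illustrates the setting being analyzed, not the paper's results: a receding horizon loop in which each planning problem is solved only to approximate first-order stationarity by plain (finite-difference) gradient descent with warm starts. The dynamics, costs, horizon, and step sizes are illustrative choices.

```python
import numpy as np

# RHC with local (first-order) planning: gradient descent on the planning cost is
# stopped at an approximate stationary point; only the first input is applied.

dt, H = 0.05, 15

def step(x, u):
    th, om = x
    return np.array([th + dt * om, om + dt * (np.sin(th) + u)])

def rollout_cost(x0, U):
    x, c = x0, 0.0
    for u in U:
        x = step(x, u)
        c += x @ x + 0.1 * u * u            # quadratic state/input cost
    return c

def plan(x0, U_init, iters=50, lr=0.05, eps=1e-3):
    U = U_init.copy()
    for _ in range(iters):
        g = np.zeros_like(U)                # finite-difference gradient of the cost
        for k in range(H):
            Up = U.copy(); Up[k] += eps
            g[k] = (rollout_cost(x0, Up) - rollout_cost(x0, U)) / eps
        U -= lr * g
        if np.linalg.norm(g) < 1e-4:        # first-order stationarity reached
            break
    return U

x = np.array([2.5, 0.0])
U = np.zeros(H)
for _ in range(60):
    U = plan(x, U)                          # warm-started local planning
    x = step(x, U[0])                       # apply first input, shift the horizon
    U = np.append(U[1:], 0.0)
print("final state:", x)
```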
We study safe, data-driven control of (Markov) jump linear systems with unknown transition probabilities, where both the discrete mode and the continuous state are to be inferred from output measurements. To this end, we develop a receding horizon estimator which uniquely identifies a sub-sequence of past mode transitions and the corresponding continuous state, allowing for arbitrary switching behavior. Unlike traditional approaches to mode estimation, we do not require an offline exhaustive search over mode sequences to determine the size of the observation window, but rather select it online. If the system is weakly mode observable, the window size will be upper bounded, leading to a finite-memory observer. We integrate the estimation procedure with a simple distributionally robust controller, which hedges against misestimation of the transition probabilities due to finite sample sizes. As additional mode transitions are observed, the ambiguity sets are updated, resulting in continual improvement of the control performance. The practical applicability of the approach is illustrated on small numerical examples.
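As a toy illustration of the distributionally robust ingredient (not the paper's exact construction), the sketch below keeps empirical transition counts, shrinks an L1 ambiguity radius as more mode switches are observed, and evaluates a worst-case expected mode cost over that ambiguity set. The radius formula and the cost evaluation are illustrative assumptions.

```python
import numpy as np

# Ambiguity-set bookkeeping: empirical transition probabilities plus an L1 ball
# whose radius shrinks with the number of observed transitions from each mode.

n_modes = 3
counts = np.ones((n_modes, n_modes))          # transition counts (with a uniform prior)

def record_transition(i, j):
    counts[i, j] += 1

def ambiguity_radius(i, delta=0.05):
    # L1 concentration-style radius around the empirical estimate (illustrative)
    N = counts[i].sum()
    return np.sqrt(2.0 * np.log(2 ** n_modes / delta) / N)

def worst_case_expected_cost(i, mode_costs):
    # maximize p @ mode_costs over {p in simplex : ||p - p_hat||_1 <= r}
    p_hat = counts[i] / counts[i].sum()
    budget = ambiguity_radius(i) / 2.0        # total probability mass that may be moved
    p = p_hat.copy()
    hi = np.argmax(mode_costs)
    for j in np.argsort(mode_costs):          # drain cheap modes toward the costliest one
        if j == hi or budget <= 0.0:
            continue
        moved = min(p[j], budget)
        p[j] -= moved
        p[hi] += moved
        budget -= moved
    return p @ mode_costs

# as new transitions are observed, the ambiguity set tightens
for (i, j) in [(0, 1), (1, 1), (1, 2), (2, 0), (0, 1)]:
    record_transition(i, j)
print(worst_case_expected_cost(0, np.array([1.0, 5.0, 2.0])))
```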
The use of persistently exciting data has recently been popularized in the context of data-driven analysis and control. Such data have been used to assess system theoretic properties and to construct control laws, without using a system model. Persistency of excitation is a strong condition that also allows unique identification of the underlying dynamical system from the data within a given model class. In this paper, we develop a new framework in order to work with data that are not necessarily persistently exciting. Within this framework, we investigate necessary and sufficient conditions on the informativity of data for several data-driven analysis and control problems. For certain analysis and design problems, our results reveal that persistency of excitation is not necessary. In fact, in these cases data-driven analysis/control is possible while the combination of (unique) system identification and model-based control is not. For certain other control problems, our results justify the use of persistently exciting data as data-driven control is possible only with data that are informative for system identification.
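For reference, the classical persistency-of-excitation condition that the informativity framework relaxes can be checked directly from data: an input sequence is persistently exciting of order L exactly when its depth-L Hankel matrix has full row rank. A minimal check in Python (the data below are made up):

```python
import numpy as np

# Persistency-of-excitation test via the Hankel matrix rank condition.

def hankel(u, L):
    # u: (T, m) array of input samples; returns the (m*L) x (T-L+1) Hankel matrix
    T, m = u.shape
    return np.vstack([u[i:T - L + 1 + i].T for i in range(L)])

def persistently_exciting(u, L):
    H = hankel(u, L)
    return np.linalg.matrix_rank(H) == H.shape[0]

rng = np.random.default_rng(0)
u_rand = rng.standard_normal((50, 1))       # generic random input: PE of order 10
u_const = np.ones((50, 1))                  # constant input: not PE beyond order 1
print(persistently_exciting(u_rand, 10), persistently_exciting(u_const, 2))
```

The informativity results in the paper ask weaker, problem-specific questions of the data; this rank test is only the baseline condition they relax.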