In this paper, the classical problem of tracking and regulation is studied in a data-driven context. The endosystem is assumed to be an unknown system that is interconnected to a known exosystem that generates disturbances and reference signals. The problem is to design a regulator so that the output of the (unknown) endosystem tracks the reference signal, regardless of its initial state and the incoming disturbances. To this end, we assume that a set of input-state data on a finite time interval is available. We introduce the notion of data informativity for regulator design, and establish necessary and sufficient conditions for a given set of data to be informative. Formulas for suitable regulators are also given in terms of the data. Our results are illustrated by means of two extended examples.
The use of persistently exciting data has recently been popularized in the context of data-driven analysis and control. Such data have been used to assess system theoretic properties and to construct control laws, without using a system model. Persistency of excitation is a strong condition that also allows unique identification of the underlying dynamical system from the data within a given model class. In this paper, we develop a new framework in order to work with data that are not necessarily persistently exciting. Within this framework, we investigate necessary and sufficient conditions on the informativity of data for several data-driven analysis and control problems. For certain analysis and design problems, our results reveal that persistency of excitation is not necessary. In fact, in these cases data-driven analysis/control is possible while the combination of (unique) system identification and model-based control is not. For certain other control problems, our results justify the use of persistently exciting data as data-driven control is possible only with data that are informative for system identification.
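As a hedged illustration of the persistency-of-excitation condition discussed above (scalar-input case only; the sequences below are synthetic examples, not data from the paper), one can test whether the depth-L Hankel matrix built from an input sequence has full row rank:

```python
import numpy as np

def hankel(u, L):
    """Depth-L Hankel matrix of a scalar sequence u."""
    T = len(u)
    return np.array([u[i:i + T - L + 1] for i in range(L)])

def is_persistently_exciting(u, L):
    """A scalar sequence u is persistently exciting of order L iff
    its depth-L Hankel matrix has full row rank L."""
    return np.linalg.matrix_rank(hankel(np.asarray(u, float), L)) == L

rng = np.random.default_rng(0)
u_rich = rng.standard_normal(20)  # generic random input
u_poor = np.ones(20)              # constant input: only PE of order 1
```

A generic random input is persistently exciting of low orders, while a constant input fails the condition already at order 2 — which is exactly the kind of non-exciting data the informativity framework is designed to handle.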
This paper presents an approach to target tracking that is based on a variable-gain integrator and the Newton-Raphson method for finding zeros of a function. Its underlying idea is to determine the feedback law from measurements of the system's output and an estimate of its future state obtained via lookahead simulation. The resulting feedback law is generally nonlinear. We first apply the proposed approach to tracking a constant reference by the output of nonlinear memoryless plants. Then we extend it in a number of directions, including the tracking of time-varying reference signals by dynamic, possibly unstable systems. The approach is new, hence its analysis is preliminary, and theoretical results are derived for nonlinear memoryless plants and linear dynamic plants. However, the setting for the controller does not require the plant to be either linear or stable, and this is verified by simulation of an inverted pendulum tracking a time-varying signal. We also demonstrate results of laboratory experiments in which a platoon of mobile robots is controlled.
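A minimal sketch of the Newton-Raphson flow idea for a memoryless plant, integrated by forward Euler; the plant g(u) = u³ + u, the gain, and the reference below are hypothetical illustrations, not taken from the paper:

```python
def nr_tracking(g, dg, r, u0, alpha=1.0, dt=0.01, steps=2000):
    """Newton-Raphson flow controller for a memoryless plant y = g(u):
    du/dt = alpha * (r - g(u)) / g'(u), integrated by forward Euler.
    Any equilibrium satisfies g(u) = r, i.e. the output tracks r."""
    u = u0
    for _ in range(steps):
        u += dt * alpha * (r - g(u)) / dg(u)
    return u

# Hypothetical monotone plant y = u**3 + u; the exact solution of
# g(u) = 10 is u = 2.
u_star = nr_tracking(lambda u: u**3 + u, lambda u: 3 * u**2 + 1,
                     r=10.0, u0=1.0)
```

The gain alpha plays the role of the variable-gain integrator: along the flow the tracking error e = r - g(u) obeys de/dt = -alpha*e, so it decays exponentially regardless of the (monotone) plant nonlinearity.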
This paper proposes a data-driven framework to solve time-varying optimization problems associated with unknown linear dynamical systems. Making online control decisions to regulate a dynamical system to the solution of an optimization problem is a central goal in many modern engineering applications. Yet, the available methods critically rely on a precise knowledge of the system dynamics, thus mandating a preliminary system identification phase before a controller can be designed. In this work, we leverage results from behavioral theory to show that the steady-state transfer function of a linear system can be computed from data samples without any knowledge or estimation of the system model. We then use this data-driven representation to design a controller, inspired by a gradient-descent optimization method, that regulates the system to the solution of a convex optimization problem, without requiring any knowledge of the time-varying disturbances affecting the model equation. Results are tailored to cost functions satisfying the Polyak-Łojasiewicz inequality.
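The steady-state transfer function at z = 1 that is central here can be illustrated with a small numeric sketch; the system matrices below are made up for illustration, and unlike the paper's behavioral construction, this sketch uses the model on one side purely to verify the data-based estimate:

```python
import numpy as np

# Hypothetical stable system (illustration only): x+ = A x + B u, y = C x.
A = np.array([[0.5, 0.1], [0.0, 0.8]])
B = np.array([[1.0], [0.5]])
C = np.array([[1.0, 1.0]])

# Model-based steady-state transfer function at z = 1: G(1) = C (I - A)^{-1} B.
G1_model = (C @ np.linalg.solve(np.eye(2) - A, B)).item()

# Data-based estimate: hold the input constant, let the state settle,
# and divide the settled output by the input.
x = np.zeros((2, 1))
u = 2.0
for _ in range(200):
    x = A @ x + B * u
G1_data = (C @ x).item() / u
```

For a stable system both quantities coincide: the settled output per unit of constant input is exactly the DC gain G(1), the object the gradient-based controller needs.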
Stochastic model predictive control (SMPC) is a promising solution to complex control problems under uncertain disturbances. However, traditional SMPC approaches either require exact knowledge of the probability distributions, or rely on massive numbers of scenarios generated to represent the uncertainties. In this paper, a novel scenario-based SMPC approach is proposed that actively learns a data-driven uncertainty set from available data with machine learning techniques. A systematic procedure is then proposed to further calibrate the uncertainty set, which yields an appropriate probabilistic guarantee. The resulting data-driven uncertainty set is more compact than traditional norm-based sets, and can help reduce the conservatism of control actions. Meanwhile, the proposed method requires fewer data samples than traditional scenario-based SMPC approaches, thereby enhancing the practicability of SMPC. Finally, the optimal control problem is cast as a single-stage robust optimization problem, which can be solved efficiently by deriving the robust counterpart problem. Feasibility and stability issues are also discussed in detail. The efficacy of the proposed approach is demonstrated on a two-mass-spring system and a building energy control problem under uncertain disturbances.
In this paper, we propose an optimization-based sparse learning approach to identify the set of most influential reactions in a chemical reaction network. This reduced set of reactions is then employed to construct a reduced chemical reaction mechanism, which is relevant to chemical interaction network modeling. The problem of identifying influential reactions is first formulated as a mixed-integer quadratic program, and then a relaxation method is leveraged to reduce the computational complexity of our approach. Qualitative and quantitative validation of the sparse encoding approach demonstrates that the model captures important network structural properties with moderate computational load.
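A hedged sketch of the relaxation step mentioned above: replacing the integer sparsity constraint of the mixed-integer quadratic program with an ℓ1 penalty yields a convex (lasso-type) surrogate, solvable by proximal gradient descent. The data below are synthetic, and ISTA is one standard solver for such surrogates, not necessarily the paper's method:

```python
import numpy as np

def ista(Phi, y, lam, iters=3000):
    """l1-relaxed sparse selection: minimize 0.5*||Phi w - y||^2 + lam*||w||_1
    by proximal gradient (ISTA); nonzero w_j flag influential columns."""
    w = np.zeros(Phi.shape[1])
    t = 1.0 / np.linalg.norm(Phi, 2) ** 2  # step = 1 / Lipschitz constant
    for _ in range(iters):
        z = w - t * Phi.T @ (Phi @ w - y)                    # gradient step
        w = np.sign(z) * np.maximum(np.abs(z) - t * lam, 0)  # soft threshold
    return w

rng = np.random.default_rng(1)
Phi = rng.standard_normal((30, 10))  # hypothetical per-reaction features
w_true = np.zeros(10)
w_true[0], w_true[5] = 3.0, -2.0     # two truly influential reactions
y = Phi @ w_true
w_hat = ista(Phi, y, lam=0.05)
```

The recovered support of w_hat identifies the influential columns, which is the role the reduced reaction set plays in building the reduced mechanism.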