
Data-Driven Synthesis of Optimization-Based Controllers for Regulation of Unknown Linear Systems

Added by Gianluca Bianchin
Publication date: 2021
Language: English





This paper proposes a data-driven framework to solve time-varying optimization problems associated with unknown linear dynamical systems. Making online control decisions to regulate a dynamical system to the solution of an optimization problem is a central goal in many modern engineering applications. Yet, the available methods critically rely on a precise knowledge of the system dynamics, thus mandating a preliminary system identification phase before a controller can be designed. In this work, we leverage results from behavioral theory to show that the steady-state transfer function of a linear system can be computed from data samples without any knowledge or estimation of the system model. We then use this data-driven representation to design a controller, inspired by a gradient-descent optimization method, that regulates the system to the solution of a convex optimization problem, without requiring any knowledge of the time-varying disturbances affecting the model equation. Results are tailored to cost functions satisfying the Polyak-Łojasiewicz inequality.
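The pipeline the abstract describes (estimate the steady-state input-output gain from data, then close the loop with a gradient step on the measured output) can be sketched as follows. This is an illustrative toy, not the paper's algorithm: the system matrices, step size, and quadratic cost are invented for the demo, and the steady-state gain is fitted by least squares from a few steady-state experiments rather than via the behavioral-theory construction the paper uses.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical unknown discrete-time linear system x+ = A x + B u, y = C x.
# The controller below never uses (A, B, C) directly.
A = np.array([[0.5, 0.1], [0.0, 0.4]])
B = np.array([[1.0], [0.5]])
C = np.array([[1.0, 1.0]])

def simulate_to_steady_state(u, steps=200):
    """Run the plant with constant input u until it settles; return the output."""
    x = np.zeros(2)
    for _ in range(steps):
        x = A @ x + B @ np.atleast_1d(u)
    return C @ x

# Step 1: estimate the steady-state map y_ss = G u from input-output samples,
# with no identification of the state-space matrices themselves.
U = rng.standard_normal((1, 5))  # 5 constant-input experiments
Y = np.hstack([simulate_to_steady_state(U[:, i]).reshape(-1, 1) for i in range(5)])
G_hat, *_ = np.linalg.lstsq(U.T, Y.T, rcond=None)
G_hat = G_hat.T  # estimated steady-state gain

# Step 2: gradient-inspired feedback controller driving y to the minimizer of
# phi(y) = 0.5 * ||y - y_ref||^2, using only output measurements.
y_ref = np.array([2.0])
u = np.zeros(1)
for _ in range(100):
    y = simulate_to_steady_state(u)                  # measured output
    u = u - 0.2 * (G_hat.T @ (y - y_ref)).ravel()    # u+ = u - eta * G^T grad phi(y)

y_final = simulate_to_steady_state(u)
```

For this plant the true steady-state gain is G = C(I - A)^{-1}B = 3, so the loop settles with y close to y_ref; the step size must satisfy the usual contraction condition (here eta * G^2 < 2).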



Related research

This paper proposes a data-driven control framework to regulate an unknown, stochastic linear dynamical system to the solution of a (stochastic) convex optimization problem. Despite the centrality of this problem, most of the available methods critically rely on a precise knowledge of the system dynamics (thus requiring off-line system identification and model refinement). To this aim, in this paper we first show that the steady-state transfer function of a linear system can be computed directly from control experiments, bypassing explicit model identification. Then, we leverage the estimated transfer function to design a controller -- which is inspired by stochastic gradient descent methods -- that regulates the system to the solution of the prescribed optimization problem. A distinguishing feature of our methods is that they do not require any knowledge of the system dynamics, disturbance terms, or their distributions. Our technical analysis combines concepts and tools from behavioral system theory, stochastic optimization with decision-dependent distributions, and stability analysis. We illustrate the applicability of the framework on a case study for mobility-on-demand ride service scheduling in Manhattan, NY.
This paper considers the cooperative output regulation problem for linear multi-agent systems with a directed communication graph, heterogeneous linear subsystems, and an exosystem whose output is available to only a subset of subsystems. Both the cases with nominal and uncertain linear subsystems are studied. For the case with nominal linear subsystems, a distributed adaptive observer-based controller is designed, where the distributed adaptive observer is implemented for the subsystems to estimate the exogenous signal. For the case with uncertain linear subsystems, the proposed distributed observer and the internal model principle are combined to solve the robust cooperative output regulation problem. Compared with the existing works, one main contribution of this paper is that the proposed control schemes can be designed and implemented by each subsystem in a fully distributed fashion for general directed graphs. For the special case with undirected graphs, a distributed output feedback control law is further presented.
This article treats three problems of sparse and optimal multiplexing of a finite ensemble of linear control systems. Given an ensemble of linear control systems, multiplexing of the controllers consists of an algorithm that selects, at each time t, only one system from the ensemble to be actively controlled while the other systems evolve in open loop. The first problem treated here is a ballistic reachability problem where the control signals are required to be maximally sparse and multiplexed, the second concerns sparse and optimally multiplexed linear quadratic control, and the third is a sparse and optimally multiplexed Mayer problem. Numerical experiments are provided to demonstrate the efficacy of the techniques developed here.
We study safe, data-driven control of (Markov) jump linear systems with unknown transition probabilities, where both the discrete mode and the continuous state are to be inferred from output measurements. To this end, we develop a receding horizon estimator which uniquely identifies a sub-sequence of past mode transitions and the corresponding continuous state, allowing for arbitrary switching behavior. Unlike traditional approaches to mode estimation, we do not require an offline exhaustive search over mode sequences to determine the size of the observation window, but rather select it online. If the system is weakly mode observable, the window size will be upper bounded, leading to a finite-memory observer. We integrate the estimation procedure with a simple distributionally robust controller, which hedges against misestimations of the transition probabilities due to finite sample sizes. As additional mode transitions are observed, the used ambiguity sets are updated, resulting in continual improvements of the control performance. The practical applicability of the approach is illustrated on small numerical examples.
We consider optimization problems for (networked) systems, where we minimize a cost that includes a known time-varying function associated with the systems' outputs and an unknown function of the inputs. We focus on a data-based online projected gradient algorithm where: i) the input-output map of the system is replaced by measurements of the output whenever available (thus leading to a closed-loop setup); and ii) the unknown function is learned based on functional evaluations that may occur infrequently. Accordingly, the feedback-based online algorithm operates in a regime with inexact gradient knowledge and with random updates. We show that the online algorithm generates points that are within a bounded error from the optimal solution of the problem; in particular, we provide error bounds in expectation and in high probability, where the latter is given when the gradient error follows a sub-Weibull distribution and when missing measurements are modeled as Bernoulli random variables. We also provide results in terms of input-to-state stability in expectation and in probability. Numerical results are presented in the context of a demand response task in power systems.
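A minimal sketch of the measurement-driven projected-gradient idea in that last abstract: the model-based input-output map is replaced by output measurements that arrive only with some probability (a Bernoulli model for missing measurements), and a stale measurement is reused otherwise. All numbers, the box constraint, and the quadratic cost are invented for illustration; the steady-state sensitivity H is assumed known to the controller here, while the constant disturbance d is not.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical steady-state map y = H u + d; the controller knows the
# sensitivity H but not the disturbance d, so it relies on measurements.
H = np.array([[2.0, 0.0], [0.0, 1.0]])
d = np.array([0.5, -0.5])

def measure(u):
    """Output measurement that closes the loop (replaces the model-based map)."""
    return H @ u + d

def project(u, lo=-5.0, hi=5.0):
    """Projection onto box input constraints."""
    return np.clip(u, lo, hi)

y_ref = np.array([3.0, -1.0])   # cost phi(y) = 0.5 * ||y - y_ref||^2
u = np.zeros(2)
y = measure(u)
for _ in range(300):
    if rng.random() < 0.8:       # a fresh measurement arrives w.p. 0.8 ...
        y = measure(u)
    # ... otherwise the stale y is reused, giving an inexact gradient
    u = project(u - 0.05 * (H.T @ (y - y_ref)))
```

With a small enough step size the iterates still settle near the optimizer despite the randomly missing measurements, which is the flavor of the expectation and high-probability error bounds the abstract describes.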
