
Perturbation-based Regret Analysis of Predictive Control in Linear Time Varying Systems

Added by Yiheng Lin
Publication date: 2021
Language: English





We study predictive control in a setting where the dynamics are time-varying and linear, and the costs are time-varying and well-conditioned. At each time step, the controller receives the exact predictions of costs, dynamics, and disturbances for the future $k$ time steps. We show that when the prediction window $k$ is sufficiently large, predictive control is input-to-state stable and achieves a dynamic regret of $O(\lambda^k T)$, where $\lambda < 1$ is a positive constant. This is the first dynamic regret bound on the predictive control of linear time-varying systems. Under more assumptions on the terminal costs, we also show that predictive control obtains the first competitive bound for the control of linear time-varying systems: $1 + O(\lambda^k)$. Our results are derived using a novel proof framework based on a perturbation bound that characterizes how a small change to the system parameters impacts the optimal trajectory.
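The controller studied here re-solves a $k$-step optimal control problem at every time step using the exact predictions and applies only the first planned input. The sketch below illustrates one such receding-horizon step for quadratic stage costs, via a backward Riccati-style recursion with an affine term for the predicted disturbances; the function name and the specific cost matrices `Q_seq`, `R_seq`, `P_terminal` are illustrative assumptions, not the paper's notation.

```python
import numpy as np

def predictive_control_step(x, A_seq, B_seq, w_seq, Q_seq, R_seq, P_terminal):
    """One receding-horizon step for the LTV system
    x_{t+1} = A_t x_t + B_t u_t + w_t with quadratic stage costs.
    Solves the k-step problem by a backward recursion and returns
    only the first input of the optimal plan."""
    k = len(A_seq)
    P = P_terminal                      # quadratic part of the cost-to-go
    p = np.zeros_like(x)                # affine part induced by disturbances
    gains = []
    for tau in reversed(range(k)):
        A, B, w = A_seq[tau], B_seq[tau], w_seq[tau]
        Q, R = Q_seq[tau], R_seq[tau]
        H = R + B.T @ P @ B
        K = np.linalg.solve(H, B.T @ P @ A)        # feedback gain
        d = np.linalg.solve(H, B.T @ (P @ w + p))  # feedforward correction
        gains.append((K, d))
        Acl = A - B @ K
        p = Acl.T @ (P @ w + p)
        P = Q + K.T @ R @ K + Acl.T @ P @ Acl      # Riccati update (Joseph form)
    K0, d0 = gains[-1]                  # gains computed for the current step
    return -K0 @ x - d0                 # apply only the first planned input
```

Iterating this step forward while shifting the prediction window produces the closed-loop trajectory whose cost the regret and competitive bounds compare against the offline optimal.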

Related research

We present a method to over-approximate reachable tubes over compact time-intervals, for linear continuous-time, time-varying control systems whose initial states and inputs are subject to compact convex uncertainty. The method uses numerical approximations of transition matrices, is convergent of first order, and assumes the ability to compute with compact convex sets in finite dimension. We also present a variant that applies to the case of zonotopic uncertainties, uses only linear algebraic operations, and yields zonotopic over-approximations. The performance of the latter variant is demonstrated on an example.
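For the zonotopic variant, reachable-set propagation reduces to linear maps and Minkowski sums of zonotopes, both of which act directly on the generator representation. The following is a minimal discrete-time sketch of that idea; the class and function names are made up for illustration, and the paper's construction additionally accounts for time-discretization error so that the continuous-time tube is over-approximated with guarantees.

```python
import numpy as np

class Zonotope:
    """Zonotope {c + G @ xi : ||xi||_inf <= 1}."""
    def __init__(self, center, generators):
        self.c = np.asarray(center, dtype=float)
        self.G = np.asarray(generators, dtype=float)

    def linear_map(self, A):
        # Image of the zonotope under x -> A x
        return Zonotope(A @ self.c, A @ self.G)

    def minkowski_sum(self, other):
        # Minkowski sum: add centers, concatenate generators
        return Zonotope(self.c + other.c, np.hstack([self.G, other.G]))

def propagate(X0, Phi_seq, U_seq):
    """Zonotopic over-approximation of the reachable tube of
    x_{k+1} = Phi_k x_k + u_k, with u_k in the zonotope U_k,
    using only linear algebraic operations."""
    tube = [X0]
    for Phi, U in zip(Phi_seq, U_seq):
        tube.append(tube[-1].linear_map(Phi).minkowski_sum(U))
    return tube
```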
Christoph Mark, Steven Liu (2021)
In this paper, we propose a chance constrained stochastic model predictive control scheme for reference tracking of distributed linear time-invariant systems with additive stochastic uncertainty. The chance constraints are reformulated analytically based on mean-variance information, where we design suitable Probabilistic Reachable Sets for constraint tightening. Furthermore, the chance constraints are proven to be satisfied in closed-loop operation. The design of an invariant set for tracking complements the controller and ensures convergence to arbitrary admissible reference points, while a conditional initialization scheme provides the fundamental property of recursive feasibility. The paper closes with a numerical example, highlighting the convergence to changing output references and empirical constraint satisfaction.
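The mean-variance reformulation of an individual half-space chance constraint amounts to a deterministic constraint on the mean with a variance-dependent back-off, which is what the tightened Probabilistic Reachable Sets encode. A small sketch, using a distribution-free Chebyshev factor as one possible (and conservative) choice of quantile:

```python
import numpy as np

def tighten_halfspace(a, b, Sigma, eps):
    """Deterministic surrogate for the chance constraint
    P(a^T x <= b) >= 1 - eps, using only mean/variance information:
    the mean must satisfy a^T E[x] <= b - kappa * sqrt(a^T Sigma a).
    A Gaussian quantile would give a less conservative kappa."""
    a = np.asarray(a, dtype=float)
    kappa = np.sqrt((1.0 - eps) / eps)         # Chebyshev back-off factor
    return b - kappa * np.sqrt(a @ Sigma @ a)  # tightened right-hand side
```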
Linear time-varying (LTV) systems are widely used for modeling real-world dynamical systems due to their generality and simplicity. Providing stability guarantees for LTV systems is one of the central problems in control theory. However, existing approaches that guarantee stability typically lead to significantly sub-optimal cumulative control cost in online settings where only current or short-term system information is available. In this work, we propose an efficient online control algorithm, COvariance Constrained Online Linear Quadratic (COCO-LQ) control, that guarantees input-to-state stability for a large class of LTV systems while also minimizing the control cost. The proposed method incorporates a state covariance constraint into the semi-definite programming (SDP) formulation of the LQ optimal controller. We empirically demonstrate the performance of COCO-LQ in both synthetic experiments and a power system frequency control example.
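Schematically, a covariance-constrained LQ design of this kind can be posed as an SDP over a joint state-input covariance, with an additional bound on the state block; a static gain is then recovered from that covariance. The cvxpy sketch below is an illustrative formulation under these assumptions, not the exact COCO-LQ program.

```python
import cvxpy as cp
import numpy as np

def covariance_constrained_lq(A, B, W, Q, R, cov_bound):
    """Schematic SDP: choose a joint state-input covariance S consistent
    with the noisy dynamics, minimize the expected quadratic cost, and cap
    the state covariance (illustrative; COCO-LQ's program may differ)."""
    n, m = B.shape
    S = cp.Variable((n + m, n + m), PSD=True)   # [[Sxx, Sxu], [Sux, Suu]]
    Sxx, Suu = S[:n, :n], S[n:, n:]
    AB = np.hstack([A, B])
    constraints = [
        Sxx >> AB @ S @ AB.T + W,               # Lyapunov-type consistency
        Sxx << cov_bound * np.eye(n),           # state-covariance cap
    ]
    cost = cp.trace(Q @ Sxx) + cp.trace(R @ Suu)
    cp.Problem(cp.Minimize(cost), constraints).solve()
    # Recover a static feedback u = K x from the optimal joint covariance
    K = np.linalg.solve(Sxx.value, S.value[:n, n:]).T
    return K
```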
This paper considers a time-varying optimization problem associated with a network of systems, with each of the systems shared by (and affecting) a number of individuals. The objective is to minimize cost functions associated with the individuals' preferences, which are unknown, subject to time-varying constraints that capture physical or operational limits of the network. To this end, the paper develops a distributed online optimization algorithm with concurrent learning of the cost functions. The cost functions are learned on-the-fly based on the users' feedback (provided at irregular intervals) by leveraging tools from shape-constrained Gaussian Processes. The online algorithm is based on a primal-dual method, and acts effectively in a closed-loop fashion where: i) users' feedback is utilized to estimate the cost, and ii) measurements from the network are utilized in the algorithmic steps to bypass the need for sensing of (unknown) exogenous inputs of the network. The performance of the algorithm is analyzed in terms of dynamic network regret and constraint violation. Numerical examples are presented in the context of real-time optimization of distributed energy resources.
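At its core, the online algorithm interleaves a projected gradient step on the primal variables (using the learned cost surrogate) with a dual ascent step on the constraint multipliers. A minimal sketch of one such primal-dual update, with `proj_x` and the gradient oracles as assumed inputs:

```python
import numpy as np

def primal_dual_step(x, lam, grad_f, g, grad_g, alpha, proj_x):
    """One online primal-dual update for min_x f_t(x) s.t. g_t(x) <= 0.
    grad_f may come from a learned surrogate of the user cost (e.g. a
    shape-constrained GP posterior mean), while g_t and grad_g can be
    evaluated from network measurements."""
    # primal: projected gradient step on the Lagrangian
    x_new = proj_x(x - alpha * (grad_f(x) + grad_g(x).T @ lam))
    # dual: projected ascent on the multipliers
    lam_new = np.maximum(0.0, lam + alpha * g(x_new))
    return x_new, lam_new
```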
We present a data-driven model predictive control scheme for chance-constrained Markovian switching systems with unknown switching probabilities. Using samples of the underlying Markov chain, ambiguity sets of transition probabilities are estimated which include the true conditional probability distributions with high probability. These sets are updated online and used to formulate a time-varying, risk-averse optimal control problem. We prove recursive feasibility of the resulting MPC scheme and show that the original chance constraints remain satisfied at every time step. Furthermore, we show that under sufficient decrease of the confidence levels, the resulting MPC scheme renders the closed-loop system mean-square stable with respect to the true-but-unknown distributions, while remaining less conservative than a fully robust approach.
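A common way to build such ambiguity sets is to center an L1 ball at the empirical transition frequencies of each mode and size its radius with a concentration bound so that the ball contains the true conditional distribution with high probability. The sketch below follows that recipe; the paper's exact construction and radius may differ.

```python
import numpy as np

def l1_ambiguity_sets(transitions, n_states, delta):
    """Empirical transition probabilities plus an L1-ball radius per mode,
    sized so each ball contains the true conditional distribution with
    probability at least 1 - delta (a standard Weissman-type bound)."""
    counts = np.zeros((n_states, n_states))
    for i, j in transitions:                  # observed (mode, next mode) pairs
        counts[i, j] += 1
    sets = {}
    for i in range(n_states):
        n_i = counts[i].sum()
        p_hat = counts[i] / n_i if n_i > 0 else np.full(n_states, 1.0 / n_states)
        # L1-deviation bound for empirical distributions
        radius = np.sqrt(2.0 / max(n_i, 1) * np.log((2 ** n_states - 2) / delta))
        sets[i] = (p_hat, radius)
    return sets
```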