
A data-driven robust optimization approach to scenario-based stochastic model predictive control

 Added by Chao Shang
 Publication date 2018
Language: English





Stochastic model predictive control (SMPC) is a promising solution to complex control problems under uncertain disturbances. However, traditional SMPC approaches either require exact knowledge of the probabilistic distributions, or rely on massive scenarios generated to represent uncertainties. In this paper, a novel scenario-based SMPC approach is proposed by actively learning a data-driven uncertainty set from available data with machine learning techniques. A systematic procedure is then proposed to further calibrate the uncertainty set, which provides an appropriate probabilistic guarantee. The resulting data-driven uncertainty set is more compact than traditional norm-based sets, and helps reduce the conservatism of control actions. Meanwhile, the proposed method requires fewer data samples than traditional scenario-based SMPC approaches, thereby enhancing the practicability of SMPC. Finally, the optimal control problem is cast as a single-stage robust optimization problem, which can be solved efficiently by deriving its robust counterpart. The feasibility and stability issues are also discussed in detail. The efficacy of the proposed approach is demonstrated on a two-mass-spring system and a building energy control problem under uncertain disturbances.
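The learn-then-calibrate idea in the abstract can be illustrated with a minimal sketch. Everything below is hypothetical: synthetic disturbance data, a simple Mahalanobis score standing in for the paper's machine-learning set, and arbitrary parameter values. The score is fit on a training split, and a held-out calibration split sets the threshold so the set covers roughly a 1 − α fraction of disturbances.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic 2-D disturbance samples (hypothetical data).
data = rng.multivariate_normal([0, 0], [[1.0, 0.4], [0.4, 0.5]], size=500)
train, calib = data[:300], data[300:]

# Fit a score function on the training split (a Mahalanobis distance here;
# the paper learns a kernel-based set with machine learning instead).
mu = train.mean(axis=0)
Sinv = np.linalg.inv(np.cov(train, rowvar=False))

def score(w):
    d = w - mu
    return np.einsum("...i,ij,...j->...", d, Sinv, d)

# Calibration step: pick the threshold as the (1 - alpha) empirical
# quantile of the calibration scores, giving an approximate guarantee
# P(score(w) <= theta) >= 1 - alpha on fresh disturbances.
alpha = 0.05
theta = np.quantile(score(calib), 1 - alpha)

# Membership test for the resulting uncertainty set {w : score(w) <= theta}.
inside = score(calib) <= theta
print(round(inside.mean(), 2))
```

The calibration split, rather than the training split, sets the threshold; reusing training data for calibration would bias the coverage estimate optimistically.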



Related research


Feiran Zhao, Keyou You (2020)
Optimal control of a stochastic dynamical system usually requires a good dynamical model with probability distributions, which is difficult to obtain due to limited measurements and/or complicated dynamics. To address this, this work proposes a data-driven distributionally robust control framework with the Wasserstein metric via a constrained two-player zero-sum Markov game, where the adversarial player selects the probability distribution from a Wasserstein ball centered at an empirical distribution. The game is then approached by its penalized version, for which an optimal stabilizing solution is derived explicitly in a linear structure under Riccati-type iterations. Moreover, we design a model-free Q-learning algorithm with global convergence to learn the optimal controller. Finally, we verify the effectiveness of the proposed learning algorithm and demonstrate its robustness to probability distribution errors via numerical examples.
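The Wasserstein ball used in such frameworks is easy to illustrate in one dimension, where the distance between two equally weighted empirical samples of the same size has a closed form (sort both samples and average the pointwise gaps). The sketch below uses synthetic data and a hypothetical radius, not anything from the paper.

```python
import numpy as np

def wasserstein1(x, y):
    """W1 distance between two equally weighted empirical samples of the
    same size (sorted-coupling formula, valid only in one dimension)."""
    return np.mean(np.abs(np.sort(x) - np.sort(y)))

rng = np.random.default_rng(1)
nominal = rng.normal(0.0, 1.0, size=1000)   # empirical center of the ball
shifted = nominal + 0.3                     # candidate adversarial sample

radius = 0.5  # hypothetical ball radius epsilon
d = wasserstein1(nominal, shifted)
print(d <= radius)  # prints True: the shifted distribution is in the ball
```

A constant shift moves every quantile by the same amount, so here `d` equals the shift size 0.3, which is why it lands inside the radius-0.5 ball.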
We present a data-driven model predictive control scheme for chance-constrained Markovian switching systems with unknown switching probabilities. Using samples of the underlying Markov chain, ambiguity sets of transition probabilities are estimated which include the true conditional probability distributions with high probability. These sets are updated online and used to formulate a time-varying, risk-averse optimal control problem. We prove recursive feasibility of the resulting MPC scheme and show that the original chance constraints remain satisfied at every time step. Furthermore, we show that under sufficient decrease of the confidence levels, the resulting MPC scheme renders the closed-loop system mean-square stable with respect to the true-but-unknown distributions, while remaining less conservative than a fully robust approach.
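The first step of such a scheme, estimating transition probabilities with an ambiguity radius from samples, can be sketched as follows. The trajectory is hypothetical, and the L1 radius uses the standard Weissman et al. concentration bound, which may differ from the construction used in the paper.

```python
import numpy as np

# Hypothetical observed Markov-chain trajectory over modes {0, 1, 2}.
chain = [0, 1, 1, 2, 0, 0, 1, 2, 2, 1, 0, 1, 2, 0, 1, 1, 0, 2, 1, 0]
k = 3

# Empirical transition counts and row-normalized probabilities.
counts = np.zeros((k, k))
for s, t in zip(chain[:-1], chain[1:]):
    counts[s, t] += 1
n = counts.sum(axis=1, keepdims=True)
p_hat = counts / n

# Per-row L1 ambiguity radius from the Weissman et al. bound: with
# probability >= 1 - delta, ||p_hat_i - p_i||_1 <= eps_i. For very few
# samples per row the radius is loose and should shrink as data accrue.
delta = 0.05
eps = np.sqrt(2.0 / n.squeeze() * np.log((2**k - 2) / delta))
print(np.round(p_hat, 2))
print(np.round(eps, 2))
```

Updating `counts` online as new transitions are observed shrinks `eps`, which mirrors the online ambiguity-set refinement described in the abstract.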
Wei-Han Chen, Fengqi You (2019)
An appropriate greenhouse temperature should be maintained to ensure crop production while minimizing energy consumption. Even though weather forecasts provide information that can improve control performance, they are not perfect, and forecast errors may cause the temperature to deviate from the acceptable range. To handle the inherent uncertainty in weather that affects control accuracy, this paper develops a data-driven robust model predictive control (MPC) approach for greenhouse temperature control. The dynamic model is obtained from thermal resistance-capacitance modeling derived with the Building Resistance-Capacitance Modeling (BRCM) toolbox. Uncertainty sets of ambient temperature and solar radiation are captured by the support vector clustering technique, and they are further tuned for better quality by a training-calibration procedure. A case study that implements the carefully chosen uncertainty sets in robust MPC shows that the data-driven robust MPC achieves better control performance than rule-based control, certainty-equivalent MPC, and conventional robust MPC.
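A minimal thermal resistance-capacitance model of the kind such toolboxes produce can be sketched as a single first-order state update. All parameter values below are hypothetical and chosen only for illustration, not taken from the paper.

```python
# Minimal discrete-time thermal RC model sketch (hypothetical parameters).
R = 2.0      # thermal resistance to ambient [K/kW]
C = 10.0     # thermal capacitance [kWh/K]
dt = 0.25    # sampling step [h]

def step(T, T_amb, q_solar, u_heat):
    """One forward-Euler step of C * dT/dt = (T_amb - T)/R + q_solar + u_heat."""
    return T + dt / C * ((T_amb - T) / R + q_solar + u_heat)

T = 18.0  # indoor temperature [deg C]
for _ in range(8):  # simulate two hours with fixed (uncertain) inputs
    T = step(T, T_amb=5.0, q_solar=0.5, u_heat=3.0)
print(round(T, 2))
```

In the MPC setting, `T_amb` and `q_solar` would range over the learned uncertainty sets while `u_heat` is the decision variable chosen robustly against them.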
Christoph Mark, Steven Liu (2021)
In this paper, we propose a chance constrained stochastic model predictive control scheme for reference tracking of distributed linear time-invariant systems with additive stochastic uncertainty. The chance constraints are reformulated analytically based on mean-variance information, where we design suitable Probabilistic Reachable Sets for constraint tightening. Furthermore, the chance constraints are proven to be satisfied in closed-loop operation. The design of an invariant set for tracking complements the controller and ensures convergence to arbitrary admissible reference points, while a conditional initialization scheme provides the fundamental property of recursive feasibility. The paper closes with a numerical example, highlighting the convergence to changing output references and empirical constraint satisfaction.
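The mean-variance constraint tightening can be illustrated for a scalar half-space constraint under a Gaussian error assumption. This is a sketch of the general quantile-tightening idea, not the paper's exact probabilistic reachable sets; all numbers are hypothetical.

```python
from statistics import NormalDist

# Chance constraint P(x <= b) >= p for x = z + e with Gaussian error
# e ~ N(0, sigma^2): the nominal state z must satisfy the tightened
# deterministic constraint z <= b - sigma * Phi^{-1}(p).
p = 0.95      # required satisfaction probability
sigma = 0.4   # standard deviation of the error
b = 1.0       # original constraint bound

tightening = sigma * NormalDist().inv_cdf(p)
b_tight = b - tightening
print(round(b_tight, 3))  # → 0.342
```

Heavier-tailed or distribution-free assumptions would replace the Gaussian quantile with a more conservative one (e.g. a Chebyshev bound), enlarging the tightening.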
We study safe, data-driven control of (Markov) jump linear systems with unknown transition probabilities, where both the discrete mode and the continuous state are to be inferred from output measurements. To this end, we develop a receding horizon estimator which uniquely identifies a sub-sequence of past mode transitions and the corresponding continuous state, allowing for arbitrary switching behavior. Unlike traditional approaches to mode estimation, we do not require an offline exhaustive search over mode sequences to determine the size of the observation window, but rather select it online. If the system is weakly mode observable, the window size will be upper bounded, leading to a finite-memory observer. We integrate the estimation procedure with a simple distributionally robust controller, which hedges against misestimations of the transition probabilities due to finite sample sizes. As additional mode transitions are observed, the used ambiguity sets are updated, resulting in continual improvements of the control performance. The practical applicability of the approach is illustrated on small numerical examples.
