
KPC: Learning-Based Model Predictive Control with Deterministic Guarantees

Added by Emilio Maddalena
Publication date: 2020
Language: English





We propose Kernel Predictive Control (KPC), a learning-based predictive control strategy that enjoys deterministic safety guarantees. Noise-corrupted samples of the unknown system dynamics are used to learn several models through the formalism of non-parametric kernel regression. By treating each prediction step individually, we dispense with the need to propagate sets through highly non-linear maps, a procedure that often involves multiple conservative approximation steps. Finite-sample error bounds are then used to enforce state feasibility through an efficient robust formulation. We also present a relaxation strategy that exploits online data to weaken the optimization problem constraints while preserving safety. Two numerical examples illustrate the applicability of the proposed control method.
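
The essence of the approach, fitting non-parametric kernel models to noisy samples and tightening constraints by a per-step error bound, can be sketched in a few lines of NumPy. In the snippet below, the kernel ridge regressor and the robust tightening follow the abstract, but the posterior-variance-style error bound is a stand-in assumption, not the deterministic finite-sample bound derived in the paper, and the dynamics and data are synthetic.

```python
# Minimal sketch: learn a one-step model of unknown dynamics by kernel
# (ridge) regression from noisy samples, then tighten a state constraint
# by an error bound before accepting a candidate (state, input) pair.
import numpy as np

def rbf_kernel(A, B, lengthscale=0.5):
    """Gaussian (RBF) kernel matrix between the rows of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale**2)

# --- noise-corrupted samples of an "unknown" scalar map x+ = f(x, u) ------
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(60, 2))                     # (x, u) pairs
f = lambda Z: 0.9 * Z[:, 0] + 0.3 * np.sin(2 * Z[:, 1])  # ground truth
Y = f(X) + 0.01 * rng.standard_normal(60)                # noisy labels

# --- non-parametric model via kernel ridge regression ---------------------
lam = 1e-3
K = rbf_kernel(X, X)
alpha = np.linalg.solve(K + lam * np.eye(len(X)), Y)

def predict(z):
    return float(rbf_kernel(np.atleast_2d(z), X) @ alpha)

def error_bound(z, noise_bound=0.01):
    """Heuristic bound that grows where data are scarce (placeholder for
    the paper's deterministic finite-sample bound)."""
    k = rbf_kernel(np.atleast_2d(z), X)
    var = 1.0 - k @ np.linalg.solve(K + lam * np.eye(len(X)), k.T)
    return float(np.sqrt(max(var.item(), 0.0))) + noise_bound

# --- robust constraint tightening for a single prediction step ------------
x_max = 0.8
z = np.array([0.5, 0.2])                     # candidate (state, input)
feasible = predict(z) + error_bound(z) <= x_max   # enforce the worst case
print(predict(z), error_bound(z), feasible)
```

In the full KPC scheme this per-step check becomes a constraint of the optimal control problem at every prediction step, which is what removes the need to propagate sets through the non-linear model.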



Related research

We propose a learning-based, distributionally robust model predictive control approach for the design of adaptive cruise control (ACC) systems. We model the preceding vehicle as an autonomous stochastic system, using a hybrid model with continuous dynamics and discrete, Markovian inputs. We estimate the (unknown) transition probabilities of this model empirically using observed mode transitions and simultaneously determine sets of probability vectors (ambiguity sets) around these estimates that contain the true transition probabilities with high confidence. We then solve a risk-averse optimal control problem that assumes the worst-case distributions in these sets. We furthermore derive a robust terminal constraint set and use it to establish recursive feasibility of the resulting MPC scheme. We validate the theoretical results and demonstrate desirable properties of the scheme through closed-loop simulations.
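
As a rough illustration of the estimation step described above, the sketch below builds an empirical transition matrix from an observed mode sequence, attaches an L1 ambiguity radius to each row (a Weissman-style concentration bound is assumed here, not necessarily the bound used by the authors), and evaluates a worst-case expected cost over the resulting ambiguity set. The mode sequence and stage costs are synthetic.

```python
# Sketch: empirical transition probabilities, L1 ambiguity sets, and a
# worst-case expectation over one ambiguity set (illustrative only).
import numpy as np

def estimate_transitions(modes, n_modes, delta=0.05):
    """Empirical transition matrix plus per-row L1 ambiguity radii."""
    counts = np.zeros((n_modes, n_modes))
    for a, b in zip(modes[:-1], modes[1:]):
        counts[a, b] += 1
    n_i = counts.sum(axis=1)
    p_hat = counts / np.maximum(n_i, 1)[:, None]
    # L1 radius so each true row lies in the ball w.h.p. (Weissman-style)
    radius = np.sqrt(2.0 * np.log((2**n_modes - 2) / delta) / np.maximum(n_i, 1))
    return p_hat, radius

def worst_case_expectation(cost, p_hat, r):
    """max_p cost @ p over the simplex intersected with ||p - p_hat||_1 <= r."""
    p = p_hat.copy()
    budget = r / 2.0                    # at most r/2 of mass can be relocated
    worst = np.argmax(cost)             # pile mass onto the costliest mode
    for j in np.argsort(cost):          # take it from the cheapest modes first
        if j == worst:
            continue
        move = min(p[j], budget)
        p[j] -= move
        p[worst] += move
        budget -= move
        if budget <= 0:
            break
    return float(cost @ p)

rng = np.random.default_rng(0)
modes = rng.integers(0, 3, size=500)        # synthetic observed mode sequence
P_hat, R = estimate_transitions(modes, 3)
stage_cost = np.array([1.0, 3.0, 10.0])     # cost of landing in each mode
print(worst_case_expectation(stage_cost, P_hat[0], R[0]))
```

The risk-averse optimal control problem in the paper nests this kind of worst-case expectation inside a multi-stage MPC formulation; only the single-row building block is shown here.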
In this paper we present a Learning Model Predictive Control (LMPC) strategy for linear and nonlinear time-optimal control problems. Our work builds on existing LMPC methodologies and guarantees finite-time convergence properties for the closed-loop system. We show how to construct a time-varying safe set and terminal cost function using closed-loop data. The resulting LMPC policy is time varying and guarantees recursive constraint satisfaction and non-decreasing performance. Computational efficiency is obtained by convexifying the safe set and terminal cost function. We demonstrate that, for a class of nonlinear systems and convex constraints, the convex LMPC formulation guarantees recursive constraint satisfaction and non-decreasing performance. Finally, we illustrate the effectiveness of the proposed strategies on minimum-time obstacle avoidance and racing examples.
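
The convexified ingredients mentioned above can be illustrated compactly: the safe set is the convex hull of previously visited states, and the terminal cost at a query state is the cheapest convex combination of stored costs-to-go whose states reproduce it. The sketch below uses synthetic stored data and a small linear program; in the actual LMPC scheme these quantities are built from closed-loop iterations and updated over time.

```python
# Sketch of a convexified LMPC terminal cost: Q(x) is a barycentric
# interpolation of stored costs-to-go over the convex hull of stored states.
import numpy as np
from scipy.optimize import linprog

# stored closed-loop states (columns) and their realized costs-to-go (synthetic)
X_stored = np.array([[0.0, 0.5, 1.0, 1.5],
                     [0.0, 0.4, 0.7, 0.9]])       # shape (n_x, n_samples)
J_stored = np.array([0.0, 1.2, 2.5, 4.0])

def convex_terminal_cost(x):
    """Q(x) = min_l J_stored @ l  s.t.  X_stored @ l = x, 1'l = 1, l >= 0."""
    n = len(J_stored)
    A_eq = np.vstack([X_stored, np.ones((1, n))])
    b_eq = np.concatenate([x, [1.0]])
    res = linprog(J_stored, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * n)
    return res.fun if res.success else np.inf     # inf = outside the safe set

print(convex_terminal_cost(np.array([0.75, 0.55])))   # inside the hull
print(convex_terminal_cost(np.array([3.00, 3.00])))   # outside -> infeasible
```

Infeasibility of the small LP plays the role of the terminal safe-set constraint: a terminal state outside the hull of stored data is rejected by the MPC problem.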
Accounting for more than 40% of global energy consumption, residential and commercial buildings will be key players in any future green energy system. To fully exploit their potential while ensuring occupant comfort, a robust control scheme is required to handle various uncertainties, such as external weather and occupant behaviour. However, prominent patterns, especially periodicity, are widely seen in most sources of uncertainty. This paper incorporates this correlated structure into the learning model predictive control framework in order to learn a globally optimal robust control scheme for building operations.
We introduce a general framework for robust data-enabled predictive control (DeePC) for linear time-invariant (LTI) systems. The proposed framework enables us to obtain model-free optimal control for LTI systems based on noisy input/output data. More specifically, robust DeePC solves a min-max optimization problem to compute the optimal control sequence that is resilient to all possible realizations of the uncertainties in the input/output data within a prescribed uncertainty set. We present computationally tractable reformulations of the min-max problem with various uncertainty sets. Furthermore, we show that even though an accurate prediction of the future behavior is unattainable in practice due to the inaccessibility of perfect input/output data, the obtained robust optimal control sequence provides performance guarantees for the actually realized input/output cost. We further show that robust DeePC generalizes and robustifies the regularized DeePC (with quadratic or 1-norm regularization) proposed in the literature. Finally, we demonstrate the performance of the proposed robust DeePC algorithm on high-fidelity, nonlinear, and noisy simulations of a grid-connected power converter system.
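
To make the data-driven machinery concrete, the sketch below builds block Hankel matrices from recorded input/output data of a small LTI system and predicts future outputs with a quadratically regularized least-squares fit, in the spirit of the regularized DeePC that the robust formulation generalizes. The min-max reformulations themselves are not reproduced; the system matrices, data lengths, and regularization weight are illustrative assumptions.

```python
# Sketch: Hankel-matrix trajectory representation and a regularized
# least-squares output prediction (nominal DeePC-style machinery only).
import numpy as np

def block_hankel(w, L):
    """Hankel matrix with L block rows from a signal w of shape (T, dim)."""
    T, _ = w.shape
    cols = T - L + 1
    return np.vstack([w[i:i + cols].T for i in range(L)])

# collect input/output data from a small LTI system (illustrative)
rng = np.random.default_rng(1)
A = np.array([[0.9, 0.2], [0.0, 0.8]])
B = np.array([[0.0], [0.5]])
C = np.array([[1.0, 0.0]])
T, Tini, N = 80, 4, 6
u = rng.standard_normal((T, 1))
y = np.zeros((T, 1))
x = np.zeros(2)
for t in range(T):
    y[t] = C @ x
    x = A @ x + B @ u[t]

Hu = block_hankel(u, Tini + N)
Hy = block_hankel(y, Tini + N)
Up, Uf = Hu[:Tini], Hu[Tini:]
Yp, Yf = Hy[:Tini], Hy[Tini:]

# given the most recent Tini-step trajectory and a candidate future input,
# predict future outputs via a quadratically regularized fit of g
uini, yini = u[-Tini:].ravel(), y[-Tini:].ravel()
uf = np.zeros(N)
lhs = np.vstack([Up, Yp, Uf])
rhs = np.concatenate([uini, yini, uf])
lam = 1e-6                                   # quadratic regularization on g
g = np.linalg.solve(lhs.T @ lhs + lam * np.eye(lhs.shape[1]), lhs.T @ rhs)
print(Yf @ g)                                # predicted future outputs
```

Robust DeePC replaces this nominal fit with a min-max problem over perturbations of the Hankel data, which is what yields the performance guarantees for the realized cost.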
This paper proposes an off-line algorithm, called Recurrent Model Predictive Control (RMPC), to solve general nonlinear finite-horizon optimal control problems. Unlike traditional Model Predictive Control (MPC) algorithms, it can make full use of the current computing resources and adaptively select the longest model prediction horizon. Our algorithm employs a recurrent function to approximate the optimal policy, which maps the system states and reference values directly to the control inputs. The number of prediction steps is equal to the number of recurrent cycles of the learned policy function. With an arbitrary initial policy function, the proposed RMPC algorithm can converge to the optimal policy by directly minimizing the designed loss function. We further prove the convergence and optimality of the RMPC algorithm through the Bellman optimality principle, and demonstrate its generality and efficiency using two numerical examples.
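
A minimal sketch of the anytime structure described above: a single recurrent cell refines the control at every cycle, so the number of cycles can be chosen to match the available computing budget. The weights below are random placeholders; in the actual algorithm they are obtained off-line by minimizing the designed loss until the policy converges.

```python
# Sketch: a recurrent policy whose prediction horizon equals the number of
# executed recurrent cycles (weights are untrained placeholders).
import numpy as np

rng = np.random.default_rng(2)
n_x, n_u, n_h = 2, 1, 16
W_in = 0.1 * rng.standard_normal((n_h, n_x + n_x + n_h))   # [state, ref, hidden]
W_out = 0.1 * rng.standard_normal((n_u, n_h))

def recurrent_policy(x, x_ref, max_cycles):
    """Run as many recurrent cycles as the budget allows; each refines u."""
    h = np.zeros(n_h)
    u = np.zeros(n_u)
    for _ in range(max_cycles):                 # cycles ~ prediction steps
        h = np.tanh(W_in @ np.concatenate([x, x_ref, h]))
        u = W_out @ h                           # control available after each cycle
    return u

x, x_ref = np.array([0.4, -0.2]), np.zeros(2)
print(recurrent_policy(x, x_ref, max_cycles=3))    # tight compute budget
print(recurrent_policy(x, x_ref, max_cycles=10))   # larger budget, longer horizon
```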
