In this paper we present a multi-rate control architecture for safety-critical systems. We consider a high-level planner and a low-level controller which operate at different frequencies. This multi-rate behavior is described by a piecewise nonlinear model which evolves on a continuous and a discrete level. First, we present sufficient conditions which guarantee recursive constraint satisfaction for the closed-loop system. Afterwards, we propose a control design methodology which leverages Control Barrier Functions (CBFs) for low-level control and Model Predictive Control (MPC) policies for high-level planning. The control barrier function is designed using the full nonlinear dynamical model, and the MPC is based on a simplified planning model. When the nonlinear system is control affine and the high-level planning model is linear, the control actions are computed by solving convex optimization problems at each level of the hierarchy. Finally, we show the effectiveness of the proposed strategy on a simulation example, where the low-level control action is updated at a higher frequency than the high-level command.
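As a rough illustration of the kind of low-level CBF controller described above, the sketch below solves a CBF quadratic program for a control-affine model x_dot = f(x) + g(x) u; the function names (f, g, h, grad_h), the linear class-K term alpha * h, and the input bound are illustrative assumptions, not details taken from the paper.

import cvxpy as cp

def cbf_qp_control(x, u_ref, f, g, h, grad_h, alpha=1.0, u_max=2.0):
    """Minimally modify the planner's command u_ref subject to a CBF condition."""
    u = cp.Variable(len(u_ref))
    Lf_h = grad_h(x) @ f(x)                             # drift part of dh/dt
    Lg_h = grad_h(x) @ g(x)                             # input-dependent part of dh/dt
    constraints = [Lf_h + Lg_h @ u >= -alpha * h(x),    # keep h nonnegative
                   cp.norm(u, 'inf') <= u_max]          # input constraint
    cp.Problem(cp.Minimize(cp.sum_squares(u - u_ref)), constraints).solve()
    return u.value

Because the objective is quadratic and the CBF condition is affine in u for a control-affine system, the low-level problem stays convex, consistent with the convexity claim in the abstract.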
In this paper, we introduce the notion of periodic safety, which requires that the system trajectories periodically visit a subset of a forward-invariant safe set, and utilize it in a multi-rate framework where a high-level planner generates a reference trajectory that is tracked by a low-level controller under input constraints. We introduce the notion of fixed-time barrier functions, which is leveraged by the proposed low-level controller in a quadratic programming framework. Then, we design a model predictive control policy for high-level planning with a bound on the rate of change of the reference trajectory to guarantee that periodic safety is achieved. We demonstrate the effectiveness of the proposed strategy on a simulation example, where the proposed fixed-time stabilizing low-level controller successfully satisfies the control objectives, whereas an exponentially stabilizing low-level controller fails.
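For context, a standard sufficient condition for fixed-time stability from the literature, which fixed-time barrier functions adapt; the paper's exact definition may differ. For a positive definite function V,

\dot{V}(x) \le -\alpha_1 V(x)^{\gamma_1} - \alpha_2 V(x)^{\gamma_2}, \qquad \alpha_1, \alpha_2 > 0, \ \gamma_1 > 1, \ 0 < \gamma_2 < 1,

guarantees convergence within a settling time bounded by

T \le \frac{1}{\alpha_1(\gamma_1 - 1)} + \frac{1}{\alpha_2(1 - \gamma_2)},

independently of the initial condition. The uniform settling-time bound is what makes such a condition attractive in a multi-rate setting, since convergence can be guaranteed within one high-level planning period.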
This paper proposes an offline algorithm, called Recurrent Model Predictive Control (RMPC), to solve general nonlinear finite-horizon optimal control problems. Unlike traditional Model Predictive Control (MPC) algorithms, it can make full use of the currently available computing resources and adaptively select the longest model prediction horizon. Our algorithm employs a recurrent function to approximate the optimal policy, which maps the system states and reference values directly to the control inputs. The number of prediction steps is equal to the number of recurrent cycles of the learned policy function. Starting from an arbitrary initial policy function, the proposed RMPC algorithm can converge to the optimal policy by directly minimizing the designed loss function. We further prove the convergence and optimality of the RMPC algorithm through the Bellman optimality principle, and demonstrate its generality and efficiency using two numerical examples.
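The recurrent-policy idea can be sketched as a single learned cell applied repeatedly, with the number of cycles standing in for the prediction horizon; the cell structure, feature choice, and weight shapes below are illustrative assumptions rather than the paper's architecture.

import numpy as np

def recurrent_policy(x, x_ref, W_cell, W_out, n_cycles):
    # One learned cell applied n_cycles times: the cycle count plays the role
    # of the prediction horizon, so more available compute time permits more
    # cycles. W_cell: (n_hidden, n_hidden + n_features), W_out: (n_u, n_hidden).
    h = np.zeros(W_out.shape[1])                     # hidden state of the cell
    z = np.concatenate([x, x_ref])                   # state and reference features
    for _ in range(n_cycles):
        h = np.tanh(W_cell @ np.concatenate([h, z]))
    return W_out @ h                                 # control input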
Accounting for more than 40% of global energy consumption, residential and commercial buildings will be key players in any future green energy system. To fully exploit their potential while ensuring occupant comfort, a robust control scheme is required to handle various uncertainties, such as external weather and occupant behaviour. However, prominent patterns, especially periodicity, are widely seen in most sources of uncertainty. This paper incorporates this correlated structure into the learning model predictive control framework in order to learn a globally optimal robust control scheme for building operations.
Control barrier functions have shown great success in addressing control problems with safety guarantees. These methods usually find the next safe control input by solving an online quadratic programming problem. However, model uncertainty is a major challenge in synthesizing controllers and may lead to unsafe control actions with severe consequences. In this paper, we develop a learning framework to deal with system uncertainty. Our method focuses on learning the dynamics of the control barrier function, especially when the barrier function has a high relative degree with respect to the system. We show that, at each order, the time derivative of the control barrier function can be separated into the time derivative of the nominal control barrier function and a remainder. This implies that we can use a neural network to learn the remainder and thereby approximate the dynamics of the true control barrier function. We show by simulation that our method can generate safe trajectories under parametric uncertainty using a differential drive robot model.
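A compact sketch of the decomposition described above: the nominal-model derivative of the barrier function is combined with a learned remainder, here represented by a small fixed-weight network; all names, sizes, and the training procedure are assumptions for illustration.

import numpy as np

class RemainderNet:
    # Tiny MLP standing in for the learned residual between the true and
    # nominal time derivatives of the control barrier function.
    def __init__(self, n_in, n_hidden=32, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(scale=0.1, size=(n_hidden, n_in))
        self.w2 = rng.normal(scale=0.1, size=n_hidden)

    def __call__(self, x, u):
        z = np.concatenate([x, u])
        return float(self.w2 @ np.tanh(self.W1 @ z))

def h_dot_estimate(x, u, nominal_h_dot, remainder):
    # decomposition: true dh/dt ≈ nominal-model dh/dt + learned remainder
    return nominal_h_dot(x, u) + remainder(x, u)

In a CBF quadratic program, an estimate of this form would replace the purely nominal derivative so that the safety constraint accounts for the learned model mismatch.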
In this paper we present a Learning Model Predictive Control (LMPC) strategy for linear and nonlinear time-optimal control problems. Our work builds on existing LMPC methodologies and guarantees finite-time convergence properties for the closed-loop system. We show how to construct a time-varying safe set and terminal cost function using closed-loop data. The resulting LMPC policy is time-varying and guarantees recursive constraint satisfaction and non-decreasing performance. Computational efficiency is obtained by convexifying the safe set and terminal cost function. We demonstrate that, for a class of nonlinear systems and convex constraints, the convex LMPC formulation guarantees recursive constraint satisfaction and non-decreasing performance. Finally, we illustrate the effectiveness of the proposed strategies on minimum-time obstacle avoidance and racing examples.
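One common way to convexify the LMPC terminal ingredients is to restrict the terminal state to the convex hull of previously visited states and use the matching barycentric combination of their recorded costs-to-go; the sketch below follows that idea, with variable names and data layout as assumptions.

import cvxpy as cp

def convex_terminal_ingredients(x_N, stored_states, stored_costs):
    # stored_states: (n, K) array of states visited in earlier closed-loop runs;
    # stored_costs: (K,) recorded costs-to-go from those states;
    # x_N: the terminal-state decision variable of the MPC problem.
    K = stored_states.shape[1]
    lam = cp.Variable(K, nonneg=True)                  # convex multipliers
    constraints = [cp.sum(lam) == 1,
                   stored_states @ lam == x_N]         # x_N in the data's convex hull
    terminal_cost = stored_costs @ lam                 # barycentric cost-to-go
    return constraints, terminal_cost

Appending these constraints and adding the terminal cost to a linear (or linearized) MPC problem keeps the overall optimization convex, which is the source of the computational efficiency mentioned above.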