We consider the problem of stabilizing a linear system under state and control constraints, subject to bounded disturbances and unknown parameters in the state matrix. First, using a simple least-squares solution and the available noisy measurements, the set of admissible parameter values is estimated. Second, for this estimated parameter set and the corresponding linear interval model of the system, two interval predictors are recalled and an unconstrained stabilizing control is designed that uses the predicted intervals. Third, to guarantee robust constraint satisfaction, a model predictive control algorithm is developed, based on the solution of an optimization problem posed for the interval predictor. Conditions for recursive feasibility and asymptotic performance are established. The efficiency of the proposed control framework is illustrated by numerical simulations.
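A minimal sketch of the interval-propagation idea behind such predictors is given below, assuming a discrete-time model $x^+ = A x + B u + d$ with elementwise bounds $A \in [A_{lo}, A_{hi}]$ and $d \in [d_{lo}, d_{hi}]$; the function names `interval_matvec` and `interval_predict` are illustrative and this is not one of the two predictors analyzed in the paper, only a generic interval-arithmetic enclosure.

```python
import numpy as np

def interval_matvec(A_lo, A_hi, x_lo, x_hi):
    """Enclosure of [A] @ [x]: each entry a_ij * x_j ranges over the min/max of
    the four endpoint products; summing these intervals row-wise gives bounds."""
    cands = np.stack([A_lo * x_lo, A_lo * x_hi, A_hi * x_lo, A_hi * x_hi])
    return cands.min(axis=0).sum(axis=1), cands.max(axis=0).sum(axis=1)

def interval_predict(x_lo, x_hi, A_lo, A_hi, B, u_seq, d_lo, d_hi):
    """Propagate an interval enclosure of the state along a given input sequence,
    for x+ = A x + B u + d with A and d only known up to elementwise bounds."""
    lo, hi = np.asarray(x_lo, float), np.asarray(x_hi, float)
    traj = [(lo, hi)]
    for u in u_seq:
        p_lo, p_hi = interval_matvec(A_lo, A_hi, lo, hi)
        lo = p_lo + B @ u + d_lo
        hi = p_hi + B @ u + d_hi
        traj.append((lo, hi))
    return traj
```

In an MPC scheme of the kind described above, the optimization would then impose the state and control constraints on every predicted interval `(lo, hi)` rather than on a single nominal trajectory.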
A robust Learning Model Predictive Controller (LMPC) for uncertain systems performing iterative tasks is presented. At each iteration of the control task, the closed-loop states, inputs, and cost are stored and used in the controller design. This paper first illustrates how to construct robust invariant sets and safe control policies by exploiting historical data. Then, we propose an iterative LMPC design procedure, where data generated by a robust controller at iteration $j$ are used to design a robust LMPC at the next iteration $j+1$. We show that this procedure allows us to iteratively enlarge the domain of the control policy and guarantees recursive constraint satisfaction, input-to-state stability, and performance bounds for the certainty-equivalent closed-loop system. The use of an adaptive prediction horizon is the key element of the proposed design. The effectiveness of the proposed control scheme is illustrated on a linear system subject to bounded additive disturbance.
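The data-storage idea can be sketched as follows: stored closed-loop states form a sampled safe set, and the recorded cost-to-go values define a data-driven terminal cost. The classes `IterationData` and `SampledSafeSet` below are illustrative names, and the sketch omits the robust invariant sets and adaptive horizon that the actual design relies on.

```python
import numpy as np

class IterationData:
    """Closed-loop states, inputs, and realized stage costs of one task iteration."""
    def __init__(self, states, inputs, stage_costs):
        self.states = states     # x_0, ..., x_T
        self.inputs = inputs     # u_0, ..., u_{T-1}
        # cost-to-go from each stored state (tail sums of stage costs, 0 at x_T)
        self.cost_to_go = np.append(np.cumsum(np.asarray(stage_costs)[::-1])[::-1], 0.0)

class SampledSafeSet:
    """Data from all previous iterations: stored states act as a sampled safe set,
    and their recorded cost-to-go values define a terminal cost for the MPC."""
    def __init__(self):
        self.iterations = []

    def add_iteration(self, it):
        self.iterations.append(it)

    def terminal_cost(self, x, tol=1e-6):
        """Best recorded cost-to-go among stored states matching x (inf if x unseen)."""
        best = np.inf
        for it in self.iterations:
            for xs, q in zip(it.states, it.cost_to_go):
                if np.linalg.norm(np.asarray(xs) - np.asarray(x)) < tol:
                    best = min(best, q)
        return best
```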
Accounting for more than 40% of global energy consumption, residential and commercial buildings will be key players in any future green energy system. To fully exploit their potential while ensuring occupant comfort, a robust control scheme is required to handle various uncertainties, such as external weather and occupant behaviour. However, prominent patterns, especially periodicity, are widely seen in most sources of uncertainty. This paper incorporates this correlated structure into the learning model predictive control framework in order to learn a globally optimal robust control scheme for building operations.
This paper studies the robust satisfiability check and online control synthesis problems for uncertain discrete-time systems subject to signal temporal logic (STL) specifications. In contrast to existing techniques, this work proposes an approach based on STL, reachability analysis, and temporal logic trees. Firstly, a real-time version of STL semantics and a tube-based temporal logic tree are proposed. We show that such a tree can be constructed from every STL formula. Secondly, using the tube-based temporal logic tree, a sufficient condition is obtained for the robust satisfiability check of the uncertain system. When the underlying system is deterministic, a necessary and sufficient condition for satisfiability is obtained. Thirdly, an online control synthesis algorithm is designed. It is shown that when the STL formula is robustly satisfiable and the initial state of the system belongs to the root node of the tube-based temporal logic tree, the trajectory generated by the controller is guaranteed to satisfy the STL formula. The effectiveness of the proposed approach is verified by an automated car overtaking example.
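At the level of the abstract, the satisfiability check reduces to a membership test on the root node once the tree has been built. The sketch below only illustrates that final test; the data type `TLTNode` is an assumed illustrative structure, and constructing the set attached to each node (via reachability analysis) is the substantive part of the method and is not shown.

```python
from dataclasses import dataclass, field
from typing import Callable, List, Optional

@dataclass
class TLTNode:
    """One node of a (tube-based) temporal logic tree: a set node carries a
    membership test, an operator node carries a Boolean/temporal operator."""
    kind: str                                          # "set", "and", "or", "until", ...
    contains: Optional[Callable[[object], bool]] = None  # only for set nodes
    children: List["TLTNode"] = field(default_factory=list)

def sufficient_satisfiability_check(root: TLTNode, x0) -> bool:
    """Sufficient condition from the abstract: the specification is robustly
    satisfiable from x0 if x0 belongs to the set attached to the root node."""
    return root.kind == "set" and root.contains is not None and root.contains(x0)
```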
In this paper, we present an iterative Model Predictive Control (MPC) design for piecewise nonlinear systems. We consider finite-time control tasks where the goal of the controller is to steer the system from a starting configuration to a goal state while minimizing a cost function. First, we present an algorithm that leverages a feasible trajectory completing the task to construct a control policy which guarantees that state and input constraints are recursively satisfied and that the closed-loop system reaches the goal state in finite time. Utilizing this construction, we present a policy iteration scheme that iteratively generates safe trajectories with non-decreasing performance. Finally, we test the proposed strategy on a discretized Spring Loaded Inverted Pendulum (SLIP) model with massless legs. We show that our methodology is robust to changes in initial conditions and disturbances acting on the system. Furthermore, we demonstrate the effectiveness of our policy iteration algorithm in a minimum-time control task.
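The outer policy-iteration loop described above can be sketched as follows, assuming a user-supplied `run_closed_loop` routine that synthesizes and executes an MPC policy from the stored trajectories; the `Trajectory` dataclass and all names are illustrative, and the safe-policy construction itself (the core of the paper) is abstracted away.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Trajectory:
    states: list
    inputs: list
    cost: float          # realized closed-loop cost of one iteration

def policy_iteration(run_closed_loop: Callable[[List[Trajectory]], Trajectory],
                     feasible_trajectory: Trajectory,
                     iterations: int = 10):
    """Start from one feasible trajectory that completes the task, build a policy
    from all stored trajectories, run it in closed loop, and store the result."""
    stored = [feasible_trajectory]
    costs = [feasible_trajectory.cost]
    for _ in range(iterations):
        new_traj = run_closed_loop(stored)   # MPC policy built from the stored data
        stored.append(new_traj)
        costs.append(new_traj.cost)          # expected to be non-increasing over iterations
    return stored, costs
```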
The need for robust control laws is especially important in safety-critical applications. We propose robust hybrid control barrier functions as a means to synthesize control laws that ensure robust safety. Based on this notion, we formulate an optimization problem for learning robust hybrid control barrier functions from data. We identify sufficient conditions on the data such that feasibility of the optimization problem ensures correctness of the learned robust hybrid control barrier functions. Our techniques allow us to safely expand the region of attraction of a compass gait walker that is subject to model uncertainty.
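To make the data-driven flavor of such approaches concrete, the sketch below fits a candidate barrier function that is linear in its parameters, $h(x) = \theta^\top \phi(x)$, by solving a feasibility LP over sampled safe states, unsafe states, and state transitions. This is a simplified, non-robust, non-hybrid variant intended only to show how data points enter as constraints; the function `learn_linear_cbf`, the feature map `phi`, and the margin/decay parameters are assumptions, not the paper's formulation.

```python
import numpy as np
from scipy.optimize import linprog

def learn_linear_cbf(phi, safe_pts, unsafe_pts, transitions, alpha=0.1, margin=0.1):
    """Find theta such that h(x) = theta^T phi(x) satisfies, at the data points:
    h >= margin on safe states, h <= -margin on unsafe states, and the
    discrete-time decrease condition h(x+) >= (1 - alpha) h(x) on transitions."""
    rows, rhs = [], []
    for x in safe_pts:                         # -h(x) <= -margin
        rows.append(-phi(x)); rhs.append(-margin)
    for x in unsafe_pts:                       #  h(x) <= -margin
        rows.append(phi(x)); rhs.append(-margin)
    for x, x_next in transitions:              # -(h(x+) - (1 - alpha) h(x)) <= 0
        rows.append(-(phi(x_next) - (1.0 - alpha) * phi(x))); rhs.append(0.0)
    n = len(phi(safe_pts[0]))
    res = linprog(c=np.zeros(n), A_ub=np.array(rows), b_ub=np.array(rhs),
                  bounds=[(-10.0, 10.0)] * n, method="highs")
    return res.x if res.success else None      # None means no certificate found on this data
```

A returned `theta` only certifies the constraints at the sampled points; conditions on the data (such as the density requirements identified in the paper) are what allow such a pointwise-feasible solution to be extended to a correctness guarantee.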