We present an approach for accelerating nonlinear model predictive control (NMPC). When the current optimal input signal is saturated, the optimal inputs at subsequent time steps often are as well. We therefore propose to apply the stored open-loop optimal input signals whenever the first and several subsequent inputs are saturated, and to solve the next optimal control problem only when a non-saturated input is encountered or the end of the horizon is reached. In this way, a significant number of nonlinear programs (NLPs) can be skipped while the performance loss remains small. Moreover, the NMPC is reactivated in time to steer the system safely to its reference.
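A minimal sketch of this reuse logic is given below, assuming hypothetical helpers solve_ocp (returning the open-loop optimal input sequence), is_saturated, and plant_step; these names are illustrative and not taken from the paper.

    # Sketch of saturation-based NLP skipping for NMPC (hypothetical helper names).
    def run_nmpc(x0, n_steps, solve_ocp, is_saturated, plant_step):
        x = x0
        u_plan, k = [], 0                      # stored open-loop plan and index of next input
        for _ in range(n_steps):
            if k >= len(u_plan) or (k > 0 and not is_saturated(u_plan[k])):
                u_plan, k = solve_ocp(x), 0    # re-solve the NLP: plan exhausted or input not saturated
            x = plant_step(x, u_plan[k])       # apply the stored open-loop input to the plant
            k += 1
        return x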
We present an algorithm for robust model predictive control of nonlinear systems subject to uncertainty and safety constraints. Our framework considers a nonlinear dynamical system affected by disturbances from an unknown but bounded uncertainty set. By viewing the system as a fixed point of an operator acting over trajectories, we derive a convex condition on the control actions that guarantees safety against the entire uncertainty set, i.e., every realization of the state trajectory satisfies the safety constraints. Our algorithm solves a sequence of convex quadratically constrained optimization problems of size n*N, where n is the number of states and N is the prediction horizon of the model predictive control problem. Compared to existing methods, our approach solves only convex problems while guaranteeing that no realization from the uncertainty set violates the safety constraints. Moreover, we employ an implicit time discretization of the system dynamics, which allows longer prediction horizons and improves numerical accuracy. Numerical simulations of vehicle navigation demonstrate the effectiveness of our approach.
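For illustration, the kind of convex subproblem solved at each step can be sketched as below, using constraint tightening with a fixed margin as a generic stand-in for the paper's fixed-point-based robustness condition; the placeholder linear model, the margin delta, and all names are assumptions made here, not the authors' formulation.

    import cvxpy as cp
    import numpy as np

    # Generic convex MPC subproblem with tightened safety constraints (illustrative only).
    n, m, N = 4, 2, 20                                              # states, inputs, horizon
    A = np.eye(n)                                                   # placeholder linearized dynamics
    B = np.vstack([np.zeros((n - m, m)), np.eye(m)])
    x0 = np.array([1.0, 0.0, 0.5, 0.0])
    H, h = np.vstack([np.eye(n), -np.eye(n)]), np.ones(2 * n)       # safety polytope H x <= h
    delta = 0.1                                                     # tightening margin (assumed)

    X = cp.Variable((n, N + 1))
    U = cp.Variable((m, N))
    cost = cp.sum_squares(X) + cp.sum_squares(U)
    cons = [X[:, 0] == x0]
    for k in range(N):
        cons += [X[:, k + 1] == A @ X[:, k] + B @ U[:, k],          # nominal dynamics
                 H @ X[:, k + 1] <= h - delta]                      # tightened safety constraint
    cp.Problem(cp.Minimize(cost), cons).solve()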
In real-world problems, uncertainties (e.g., measurement errors, precision errors) often lead to poor performance of numerical algorithms when not explicitly taken into account. This is also the case for control problems, where optimal solutions can degrade in quality or even become infeasible. Thus, there is a need for methods that can handle uncertainty. In this work, we consider nonlinear multi-objective optimal control problems with uncertainty in the initial conditions, and in particular their incorporation into a feedback loop via model predictive control (MPC). In multi-objective optimal control, an optimal compromise between multiple conflicting criteria has to be found. For such problems, little has been reported on the treatment of uncertainties. To address this problem class, we design an offline/online framework to compute an approximation of efficient control strategies. This approach is closely related to explicit MPC for nonlinear systems, where the potentially expensive optimization problem is solved in an offline phase in order to enable fast solutions in the online phase. To reduce the numerical cost of the offline phase, we exploit symmetries in the control problems. Furthermore, to ensure optimality of the solutions, we include an additional online optimization step, which is considerably cheaper than the original multi-objective optimization problem. We test our framework on a car maneuvering problem where safety and speed are the objectives. The multi-objective framework allows for online adaptation of the desired objective; alternatively, an automatic scalarization procedure yields very efficient feedback controls. Our results show that the method is capable of designing driving strategies that deal better with uncertainties in the initial conditions, which translates into potentially safer and faster driving.
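A rough sketch of the offline/online pattern described above is shown below; the helpers solve_multiobjective (offline Pareto approximation per sampled initial condition), nearest, and refine_online (the cheap online correction step) are hypothetical names introduced here for illustration only.

    # Illustrative offline/online split for multi-objective MPC (hypothetical helpers).
    def offline_phase(initial_condition_grid, solve_multiobjective):
        """Precompute approximate Pareto-optimal input sequences for sampled initial conditions."""
        library = {}
        for x0 in initial_condition_grid:          # x0 assumed hashable, e.g. a tuple
            library[x0] = solve_multiobjective(x0) # list of (weight, input sequence) pairs
        return library

    def online_step(x, preference, library, nearest, refine_online):
        """Look up the closest precomputed solution and cheaply refine it for the current state."""
        x0 = nearest(x, library.keys())            # closest sampled initial condition
        candidates = library[x0]
        u_init = min(candidates, key=lambda wu: abs(wu[0] - preference))[1]
        return refine_online(x, u_init)            # cheap online optimization step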
Move blocking (MB) is a widely used strategy to reduce the degrees of freedom of the Optimal Control Problem (OCP) arising in receding horizon control. The size of the OCP is reduced by forcing the input variables to be constant over multiple discretization steps. In this paper, we focus on developing computationally efficient MB schemes for multiple-shooting-based nonlinear model predictive control (NMPC). The degrees of freedom of the OCP are reduced by introducing MB in the shooting step, resulting in a smaller but still sparse OCP, so that the discretization accuracy and the level of sparsity are maintained. We propose a condensing algorithm that exploits the sparsity structure of the OCP and reduces the computational complexity of condensing from quadratic to linear in the number of discretization nodes. As a result, active-set methods with a warm-start strategy can be employed efficiently, allowing the use of a longer prediction horizon. A detailed comparison between the proposed scheme and nonuniform-grid NMPC is given. The effectiveness of the algorithm in reducing the computational burden while maintaining optimization accuracy and constraint fulfillment is shown by means of simulations of two different problems.
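As background, a move-blocking pattern can be encoded as a blocking matrix that maps a reduced set of free input moves onto the full discretization grid; the example below is a generic sketch of this standard construction, not the condensing algorithm proposed in the paper.

    import numpy as np

    def blocking_matrix(blocks):
        """Build T such that u_full = T @ u_reduced, with each reduced move held
        constant over the corresponding block of discretization steps."""
        N = sum(blocks)                      # total number of discretization steps
        T = np.zeros((N, len(blocks)))
        row = 0
        for j, b in enumerate(blocks):
            T[row:row + b, j] = 1.0          # hold the j-th free move for b steps
            row += b
        return T

    # Horizon of 10 steps parameterized by only 4 free moves, held for 1, 2, 3 and 4 steps.
    T = blocking_matrix([1, 2, 3, 4])
    u_reduced = np.array([0.5, -0.2, 0.1, 0.0])
    u_full = T @ u_reduced                   # length-10 input sequence entering the OCP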
Stochastic uncertainties in complex dynamical systems lead to variability of the system states, which can in turn degrade the closed-loop performance. This paper presents a stochastic model predictive control approach for a class of nonlinear systems with unbounded stochastic uncertainties. The control approach aims to shape the probability density function of the stochastic states while satisfying input and joint state chance constraints. Closed-loop stability is ensured by designing a stability constraint in terms of a stochastic control Lyapunov function, which explicitly characterizes stability in a probabilistic sense. The Fokker-Planck equation is used to describe the dynamic evolution of the states' probability density functions. This complete characterization of the probability density functions allows for shaping the state density functions as well as for the direct computation of joint state chance constraints. The closed-loop performance of the stochastic control approach is demonstrated using a continuous stirred-tank reactor.
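For reference, for an Itô diffusion $dx = f(x,u)\,dt + \sigma(x)\,dW$, the Fokker-Planck equation governing the state density $p(x,t)$ takes the standard form below; the notation is chosen here for illustration and need not match the paper's.
\[
\frac{\partial p(x,t)}{\partial t}
= -\sum_{i=1}^{n} \frac{\partial}{\partial x_i}\bigl[f_i(x,u)\,p(x,t)\bigr]
+ \frac{1}{2}\sum_{i=1}^{n}\sum_{j=1}^{n} \frac{\partial^2}{\partial x_i \partial x_j}\bigl[D_{ij}(x)\,p(x,t)\bigr],
\qquad D(x) = \sigma(x)\sigma(x)^{\top}.
\]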
In this paper, the optimal control of alignment models composed of a large number of agents is investigated in the presence of a selective action of a controller, acting to enhance consensus. Two types of selective controls are presented: a homogeneous control filtered by a selective function and a distributed control active only on a selected set of agents. As a first step toward reducing the computational cost, we introduce a model predictive control (MPC) approximation by deriving a numerical scheme with feedback selective constrained dynamics. Next, in order to cope with the numerical solution for a large number of interacting agents, we derive the mean-field limit of the feedback selective constrained dynamics, which is then solved numerically by means of a stochastic algorithm able to simulate the selective constrained dynamics efficiently. Finally, several numerical simulations are reported to show the efficiency of the proposed techniques.
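As context, a controlled alignment model of Cucker-Smale type with a selective activation of the control can be written in the generic form below; the communication kernel $P$ and selective function $S$ are symbols chosen here for illustration and need not match the paper's notation.
\[
\dot{x}_i = v_i, \qquad
\dot{v}_i = \frac{1}{N}\sum_{j=1}^{N} P(x_i, x_j)\,(v_j - v_i) + S(x_i, v_i)\,u, \qquad i = 1, \dots, N,
\]
where $u$ is the control signal, acting only where the selective function $S$ is active.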