
Self-triggered Model Predictive Control for Continuous-Time Systems: A Multiple Discretizations Approach

Added by Kazumune Hashimoto
Publication date: 2016
Language: English





In this paper, we propose a new self-triggered formulation of Model Predictive Control for continuous-time linear networked control systems. Our control approach, which aims at reducing the number of control samples transmitted to the plant, is derived by solving in parallel optimal control problems with different sampling time intervals. The controller then selects one sampling pattern as the transmission decision, such that both a reduction of the communication load and stability are obtained. The proposed strategy is illustrated through comparative simulation examples.
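A minimal sketch of the multiple-discretizations idea described above, not the paper's exact formulation: one finite-horizon problem is solved per candidate sampling interval, and the largest interval whose predicted cost stays within a tolerance of the best is chosen. The linear system, the scaling of the stage weights by the sampling interval, the tolerance factor `alpha`, and the example matrices and intervals are all illustrative assumptions.

```python
import numpy as np
from scipy.linalg import expm

def discretize(A, B, dt):
    """Zero-order-hold discretization via the augmented matrix exponential."""
    n, m = B.shape
    M = np.zeros((n + m, n + m))
    M[:n, :n], M[:n, n:] = A, B
    Md = expm(M * dt)
    return Md[:n, :n], Md[:n, n:]

def finite_horizon_cost(Ad, Bd, Q, R, Qf, x0, N):
    """Predicted optimal cost x0' P0 x0 from a backward Riccati recursion."""
    P = Qf
    for _ in range(N):
        K = np.linalg.solve(R + Bd.T @ P @ Bd, Bd.T @ P @ Ad)
        P = Q + Ad.T @ P @ (Ad - Bd @ K)
    return float(x0 @ P @ x0)

def choose_sampling_interval(A, B, Q, R, Qf, x0, T, dts, alpha=1.05):
    """Return the largest dt whose cost is within a factor alpha of the best."""
    costs = {}
    for dt in dts:
        Ad, Bd = discretize(A, B, dt)
        N = int(round(T / dt))
        # Scale the stage weights by dt to approximate the continuous-time integral.
        costs[dt] = finite_horizon_cost(Ad, Bd, Q * dt, R * dt, Qf, x0, N)
    best = min(costs.values())
    return max(dt for dt, c in costs.items() if c <= alpha * best)

# Illustrative example: double integrator, candidate intervals 0.05 s to 0.4 s.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
Q, R, Qf = np.eye(2), np.eye(1), 10 * np.eye(2)
x0 = np.array([1.0, 0.0])
print(choose_sampling_interval(A, B, Q, R, Qf, x0, T=2.0, dts=[0.05, 0.1, 0.2, 0.4]))
```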



Related research

We present an algorithm for controlling and scheduling multiple linear time-invariant processes on a shared, bandwidth-limited communication network using adaptive sampling intervals. The controller is centralized and, at every sampling instant, computes not only the new control command for a process but also the time interval to wait before taking the next sample. The approach relies on model predictive control ideas, where the cost function penalizes the state and control effort as well as the time interval until the next sample is taken. The latter is introduced in order to generate an adaptive sampling scheme for the overall system such that the sampling time increases as the norm of the system state goes to zero. The paper presents a method for synthesizing such a predictive controller and gives explicit sufficient conditions for when it is stabilizing. Further explicit conditions are given that guarantee conflict-free transmissions on the network. It is shown that the optimization problem may be solved off-line and that the controller can be implemented as a lookup table of state feedback gains. Simulation studies comparing the proposed algorithm to periodic sampling illustrate the potential performance gains.
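The sketch below is not the paper's synthesis procedure and carries none of its stability or conflict-free-scheduling conditions; it only illustrates, under assumed candidate intervals and an assumed reward weight `rho`, how an offline lookup table of interval-dependent LQR gains can trade predicted cost against waiting longer, so that the chosen sampling time grows as the state norm shrinks.

```python
import numpy as np
from scipy.linalg import expm, solve_discrete_are

def zoh(A, B, h):
    """Zero-order-hold discretization with sampling interval h."""
    n, m = B.shape
    M = expm(np.block([[A * h, B * h], [np.zeros((m, n + m))]]))
    return M[:n, :n], M[:n, n:]

def build_table(A, B, Q, R, intervals):
    """Offline: per candidate interval, store the DARE solution P_h and gain K_h."""
    table = {}
    for h in intervals:
        Ad, Bd = zoh(A, B, h)
        P = solve_discrete_are(Ad, Bd, Q * h, R * h)
        K = np.linalg.solve(R * h + Bd.T @ P @ Bd, Bd.T @ P @ Ad)
        table[h] = (P, K)
    return table

def next_sample(table, x, rho=0.5):
    """Online lookup: pick the interval minimizing x'P_h x - rho*h, return (h, u).
    For small ||x|| the -rho*h term dominates, so longer intervals are chosen."""
    h = min(table, key=lambda h: float(x @ table[h][0] @ x) - rho * h)
    return h, -table[h][1] @ x

# Usage (hypothetical double integrator):
# table = build_table(A, B, np.eye(2), np.eye(1), [0.05, 0.1, 0.2, 0.4])
# h, u = next_sample(table, np.array([1.0, 0.0]))
```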
In this paper, the optimal control of alignment models composed of a large number of agents is investigated in the presence of a selective action of a controller, applied in order to enhance consensus. Two types of selective controls are presented: a homogeneous control filtered by a selective function and a distributed control active only on a selective set. As a first step toward reducing the computational cost, we introduce a model predictive control (MPC) approximation by deriving a numerical scheme with feedback selective constrained dynamics. Next, in order to cope with the numerical solution for a large number of interacting agents, we derive the mean-field limit of the feedback selective constrained dynamics, which is eventually solved numerically by means of a stochastic algorithm able to simulate the selective constrained dynamics efficiently. Finally, several numerical simulations are reported to show the efficiency of the proposed techniques.
We present an algorithm for robust model predictive control that accounts for uncertainty and safety constraints. Our framework considers a nonlinear dynamical system subject to disturbances from an unknown but bounded uncertainty set. By viewing the system as a fixed point of an operator acting over trajectories, we propose a convex condition on the control actions that guarantees safety against the uncertainty set, in the sense that all realizations of the state trajectories satisfy the safety constraints. Our algorithm solves a sequence of convex quadratic constrained optimization problems of size n*N, where n is the number of states and N is the prediction horizon of the model predictive control problem. Compared to existing methods, our approach solves convex problems while guaranteeing that no realization of the uncertainty set violates the safety constraints. Moreover, we consider an implicit time-discretization of the system dynamics to increase the prediction horizon and enhance computational accuracy. Numerical simulations of vehicle navigation demonstrate the effectiveness of our approach.
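The abstract's guarantee rests on a fixed-point/operator view of the trajectories and a convex condition on the control actions, which cannot be reconstructed from the abstract alone. As a loose, standard substitute, the sketch below shows componentwise constraint tightening for a linear system with a bounded additive disturbance, i.e. one simple way that "safe for all realizations of the uncertainty set" becomes shrunken nominal state bounds over the horizon; all matrices and bounds are assumptions.

```python
import numpy as np

def tightened_bounds(Ad, w_max, x_max, N):
    """Open-loop tightening: if the nominal state satisfies
    |x_nom_k| <= x_max - sum_{j<k} |Ad^j| w_max componentwise, then the true state
    of x+ = Ad x + Bd u + w satisfies |x_k| <= x_max for every |w_i| <= w_max."""
    bounds, margin = [], np.zeros_like(x_max)
    for k in range(N + 1):
        bounds.append(x_max - margin)
        margin = margin + np.abs(np.linalg.matrix_power(Ad, k)) @ w_max
    return bounds
```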
In real-world problems, uncertainties (e.g., measurement errors, precision errors) often lead to poor performance of numerical algorithms when they are not explicitly taken into account. This is also the case for control problems, where optimal solutions can degrade in quality or even become infeasible. Thus, there is a need for methods that can handle uncertainty. In this work, we consider nonlinear multi-objective optimal control problems with uncertainty in the initial conditions, and in particular their incorporation into a feedback loop via model predictive control (MPC). In multi-objective optimal control, an optimal compromise between multiple conflicting criteria has to be found. For such problems, little has been reported on the treatment of uncertainties. To address this problem class, we design an offline/online framework to compute an approximation of efficient control strategies. This approach is closely related to explicit MPC for nonlinear systems, where the potentially expensive optimization problem is solved in an offline phase in order to enable fast solutions in the online phase. To reduce the numerical cost of the offline phase, we exploit symmetries in the control problems. Furthermore, to ensure optimality of the solutions, we include an additional online optimization step, which is considerably cheaper than the original multi-objective optimization problem. We test our framework on a car maneuvering problem where safety and speed are the objectives. The multi-objective framework allows for online adaptation of the desired objective. Alternatively, an automatic scalarizing procedure yields very efficient feedback controls. Our results show that the method is capable of designing driving strategies that deal better with uncertainties in the initial conditions, which translates into potentially safer and faster driving strategies.
Move blocking (MB) is a widely used strategy for reducing the degrees of freedom of the Optimal Control Problem (OCP) arising in receding horizon control. The size of the OCP is reduced by forcing the input variables to be constant over multiple discretization steps. In this paper, we focus on developing computationally efficient MB schemes for multiple-shooting-based nonlinear model predictive control (NMPC). The degrees of freedom of the OCP are reduced by introducing MB in the shooting step, resulting in a smaller but sparse OCP, so that the discretization accuracy and level of sparsity are maintained. A condensing algorithm that exploits the sparsity structure of the OCP is proposed, which reduces the computational complexity of condensing from quadratic to linear in the number of discretization nodes. As a result, active-set methods with a warm-start strategy can be employed efficiently, allowing the use of a longer prediction horizon. A detailed comparison between the proposed scheme and nonuniform-grid NMPC is given. The effectiveness of the algorithm in reducing the computational burden while maintaining optimization accuracy and constraint fulfillment is shown by means of simulations with two different problems.
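The sketch below illustrates only the input-parameterization idea behind move blocking, not the paper's multiple-shooting NMPC scheme or its sparsity-exploiting condensing algorithm: the full input sequence is forced to be constant over groups of discretization steps via a blocking matrix, and a small unconstrained linear-quadratic problem is condensed and solved over the reduced inputs. The block lengths and system matrices are assumptions.

```python
import numpy as np

def blocking_matrix(blocks, m):
    """Map the reduced inputs to the full sequence: u_full = T @ u_blocked.
    `blocks` lists how many steps each free input is held constant, e.g. [1, 1, 2, 4]."""
    N = sum(blocks)
    T = np.zeros((N * m, len(blocks) * m))
    row = 0
    for j, length in enumerate(blocks):
        for _ in range(length):
            T[row:row + m, j * m:(j + 1) * m] = np.eye(m)
            row += m
    return T

def condensed_blocked_qp(Ad, Bd, Q, R, x0, blocks):
    """Condense the predictions X = Phi x0 + Gamma U and solve the unconstrained
    blocked least-squares problem over the reduced inputs; return the first input."""
    n, m = Bd.shape
    N = sum(blocks)
    Phi = np.vstack([np.linalg.matrix_power(Ad, k + 1) for k in range(N)])
    Gamma = np.zeros((N * n, N * m))
    for k in range(N):
        for i in range(k + 1):
            Gamma[k*n:(k+1)*n, i*m:(i+1)*m] = np.linalg.matrix_power(Ad, k - i) @ Bd
    T = blocking_matrix(blocks, m)
    Qbar, Rbar = np.kron(np.eye(N), Q), np.kron(np.eye(N), R)
    H = T.T @ (Gamma.T @ Qbar @ Gamma + Rbar) @ T
    g = T.T @ Gamma.T @ Qbar @ Phi @ x0
    u_blocked = np.linalg.solve(H, -g)
    return (T @ u_blocked)[:m]  # receding horizon: apply only the first input
```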