Recurrent Model Predictive Control

Added by Wenxuan Wang
Publication date: 2021
Research language: English

This paper proposes an offline algorithm, called Recurrent Model Predictive Control (RMPC), to solve general nonlinear finite-horizon optimal control problems. Unlike traditional Model Predictive Control (MPC) algorithms, it can make full use of the available computing resources and adaptively select the longest feasible model prediction horizon. The algorithm employs a recurrent function to approximate the optimal policy, mapping the system states and reference values directly to the control inputs. The number of prediction steps is equal to the number of recurrent cycles of the learned policy function. Starting from an arbitrary initial policy function, the proposed RMPC algorithm converges to the optimal policy by directly minimizing the designed loss function. We further prove the convergence and optimality of the RMPC algorithm through the Bellman optimality principle, and demonstrate its generality and efficiency using two numerical examples.
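
To make the mechanism concrete, here is a minimal sketch of the recurrent-policy idea, assuming a toy linear tracking problem. The names (RecurrentPolicy, rollout_loss) and the crude random-search update are illustrative stand-ins for the paper's learned policy and gradient-based loss minimization, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dynamics x_{k+1} = A x_k + B u_k, tracking the reference x_ref.
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
x_ref = np.zeros(2)
N_HIDDEN = 8

class RecurrentPolicy:
    """Maps (state, reference, hidden) -> (control, hidden).

    Unrolling this cell N times plays the role of an N-step
    prediction horizon: more recurrent cycles, longer horizon.
    """
    def __init__(self):
        self.Wx = rng.normal(0, 0.1, (N_HIDDEN, 4))        # input: [x; x_ref]
        self.Wh = rng.normal(0, 0.1, (N_HIDDEN, N_HIDDEN))
        self.Wu = rng.normal(0, 0.1, (1, N_HIDDEN))

    def step(self, x, h):
        z = np.concatenate([x, x_ref])
        h = np.tanh(self.Wx @ z + self.Wh @ h)
        return (self.Wu @ h).ravel(), h

def rollout_loss(policy, x0, n_cycles):
    """Accumulated stage cost over n_cycles recurrent unrollings."""
    x, h, cost = x0.copy(), np.zeros(N_HIDDEN), 0.0
    for _ in range(n_cycles):
        u, h = policy.step(x, h)
        cost += (x - x_ref) @ (x - x_ref) + 0.1 * float(u @ u)
        x = A @ x + B @ u
    return cost

def avg_loss(policy):
    starts = rng.normal(size=(8, 2))
    return np.mean([rollout_loss(policy, s, 10) for s in starts])

# Crude random-search loop standing in for gradient descent on the
# designed loss function.
policy = RecurrentPolicy()
best = avg_loss(policy)
for _ in range(300):
    trial = RecurrentPolicy()
    for W, V in zip((trial.Wx, trial.Wh, trial.Wu),
                    (policy.Wx, policy.Wh, policy.Wu)):
        W[...] = V + rng.normal(0, 0.02, V.shape)
    loss = avg_loss(trial)
    if loss < best:
        best, policy = loss, trial

print(f"trained average rollout cost: {best:.3f}")
```

Each additional recurrent cycle lengthens the effective prediction horizon, which is what lets the controller match its horizon to the computation time available at run time.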



Related research

We present recent results that demonstrate the power of viewing the problem of V-formation in a flock of birds as one of Model Predictive Control (MPC). The V-formation-MPC marriage can be understood in terms of the problem of synthesizing an optimal plan for a continuous-space and continuous-time Markov decision process (MDP), where the goal is to reach a target state that minimizes a given cost function. First, we consider ARES, an approximation algorithm for generating optimal plans (action sequences) that take an initial state of an MDP to a state whose cost is below a specified (convergence) threshold. ARES uses Particle Swarm Optimization, with adaptive sizing for both the receding horizon and the particle swarm. Inspired by Importance Splitting, the length of the horizon and the number of particles are chosen such that at least one particle reaches a next-level state. ARES can alternatively be viewed as a model predictive control algorithm that utilizes an adaptive receding horizon, also known as Adaptive MPC (AMPC). We next present Distributed AMPC (DAMPC), a distributed version of AMPC that works with local neighborhoods. We introduce adaptive neighborhood resizing, whereby the neighborhood size is determined by the cost-based Lyapunov function evaluated over a global system state. Our experiments show that DAMPC can perform almost as well as centralized AMPC, while using only local information and a form of distributed consensus in each time step. Finally, inspired by security attacks on cyber-physical systems, we introduce controller-attacker games (CAG), where two players, a controller and an attacker, have antagonistic objectives. We formulate a special case of CAG called V-formation games, where the attacker's goal is to prevent the controller from attaining V-formation. We demonstrate how adaptation in the design of the controller helps in overcoming certain attacks.
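
The adaptive-horizon core of ARES/AMPC can be sketched compactly. The following uses random shooting in place of full Particle Swarm Optimization, and the toy dynamics, cost function, and level schedule are assumptions for illustration only:

```python
import numpy as np

rng = np.random.default_rng(1)

def step(x, u):
    # Toy scalar dynamics standing in for the MDP transition.
    return 0.9 * x + u

def cost(x):
    # Distance to the target state x = 0.
    return abs(x)

def adaptive_plan(x0, level, max_horizon=10, max_particles=256):
    """Grow the horizon and the particle count until at least one
    particle (action sequence) reaches a state below `level`."""
    horizon, n = 1, 16
    while horizon <= max_horizon:
        plans = rng.uniform(-1.0, 1.0, size=(n, horizon))
        for plan in plans:
            x = x0
            for u in plan:
                x = step(x, u)
            if cost(x) < level:
                return plan  # a next-level state was reached
        horizon += 1                   # adaptive receding horizon
        n = min(2 * n, max_particles)  # adaptive particle-swarm size
    return None

# Drive the system through successively lower cost levels, in the
# spirit of Importance Splitting.
x, levels = 5.0, [3.0, 1.0, 0.2]
for lvl in levels:
    plan = adaptive_plan(x, lvl)
    if plan is None:
        break
    for u in plan:
        x = step(x, u)
    print(f"level {lvl}: cost now {cost(x):.2f}")
```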
Model predictive control (MPC) is a method to formulate the optimal scheduling problem for grid flexibilities in a mathematical manner. The resulting time-constrained optimization problem can be re-solved at each optimization time step using classical optimization methods such as Second Order Cone Programming (SOCP) or interior point methods (e.g., IPOPT). When applying MPC in a rolling horizon scheme, the impact of forecast uncertainty on the optimal schedule is reduced. While MPC methods promise accurate results for time-constrained grid optimization, they are inherently limited by the calculation time needed for large and complex power system models. Learning the optimal control behaviour using function approximation offers the possibility to determine near-optimal control actions with short calculation time. A Neural Predictive Control (NPC) scheme is proposed to learn optimal control policies for linear and nonlinear power systems through imitation. It is demonstrated that this procedure can find near-optimal solutions while reducing the calculation time by an order of magnitude. The learned controllers are validated using a benchmark smart grid.
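
A stripped-down version of the imitation idea behind such an NPC scheme might look as follows. The grid-search "expert" and the polynomial approximator are illustrative assumptions (the paper uses a neural network and real power-system models):

```python
import numpy as np

rng = np.random.default_rng(2)

def expert_action(x):
    """Slow stand-in for exactly solving the scheduling problem
    (here: a one-step grid search over a toy system)."""
    candidates = np.linspace(-1, 1, 201)
    nxt = 0.9 * x + candidates                # toy dynamics
    costs = nxt ** 2 + 0.1 * candidates ** 2  # stage cost
    return candidates[np.argmin(costs)]

# Collect an imitation dataset of (state, expert action) pairs.
states = rng.uniform(-5, 5, 500)
actions = np.array([expert_action(x) for x in states])

# Fit a cheap cubic-polynomial policy by least squares.
X = np.vander(states, 4)
w, *_ = np.linalg.lstsq(X, actions, rcond=None)

def fast_policy(x):
    return float(np.vander(np.atleast_1d(x), 4) @ w)

# The learned controller replaces the optimizer at run time.
x = 4.0
for _ in range(5):
    x = 0.9 * x + fast_policy(x)
print(f"state after 5 learned-control steps: {x:.3f}")
```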
Wen-Hua Chen (2020)
This paper presents stability analysis tools for model predictive control (MPC) with and without terminal weight. Stability analysis of MPC with a limited horizon but without terminal weight is a long-standing open problem. By using a modified value function as a Lyapunov function candidate and the principle of optimality, this paper establishes stability conditions for this widely used class of MPC algorithms. A new stability-guaranteed MPC algorithm without terminal weight (MPCS) is presented. With the help of a newly designed sublevel set defined by the value function of the one-step-ahead stage cost, conditions for checking the recursive feasibility and stability of the proposed MPC algorithm are presented. The new stability condition and the derived MPCS overcome the difficulties arising in the existing terminal-weight-based MPC framework, including the need to search for a suitable terminal weight and the possible poor performance caused by an inappropriate terminal weight. This work is further extended to MPC with a terminal weight for completeness. Numerical examples are presented to demonstrate the effectiveness of the proposed tool, whereas the existing stability analysis tools are either not applicable or lead to quite conservative results. The analysis shows that the proposed tools offer a number of mechanisms to achieve stability: adjusting state and/or control weights, extending the length of the horizon, and adding a simple extra constraint on the first or second state in the optimisation.
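
The decrease condition at the heart of such value-function-based stability arguments can be probed numerically. Below is a minimal sketch, assuming a scalar toy system and a brute-force MPC solver without terminal weight; it tests the standard Lyapunov decrease condition, not the paper's specific MPCS construction:

```python
import itertools
import numpy as np

U = np.linspace(-1, 1, 21)              # control grid
f = lambda x, u: 0.9 * x + u            # toy dynamics
l = lambda x, u: x ** 2 + 0.1 * u ** 2  # stage cost, no terminal weight

def mpc(x0, N=3):
    """Optimal cost V_N(x0) and first optimal control, by enumeration."""
    best_cost, best_u0 = np.inf, 0.0
    for seq in itertools.product(U, repeat=N):
        x, c = x0, 0.0
        for u in seq:
            c += l(x, u)
            x = f(x, u)
        if c < best_cost:
            best_cost, best_u0 = c, seq[0]
    return best_cost, best_u0

# Probe the Lyapunov-style decrease V(x+) <= V(x) - l(x, u) along the
# closed loop; certifying when such a condition holds is exactly what
# stability tools for terminal-weight-free MPC must establish.
for x in [4.0, -2.0, 0.5]:
    V, u = mpc(x)
    V_next, _ = mpc(f(x, u))
    print(f"x={x:+.1f}: value drop {V - V_next:.3f} "
          f"vs stage cost {l(x, u):.3f}")
```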
In this paper we present a Learning Model Predictive Control (LMPC) strategy for linear and nonlinear time-optimal control problems. Our work builds on existing LMPC methodologies and guarantees finite-time convergence properties for the closed-loop system. We show how to construct a time-varying safe set and terminal cost function using closed-loop data. The resulting LMPC policy is time varying and guarantees recursive constraint satisfaction and non-decreasing performance. Computational efficiency is obtained by convexifying the safe set and terminal cost function. We demonstrate that, for a class of nonlinear systems and convex constraints, the convex LMPC formulation guarantees recursive constraint satisfaction and non-decreasing performance. Finally, we illustrate the effectiveness of the proposed strategies on minimum-time obstacle avoidance and racing examples.
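
The safe-set and terminal-cost construction from closed-loop data can be illustrated on a toy time-optimal problem. This sketch uses a 1-D integrator and a nearest-point membership test instead of the paper's convexified safe set; all names, tolerances, and the enumeration solver are assumptions:

```python
import itertools
import numpy as np

f = lambda x, u: x + u  # toy integrator dynamics
l = lambda x, u: 1.0    # time-optimal: every step costs 1
U = np.linspace(-0.5, 0.5, 11)

# Iteration 0: a feasible but slow trajectory to the origin seeds the
# sampled safe set; its recorded costs-to-go define the terminal cost.
traj = [3.0]
while abs(traj[-1]) > 1e-9:
    traj.append(traj[-1] - np.sign(traj[-1]) * min(0.25, abs(traj[-1])))
safe_set = np.array(traj)
cost_to_go = np.arange(len(traj) - 1, -1, -1, dtype=float)

def Q(x):
    """Terminal cost: cost-to-go of the nearest stored safe state;
    infinite when x lies outside the sampled safe set."""
    d = np.abs(safe_set - x)
    i = np.argmin(d)
    return cost_to_go[i] if d[i] < 0.01 else np.inf

def lmpc_step(x, N=2):
    """Short-horizon MPC whose terminal cost Q forces the predicted
    terminal state back into the safe set."""
    best, best_u0 = np.inf, 0.0
    for seq in itertools.product(U, repeat=N):
        xN, c = x, 0.0
        for u in seq:
            c += l(xN, u)
            xN = f(xN, u)
        c += Q(xN)
        if c < best:
            best, best_u0 = c, seq[0]
    return best_u0

# One learning iteration: the new closed loop is at least as fast as
# the seed trajectory, mirroring the non-decreasing performance claim.
x, steps = 3.0, 0
while abs(x) > 1e-9 and steps < 50:
    x = f(x, lmpc_step(x))
    steps += 1
print(f"new trajectory: {steps} steps (seed took {len(traj) - 1})")
```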
