
Minimum Time Learning Model Predictive Control

Posted by: Ugo Rosolia
Publication date: 2019
Paper language: English





In this paper we present a Learning Model Predictive Control (LMPC) strategy for linear and nonlinear time optimal control problems. Our work builds on existing LMPC methodologies and guarantees finite-time convergence properties for the closed-loop system. We show how to construct a time-varying safe set and terminal cost function using closed-loop data. The resulting LMPC policy is time varying and guarantees recursive constraint satisfaction and non-decreasing performance. Computational efficiency is obtained by convexifying the safe set and terminal cost function. We demonstrate that, for a class of nonlinear systems and convex constraints, the convex LMPC formulation guarantees recursive constraint satisfaction and non-decreasing performance. Finally, we illustrate the effectiveness of the proposed strategies on minimum-time obstacle avoidance and racing examples.
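For context, a minimal sketch of the kind of finite-time optimal control problem such a time-optimal LMPC solves at time $t$ of iteration $j$ is given below; the notation (goal set $\mathcal{X}_F$, convexified safe set $\mathrm{conv}(\mathcal{CS}^{j-1})$ built from stored closed-loop states, terminal cost $Q^{j-1}$ interpolating the recorded time-to-go) is illustrative and not necessarily the paper's exact formulation:

$$
\begin{aligned}
\min_{u_{t|t},\dots,u_{t+N-1|t}} \quad & \sum_{k=t}^{t+N-1} \mathbb{1}\!\left[x_{k|t} \notin \mathcal{X}_F\right] + Q^{j-1}\!\left(x_{t+N|t}\right) \\
\text{s.t.} \quad & x_{k+1|t} = f\!\left(x_{k|t}, u_{k|t}\right), \quad x_{k|t} \in \mathcal{X}, \quad u_{k|t} \in \mathcal{U}, \\
& x_{t+N|t} \in \mathrm{conv}\!\left(\mathcal{CS}^{j-1}\right), \qquad x_{t|t} = x_t .
\end{aligned}
$$

The stage cost counts the time steps spent outside the goal set, so minimizing it yields minimum-time behaviour, while the terminal set and terminal cost are exactly the objects that the closed-loop data from earlier iterations provide.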


Read also

428 - Ugo Rosolia, Xiaojing Zhang, 2019
A robust Learning Model Predictive Controller (LMPC) for uncertain systems performing iterative tasks is presented. At each iteration of the control task the closed-loop state, input and cost are stored and used in the controller design. This paper first illustrates how to construct robust invariant sets and safe control policies exploiting historical data. Then, we propose an iterative LMPC design procedure, where data generated by a robust controller at iteration $j$ are used to design a robust LMPC at the next iteration $j+1$. We show that this procedure allows us to iteratively enlarge the domain of the control policy and that it guarantees recursive constraint satisfaction, input-to-state stability and performance bounds for the certainty-equivalent closed-loop system. The use of an adaptive prediction horizon is the key element of the proposed design. The effectiveness of the proposed control scheme is illustrated on a linear system subject to a bounded additive disturbance.
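As a rough, illustrative sketch of the iterative design loop described above (closed-loop data from iteration $j$ feeding the controller used at iteration $j+1$), the Python snippet below shows only the data-collection structure; the double-integrator dynamics, the disturbance bound and the placeholder design_controller step are assumptions for illustration, not the paper's robust synthesis:

# Illustrative sketch: store closed-loop data at each iteration and redesign
# the controller for the next one. All models and gains below are hypothetical.
import numpy as np

A, B = np.array([[1.0, 1.0], [0.0, 1.0]]), np.array([[0.0], [1.0]])
w_bound = 0.05                        # bounded additive disturbance, |w_i| <= w_bound

def design_controller(stored_states, stored_inputs):
    # Placeholder: a fixed stabilizing gain. A real robust LMPC would build
    # robust invariant sets and a terminal cost from the stored data here.
    K = np.array([[-0.6, -1.2]])
    return lambda x: K @ x

states, inputs = [], []               # data reused across iterations
policy = design_controller(states, inputs)

for j in range(3):                    # iterations of the repetitive task
    x = np.array([5.0, 0.0])          # same initial condition at every iteration
    for t in range(30):               # one execution of the task
        u = policy(x)
        w = np.random.uniform(-w_bound, w_bound, size=2)
        x = A @ x + (B @ u).ravel() + w
        states.append(x.copy())
        inputs.append(u.copy())
    policy = design_controller(states, inputs)   # design for iteration j+1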
134 - Wei-Han Chen, Fengqi You, 2019
Appropriate greenhouse temperature should be maintained to ensure crop production while minimizing energy consumption. Even though weather forecasts can provide a certain amount of information to improve control performance, they are not perfect, and forecast error may cause the temperature to deviate from the acceptable range. To address the inherent uncertainty in weather that affects control accuracy, this paper develops a data-driven robust model predictive control (MPC) approach for greenhouse temperature control. The dynamic model is obtained from thermal resistance-capacitance modeling derived with the Building Resistance-Capacitance Modeling (BRCM) toolbox. Uncertainty sets of ambient temperature and solar radiation are captured by a support vector clustering technique, and they are further tuned for better quality by a training-calibration procedure. A case study that implements the carefully chosen uncertainty sets in robust model predictive control shows that the data-driven robust MPC achieves better control performance compared to rule-based control, certainty-equivalent MPC, and robust MPC.
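The following is a minimal sketch of how such a data-driven uncertainty set could be learned from historical forecast errors; scikit-learn's OneClassSVM is used here merely as a stand-in for the support vector clustering technique, and the calibration step and all numbers are illustrative assumptions:

# Learn an uncertainty set for weather forecast errors from data (illustrative).
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
# Hypothetical historical forecast errors: [ambient temperature (degC), solar radiation (W/m^2)]
errors = rng.normal(0.0, [1.5, 40.0], size=(500, 2))

svc = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale").fit(errors)

# The learned decision boundary defines the uncertainty set
#   U = { w : decision_function(w) >= 0 },
# which a robust MPC would then enforce its constraints against for all w in U.
test = np.array([[0.5, 10.0], [6.0, 200.0]])
print(svc.decision_function(test))    # >= 0 inside the set, < 0 outside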
Accounting for more than 40% of global energy consumption, residential and commercial buildings will be key players in any future green energy systems. To fully exploit their potential while ensuring occupant comfort, a robust control scheme is required to handle various uncertainties, such as external weather and occupant behaviour. However, prominent patterns, especially periodicity, are widely seen in most sources of uncertainty. This paper incorporates this correlated structure into the learning model predictive control framework, in order to learn a globally optimal robust control scheme for building operations.
This paper proposes an off-line algorithm, called Recurrent Model Predictive Control (RMPC), to solve general nonlinear finite-horizon optimal control problems. Unlike traditional Model Predictive Control (MPC) algorithms, it can make full use of the available computing resources and adaptively select the longest model prediction horizon. Our algorithm employs a recurrent function to approximate the optimal policy, which maps the system states and reference values directly to the control inputs. The number of prediction steps is equal to the number of recurrent cycles of the learned policy function. Starting from an arbitrary initial policy function, the proposed RMPC algorithm converges to the optimal policy by directly minimizing the designed loss function. We further prove the convergence and optimality of the RMPC algorithm via the Bellman optimality principle, and demonstrate its generality and efficiency using two numerical examples.
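A toy Python sketch of the recurrent-policy idea follows: a single learned cell is applied repeatedly, and the number of recurrent cycles plays the role of the prediction horizon. The cell weights are random placeholders standing in for a trained network, purely for illustration:

# Toy recurrent policy: more recurrent cycles ~ longer prediction horizon.
import numpy as np

rng = np.random.default_rng(1)
W_h = rng.normal(scale=0.1, size=(8, 8))      # recurrent weights (untrained placeholder)
W_in = rng.normal(scale=0.1, size=(8, 3))     # input: [state (2), reference (1)]
W_out = rng.normal(scale=0.1, size=(1, 8))

def recurrent_policy(x, ref, n_cycles):
    """Map state and reference to a control input using n_cycles recurrent steps."""
    h = np.zeros(8)
    z = np.concatenate([x, [ref]])
    for _ in range(n_cycles):                 # the cycle count acts like the horizon
        h = np.tanh(W_h @ h + W_in @ z)
    return (W_out @ h)[0]

x0, reference = np.array([1.0, -0.5]), 0.0
for n in (1, 5, 20):                          # adapt the horizon to the compute budget
    print(n, recurrent_policy(x0, reference, n))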
We propose Kernel Predictive Control (KPC), a learning-based predictive control strategy that enjoys deterministic guarantees of safety. Noise-corrupted samples of the unknown system dynamics are used to learn several models through the formalism of non-parametric kernel regression. By treating each prediction step individually, we dispense with the need of propagating sets through highly non-linear maps, a procedure that often involves multiple conservative approximation steps. Finite-sample error bounds are then used to enforce state-feasibility by employing an efficient robust formulation. We then present a relaxation strategy that exploits on-line data to weaken the optimization problem constraints while preserving safety. Two numerical examples are provided to illustrate the applicability of the proposed control method.
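To make the "one model per prediction step" idea concrete, here is a small sketch that fits a separate kernel regressor for each step ahead from noisy trajectory data; KernelRidge is a simple stand-in for the paper's non-parametric regression, the finite-sample error bounds are omitted, and the dynamics and noise level are made up:

# Fit one kernel model per prediction step instead of propagating a one-step model.
import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(2)

def true_step(x, u):
    # Unknown nonlinear dynamics (hypothetical), only accessed through noisy samples.
    return 0.9 * x + 0.2 * u + 0.1 * np.sin(x)

N, H = 400, 3                                  # number of samples, prediction horizon
X0 = rng.uniform(-2, 2, N)                     # initial states
U = rng.uniform(-1, 1, (N, H))                 # applied input sequences
targets = []
for k in range(H):                             # outcomes after k+1 steps, corrupted by noise
    xk = X0.copy()
    for i in range(k + 1):
        xk = true_step(xk, U[:, i])
    targets.append(xk + rng.normal(0, 0.02, N))

# One regressor per step: inputs are (x0, u_0, ..., u_k), output is the state after k+1 steps
features = [np.column_stack([X0] + [U[:, i] for i in range(k + 1)]) for k in range(H)]
models = [KernelRidge(kernel="rbf", alpha=1e-3, gamma=1.0).fit(features[k], targets[k])
          for k in range(H)]

# Multi-step predictions for a candidate input sequence, each from its own model
x_init, u_seq = 1.0, np.array([0.5, -0.3, 0.2])
for k, m in enumerate(models):
    z = np.concatenate([[x_init], u_seq[:k + 1]]).reshape(1, -1)
    print(f"{k + 1}-step prediction:", float(m.predict(z)[0]))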