
Stability for Receding-horizon Stochastic Model Predictive Control

Posted by: Ali Mesbah
Publication date: 2014
Research field: Informatics Engineering
Paper language: English





A stochastic model predictive control (SMPC) approach is presented for discrete-time linear systems with arbitrary time-invariant probabilistic uncertainties and additive Gaussian process noise. Closed-loop stability of the SMPC approach is established by appropriate selection of the cost function. Polynomial chaos is used for uncertainty propagation through system dynamics. The performance of the SMPC approach is demonstrated using the Van de Vusse reactions.
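To make the uncertainty-propagation step concrete (stated generically here, not in the paper's exact notation), generalized polynomial chaos expands the uncertain state at each prediction step in an orthogonal polynomial basis {Φ_i} of the uncertain parameters θ:

\[
x_k(\theta) \;\approx\; \sum_{i=0}^{L} \hat{x}_{k,i}\,\Phi_i(\theta),
\qquad
\mathbb{E}[x_k] = \hat{x}_{k,0},
\qquad
\operatorname{Var}[x_k] \approx \sum_{i=1}^{L} \hat{x}_{k,i}^{2}\,\langle \Phi_i^{2}\rangle \;\;\text{(componentwise)}.
\]

Because moments of the state are available directly from the expansion coefficients, an expectation-type cost over the prediction horizon can be evaluated deterministically, which is what makes it possible to select the cost function so that closed-loop stability can be established.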




Read also

We present an algorithm for controlling and scheduling multiple linear time-invariant processes on a shared bandwidth-limited communication network using adaptive sampling intervals. The controller is centralized and, at every sampling instant, computes not only the new control command for a process but also the time interval to wait until taking the next sample. The approach relies on model predictive control ideas, where the cost function penalizes the state and control effort as well as the time interval until the next sample is taken. The latter is introduced in order to generate an adaptive sampling scheme for the overall system such that the sampling time increases as the norm of the system state goes to zero. The paper presents a method for synthesizing such a predictive controller and gives explicit sufficient conditions for when it is stabilizing. Further explicit conditions are given which guarantee conflict-free transmissions on the network. It is shown that the optimization problem may be solved off-line and that the controller can be implemented as a lookup table of state feedback gains. Simulation studies which compare the proposed algorithm to periodic sampling illustrate potential performance gains.
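As a minimal illustrative sketch (not code from the paper), the off-line solution could be stored as a lookup table mapping candidate sampling intervals to state-feedback gains and cost matrices; at run time, the controller evaluates each entry for the current state and picks the interval whose predicted cost, minus a reward for sampling less often, is smallest. All gains, matrices, and the reward weight below are hypothetical placeholders.

import numpy as np

# Hypothetical lookup table: sampling interval h (in sample periods) ->
# (state-feedback gain K_h, quadratic cost matrix P_h) computed off-line.
GAIN_TABLE = {
    1: (np.array([[-1.2, -0.5]]), np.diag([4.0, 1.5])),
    2: (np.array([[-1.0, -0.4]]), np.diag([4.4, 1.7])),
    4: (np.array([[-0.8, -0.3]]), np.diag([5.1, 2.0])),
}
LAMBDA = 0.3  # illustrative reward per extra step until the next sample

def control_and_schedule(x):
    """Return (control command, steps to wait) for the current state x."""
    best = None
    for h, (K, P) in GAIN_TABLE.items():
        # Predicted quadratic cost of waiting h steps, minus the sampling reward;
        # for small ||x|| the reward dominates and a longer interval is chosen.
        score = float(x @ P @ x) - LAMBDA * h
        if best is None or score < best[0]:
            best = (score, K @ x, h)
    return best[1], best[2]

u, wait = control_and_schedule(np.array([0.4, -0.1]))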
This article considers the stochastic optimal control of discrete-time linear systems subject to (possibly) unbounded stochastic disturbances, hard constraints on the manipulated variables, and joint chance constraints on the states. A tractable convex second-order cone program (SOCP) is derived for calculating the receding-horizon control law at each time step. Feedback is incorporated during prediction by parametrizing the control law as an affine function of the disturbances. Hard input constraints are guaranteed by saturating the disturbances that appear in the control law parametrization. The joint state chance constraints are conservatively approximated as a collection of individual chance constraints that are subsequently relaxed via the Cantelli-Chebyshev inequality. Feasibility of the SOCP is guaranteed by softening the approximated chance constraints using the exact penalty function method. Closed-loop stability in a stochastic sense is established by showing that the states satisfy a geometric drift condition outside of a compact set, so that their variance is bounded at all times. The SMPC approach is demonstrated using a continuous acetone-butanol-ethanol fermentation process, which is used for production of high-value-added drop-in biofuels.
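As an illustration of the relaxation step (a standard use of the Cantelli-Chebyshev inequality; the paper's notation and constants may differ), an individual chance constraint \Pr\{a_j^{\top} x_k > b_j\} \le \varepsilon_j on a state with mean \mu_k and covariance \Sigma_k is implied by the deterministic constraint

\[
a_j^{\top}\mu_k \;+\; \sqrt{\frac{1-\varepsilon_j}{\varepsilon_j}}\;\sqrt{a_j^{\top}\Sigma_k\,a_j} \;\le\; b_j ,
\]

which is a second-order cone constraint and therefore fits directly into the SOCP formulation described above.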
Matthew Tsao, Ramon Iglesias, 2018
This paper presents a stochastic model predictive control (MPC) algorithm that leverages short-term probabilistic forecasts for dispatching and rebalancing Autonomous Mobility-on-Demand systems (AMoD, i.e. fleets of self-driving vehicles). We first present the core stochastic optimization problem in terms of a time-expanded network flow model. Then, to improve its tractability, we introduce two key relaxations. First, we replace the original stochastic problem with a Sample Average Approximation (SAA), and characterize the resulting performance guarantees. Second, we split the controller into two parts, handling the task of assigning vehicles to outstanding customers separately from that of rebalancing. This enables the problem to be solved as two totally unimodular linear programs, and thus to scale easily to large problem sizes. Finally, we test the proposed algorithm in two scenarios based on real data and show that it outperforms prior state-of-the-art algorithms. In particular, in a simulation using customer data from DiDi Chuxing, the algorithm presented here exhibits a 62.3 percent reduction in customer waiting time compared to state-of-the-art non-stochastic algorithms.
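For reference, the sample average approximation step replaces the expectation over future demand with an average over N scenarios drawn from the short-term probabilistic forecast (generic notation, not the paper's):

\[
\min_{x \in \mathcal{X}} \; \mathbb{E}_{\xi}\!\left[c(x,\xi)\right]
\;\;\longrightarrow\;\;
\min_{x \in \mathcal{X}} \; \frac{1}{N}\sum_{i=1}^{N} c\!\left(x,\xi^{(i)}\right),
\]

yielding a deterministic problem that preserves the time-expanded network-flow structure and can then be decomposed into the two totally unimodular linear programs mentioned above.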
A new distributed MPC algorithm for the regulation of dynamically coupled subsystems is presented in this paper. The current control action is computed via two robust controllers working in a nested fashion. The inner controller builds a nominal reference trajectory from a decentralized perspective. The outer controller uses this information to take into account the effects of the coupling and generate a distributed control action. The tube-based approach to robustness is employed. A supplementary constraint is included in the outer optimization problem to provide recursive feasibility of the overall controller.
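For context, the tube-based approach referred to here typically takes the following generic form (the nested inner/outer structure of this paper adds further detail beyond this sketch): a nominal trajectory \bar{x}, \bar{u} is planned under tightened constraints, and an ancillary feedback keeps the true state inside a disturbance-invariant tube around it:

\[
u = \bar{u} + K\,(x - \bar{x}),
\qquad
\bar{x} \in \mathcal{X} \ominus \mathcal{Z},
\qquad
\bar{u} \in \mathcal{U} \ominus K\mathcal{Z},
\]

so that x \in \bar{x} \oplus \mathcal{Z} holds despite bounded coupling and disturbance effects.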
The coordination of highly automated vehicles (or agents) in road intersections is an inherently nonconvex and challenging problem. In this paper, we propose a distributed motion planning scheme under reasonable vehicle-to-vehicle communication requirements. Each agent solves a nonlinear model predictive control problem in real time and transmits its planned trajectory to other agents, which may have conflicting objectives. The problem formulation is augmented with conditional constraints that enable the agents to decide whether to wait at a stopping line, if safe crossing is not possible. The involved nonconvex problems are solved very efficiently using the proximal averaged Newton method for optimal control (PANOC). We demonstrate the efficiency of the proposed approach in a realistic intersection crossing scenario.
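Schematically (in generic notation rather than the paper's), each agent i solves in real time a problem of the form

\[
\min_{u_0^i,\dots,u_{N-1}^i} \;\; \sum_{k=0}^{N-1} \ell_i\!\left(x_k^i,u_k^i\right) + V_f\!\left(x_N^i\right)
\quad \text{s.t.} \quad
x_{k+1}^i = f_i\!\left(x_k^i,u_k^i\right),\;\;
u_k^i \in \mathcal{U}_i,\;\;
g\!\left(x_k^i,\hat{x}_k^{j}\right) \le 0,
\]

where \hat{x}_k^{j} denotes the trajectories transmitted by the other agents and g collects the collision-avoidance and conditional stopping-line constraints; PANOC is used to solve this nonconvex problem within the sampling period.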