
Reference tracking stochastic model predictive control over unreliable channels and bounded control actions

Posted by Prabhat K Mishra
Publication date: 2020
Paper language: English





A stochastic model predictive control framework over unreliable Bernoulli communication channels, in the presence of unbounded process noise and under bounded control inputs, is presented for tracking a reference signal. Data losses in the control channel are compensated by a carefully designed transmission protocol, and those in the sensor channel by a dropout compensator. A class of saturated disturbance feedback policies is proposed for control in the presence of noisy dropout compensation. A reference governor generates trackable reference trajectories, and stability constraints ensure mean-square boundedness of the reference tracking error. The overall approach yields a computationally tractable quadratic program, which can be solved iteratively online.
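As a rough illustration of the kind of problem the approach reduces to, the sketch below sets up a certainty-equivalence reference-tracking quadratic program with hard input bounds in cvxpy. The saturated disturbance-feedback terms, dropout compensation, reference governor, and stability constraints described in the abstract are omitted, and the system matrices are illustrative placeholders rather than anything from the paper.

```python
# Minimal sketch (assumptions noted above): a receding-horizon reference-tracking
# QP with bounded inputs, i.e. the tractable problem class the abstract refers to.
import numpy as np
import cvxpy as cp

A = np.array([[1.0, 0.1], [0.0, 1.0]])   # placeholder double-integrator dynamics
B = np.array([[0.005], [0.1]])
Q, R = np.eye(2), 0.1 * np.eye(1)         # tracking and input weights
u_max = 1.0                               # bounded control action
N = 10                                    # prediction horizon

def tracking_qp(x0, r):
    x = cp.Variable((2, N + 1))
    u = cp.Variable((1, N))
    cost, cons = 0, [x[:, 0] == x0]
    for k in range(N):
        cost += cp.quad_form(x[:, k] - r, Q) + cp.quad_form(u[:, k], R)
        cons += [x[:, k + 1] == A @ x[:, k] + B @ u[:, k],
                 cp.abs(u[:, k]) <= u_max]        # hard input bound
    cp.Problem(cp.Minimize(cost), cons).solve()
    return u.value[:, 0]                          # apply only the first input

u0 = tracking_qp(np.array([0.0, 0.0]), np.array([1.0, 0.0]))
```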


Read also

Christoph Mark, Steven Liu (2021)
In this paper, we propose a chance constrained stochastic model predictive control scheme for reference tracking of distributed linear time-invariant systems with additive stochastic uncertainty. The chance constraints are reformulated analytically based on mean-variance information, where we design suitable Probabilistic Reachable Sets for constraint tightening. Furthermore, the chance constraints are proven to be satisfied in closed-loop operation. The design of an invariant set for tracking complements the controller and ensures convergence to arbitrary admissible reference points, while a conditional initialization scheme provides the fundamental property of recursive feasibility. The paper closes with a numerical example, highlighting the convergence to changing output references and empirical constraint satisfaction.
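As a hedged aside on how mean-variance information typically translates into tightened constraints (the paper's Probabilistic Reachable Sets may be constructed differently), the snippet below computes the standard Gaussian back-off for a half-space constraint; all names and numbers are illustrative.

```python
# Sketch: tighten h^T x <= b into h^T z <= b - Phi^{-1}(1-eps) * sqrt(h^T Sigma h)
# for the nominal state z, assuming a Gaussian tracking error with covariance Sigma.
import numpy as np
from scipy.stats import norm

def tightened_bound(h, b, Sigma, eps):
    backoff = norm.ppf(1.0 - eps) * np.sqrt(h @ Sigma @ h)
    return b - backoff

Sigma = np.diag([0.01, 0.04])                     # illustrative error covariance
print(tightened_bound(np.array([1.0, 0.0]), b=1.0, Sigma=Sigma, eps=0.05))
```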
We investigate constrained optimal control problems for linear stochastic dynamical systems evolving in discrete time. We consider minimization of an expected value cost over a finite horizon. Hard constraints are introduced first, and then reformulated in terms of probabilistic constraints. It is shown that, for a suitable parametrization of the control policy, a wide class of the resulting optimization problems is convex, or admits reasonable convex approximations.
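One common convexity-preserving choice for such a policy parametrization is causal affine disturbance feedback; the fragment below is a minimal sketch of that structure with made-up dimensions, not necessarily the specific policy class used in this work.

```python
# Sketch: causal affine disturbance feedback u_k = eta_k + sum_{j<k} M[k][j] @ w_j.
# Because the inputs are affine in the decision variables (eta, M), expected-value
# costs and many probabilistic constraints remain convex in them.
import numpy as np
import cvxpy as cp

N, n, m = 5, 2, 1                                  # horizon, state dim, input dim
eta = cp.Variable((m, N))                          # open-loop component
M = [[cp.Variable((m, n)) for _ in range(k)] for k in range(N)]  # strictly causal gains

def policy(k, w_history):
    """Input at step k given the observed disturbances w_0, ..., w_{k-1}."""
    u = eta[:, k]
    for j in range(k):
        u = u + M[k][j] @ w_history[j]
    return u

u2 = policy(2, [np.zeros(n), np.zeros(n)])         # an expression affine in (eta, M)
```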
In this paper, the optimal control of alignment models composed of a large number of agents is investigated in the presence of a selective action of a controller, acting to enhance consensus. Two types of selective controls are presented: a homogeneous control filtered by a selective function and a distributed control active only on a selective set. As a first step toward a reduction of the computational cost, we introduce a model predictive control (MPC) approximation by deriving a numerical scheme with a feedback selective constrained dynamics. Next, in order to cope with the numerical solution for a large number of interacting agents, we derive the mean-field limit of the feedback selective constrained dynamics, which is eventually solved numerically by means of a stochastic algorithm able to simulate the selective constrained dynamics efficiently. Finally, several numerical simulations are reported to show the efficiency of the proposed techniques.
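For intuition only, the sketch below takes one explicit-Euler step of a Cucker-Smale-type alignment model in which a control toward a target velocity acts on a selective set of agents; the weights, gain, and selection rule are assumptions, not the models or the MPC scheme studied in the paper.

```python
# Sketch: alignment dynamics with a control applied only to agents in the set S.
import numpy as np

def alignment_step(x, v, S, v_target, dt=0.01, u_gain=1.0):
    n_agents = len(x)
    dv = np.zeros_like(v)
    for i in range(n_agents):
        w = 1.0 / (1.0 + np.sum((x - x[i]) ** 2, axis=1))           # communication weights
        dv[i] = np.sum(w[:, None] * (v - v[i]), axis=0) / n_agents  # alignment term
        if i in S:
            dv[i] += u_gain * (v_target - v[i])                     # selective control
    return x + dt * v, v + dt * dv

rng = np.random.default_rng(0)
x, v = rng.normal(size=(50, 2)), rng.normal(size=(50, 2))
x, v = alignment_step(x, v, S={0, 1, 2}, v_target=np.array([1.0, 0.0]))
```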
Stochastic uncertainties in complex dynamical systems lead to variability of system states, which can in turn degrade the closed-loop performance. This paper presents a stochastic model predictive control approach for a class of nonlinear systems with unbounded stochastic uncertainties. The control approach aims to shape the probability density function of the stochastic states, while satisfying input and joint state chance constraints. Closed-loop stability is ensured by designing a stability constraint in terms of a stochastic control Lyapunov function, which explicitly characterizes stability in a probabilistic sense. The Fokker-Planck equation is used to describe the dynamic evolution of the state probability density functions. Complete characterization of the probability density functions via the Fokker-Planck equation allows for shaping the state density functions as well as direct computation of joint state chance constraints. The closed-loop performance of the stochastic control approach is demonstrated using a continuous stirred-tank reactor.
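To make the density-propagation idea concrete, the snippet below advances a one-dimensional state density by one explicit finite-difference step of the Fokker-Planck equation; the drift, noise level, and grid are illustrative assumptions and do not reproduce the paper's reactor example.

```python
# Sketch: dp/dt = -d/dx( f(x,u) p ) + 0.5 * sigma^2 * d^2 p / dx^2, explicit Euler in time.
import numpy as np

def fokker_planck_step(p, x, u, f, sigma, dt):
    dx = x[1] - x[0]
    drift = np.gradient(f(x, u) * p, dx)              # advection of probability mass
    diffusion = np.gradient(np.gradient(p, dx), dx)   # spreading due to process noise
    p_next = np.clip(p + dt * (-drift + 0.5 * sigma ** 2 * diffusion), 0.0, None)
    return p_next / np.trapz(p_next, x)               # renormalize to a density

x = np.linspace(-3.0, 3.0, 301)
p = np.exp(-x ** 2 / 0.5); p /= np.trapz(p, x)        # initial Gaussian-like density
p = fokker_planck_step(p, x, u=0.0, f=lambda x, u: -x + u, sigma=0.5, dt=1e-3)
```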
Chao Shang, Fengqi You (2018)
Stochastic model predictive control (SMPC) has been a promising solution to complex control problems under uncertain disturbances. However, traditional SMPC approaches either require exact knowledge of the probabilistic distributions, or rely on massive scenarios that are generated to represent uncertainties. In this paper, a novel scenario-based SMPC approach is proposed by actively learning a data-driven uncertainty set from available data with machine learning techniques. A systematic procedure is then proposed to further calibrate the uncertainty set, which gives an appropriate probabilistic guarantee. The resulting data-driven uncertainty set is more compact than traditional norm-based sets, and can help reduce the conservatism of control actions. Meanwhile, the proposed method requires fewer data samples than traditional scenario-based SMPC approaches, thereby enhancing the practicability of SMPC. Finally, the optimal control problem is cast as a single-stage robust optimization problem, which can be solved efficiently by deriving the robust counterpart problem. Feasibility and stability issues are also discussed in detail. The efficacy of the proposed approach is demonstrated through a two-mass-spring system and a building energy control problem under uncertain disturbances.
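As a loose illustration of calibrating a data-driven uncertainty set (the cited work learns and calibrates its set with different machine learning machinery), the snippet below fits an ellipsoid to disturbance samples and sizes it by an empirical quantile; all parameters are assumptions.

```python
# Sketch: ellipsoidal uncertainty set { w : (w-mu)^T Sigma^{-1} (w-mu) <= r^2 }
# with the radius r chosen so that a given fraction of the samples is covered.
import numpy as np

def calibrate_ellipsoid(W, coverage=0.95):
    mu = W.mean(axis=0)
    Sigma = np.cov(W, rowvar=False) + 1e-8 * np.eye(W.shape[1])
    d = np.einsum('ij,jk,ik->i', W - mu, np.linalg.inv(Sigma), W - mu)
    return mu, Sigma, np.sqrt(np.quantile(d, coverage))   # center, shape, radius

rng = np.random.default_rng(1)
mu, Sigma, r = calibrate_ellipsoid(rng.normal(size=(500, 2)))
```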