
Lyapunov-based Stochastic Nonlinear Model Predictive Control: Shaping the State Probability Density Functions

Added by Ali Mesbah
Publication date: 2015
Language: English





Stochastic uncertainties in complex dynamical systems lead to variability of the system states, which can in turn degrade closed-loop performance. This paper presents a stochastic model predictive control approach for a class of nonlinear systems with unbounded stochastic uncertainties. The control approach aims to shape the probability density function of the stochastic states while satisfying input constraints and joint state chance constraints. Closed-loop stability is ensured by designing a stability constraint in terms of a stochastic control Lyapunov function, which explicitly characterizes stability in a probabilistic sense. The Fokker-Planck equation is used to describe the dynamic evolution of the state probability density functions. This complete characterization of the probability density functions allows for shaping the state density functions as well as for direct computation of the joint state chance constraints. The closed-loop performance of the stochastic control approach is demonstrated on a continuous stirred-tank reactor.
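To make the Fokker-Planck mechanism concrete, below is a minimal sketch of propagating a one-dimensional state PDF with an explicit finite-difference Fokker-Planck solver and reading a chance constraint directly off the PDF. The drift, diffusion, grid, and constraint threshold are illustrative stand-ins, not the paper's CSTR model or its Lyapunov-constrained optimization.

```python
import numpy as np

# Minimal 1-D Fokker-Planck solver (explicit finite differences):
#   dp/dt = -d/dx [ f(x,u) p ] + (sigma^2 / 2) d^2p/dx^2.
# The drift f, diffusion sigma, and grid are illustrative assumptions,
# not the paper's CSTR model or its Lyapunov-constrained optimization.

def fokker_planck_step(p, x, u, dt, sigma, f):
    """Advance the state PDF p(x) by one explicit Euler step of size dt."""
    dx = x[1] - x[0]
    advection = np.gradient(f(x, u) * p, dx)                  # d/dx [f p]
    diffusion = 0.5 * sigma**2 * np.gradient(np.gradient(p, dx), dx)
    p_new = np.clip(p + dt * (diffusion - advection), 0.0, None)
    return p_new / (p_new.sum() * dx)                         # keep it a valid PDF

def chance_violation(p, x, x_max):
    """P(x > x_max) read directly off the PDF, mirroring the direct
    evaluation of state chance constraints described in the abstract."""
    dx = x[1] - x[0]
    return p[x > x_max].sum() * dx

x = np.linspace(-5.0, 5.0, 401)
p = np.exp(-0.5 * (x - 2.0) ** 2) / np.sqrt(2.0 * np.pi)      # initial Gaussian PDF
f = lambda x, u: -(x - u)                                     # hypothetical closed-loop drift
for _ in range(2000):                                         # simulate 2 time units
    p = fokker_planck_step(p, x, u=0.5, dt=1e-3, sigma=0.5, f=f)
print("P(x > 2) after shaping:", chance_violation(p, x, 2.0))
```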



Related Research

154 - Chao Shang, Fengqi You (2018)
Stochastic model predictive control (SMPC) has been a promising solution to complex control problems under uncertain disturbances. However, traditional SMPC approaches either require exact knowledge of the probabilistic distributions or rely on massive numbers of scenarios generated to represent the uncertainties. In this paper, a novel scenario-based SMPC approach is proposed that actively learns a data-driven uncertainty set from available data with machine learning techniques. A systematic procedure is then proposed to calibrate the uncertainty set, which gives an appropriate probabilistic guarantee. The resulting data-driven uncertainty set is more compact than traditional norm-based sets and can help reduce the conservatism of control actions. Meanwhile, the proposed method requires fewer data samples than traditional scenario-based SMPC approaches, thereby enhancing the practicability of SMPC. Finally, the optimal control problem is cast as a single-stage robust optimization problem, which can be solved efficiently by deriving the robust counterpart problem. Feasibility and stability issues are also discussed in detail. The efficacy of the proposed approach is demonstrated on a two-mass-spring system and a building energy control problem under uncertain disturbances.
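As a rough illustration of the set-learning-plus-calibration idea (not the paper's machine-learning construction), the sketch below fits an ellipsoidal uncertainty set to training data and calibrates its radius on held-out samples so it covers a 1 - alpha fraction of them; the probabilistic guarantee here is only empirical, and all data and parameters are assumptions.

```python
import numpy as np

# Simplified sketch of calibrating a data-driven uncertainty set.
# An ellipsoid {w : (w - mu)^T S^{-1} (w - mu) <= r} stands in for the
# paper's learned set; r is calibrated on held-out data so the set
# covers a (1 - alpha) fraction of samples.

def calibrate_ellipsoid(W_train, W_calib, alpha=0.05):
    mu = W_train.mean(axis=0)
    S_inv = np.linalg.inv(np.cov(W_train, rowvar=False))
    d = W_calib - mu
    scores = np.einsum('ij,jk,ik->i', d, S_inv, d)   # squared Mahalanobis distances
    r = np.quantile(scores, 1 - alpha)               # empirical (1 - alpha) quantile
    return mu, S_inv, r

rng = np.random.default_rng(0)
W = rng.normal(size=(1000, 2))                       # hypothetical disturbance data
mu, S_inv, r = calibrate_ellipsoid(W[:600], W[600:], alpha=0.05)
print("calibrated radius:", r)
```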
Move blocking (MB) is a widely used strategy to reduce the degrees of freedom of the Optimal Control Problem (OCP) arising in receding horizon control. The size of the OCP is reduced by forcing the input variables to be constant over multiple discretization steps. In this paper, we focus on developing computationally efficient MB schemes for multiple-shooting-based nonlinear model predictive control (NMPC). The degrees of freedom of the OCP are reduced by introducing MB in the shooting step, resulting in a smaller but sparse OCP, so the discretization accuracy and level of sparsity are maintained. A condensing algorithm that exploits the sparsity structure of the OCP is proposed, which reduces the computational complexity of condensing from quadratic to linear in the number of discretization nodes. As a result, active-set methods with a warm-start strategy can be employed efficiently, allowing the use of a longer prediction horizon. A detailed comparison between the proposed scheme and nonuniform-grid NMPC is given. The effectiveness of the algorithm in reducing the computational burden while maintaining optimization accuracy and constraint fulfillment is shown by means of simulations on two different problems.
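The core of move blocking can be captured by a simple blocking matrix that holds each free input constant over a block of discretization steps. The sketch below uses illustrative block lengths and a scalar input, not the paper's shooting-step formulation, to show how the degrees of freedom shrink from the horizon length to the number of blocks.

```python
import numpy as np

# Minimal move-blocking sketch: expand a reduced input sequence to the
# full prediction horizon by holding each free input over its block.
# Block lengths are illustrative assumptions.

def blocking_matrix(blocks):
    """blocks = [3, 2, 5] -> horizon N = 10 with only 3 free moves."""
    N, m = sum(blocks), len(blocks)
    T = np.zeros((N, m))
    row = 0
    for j, b in enumerate(blocks):
        T[row:row + b, j] = 1.0   # hold the j-th free input for b steps
        row += b
    return T

T = blocking_matrix([3, 2, 5])
u_free = np.array([1.0, -0.5, 0.2])  # 3 degrees of freedom instead of 10
u_full = T @ u_free                  # full input trajectory over the horizon
print(u_full)
```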
This article considers the stochastic optimal control of discrete-time linear systems subject to (possibly) unbounded stochastic disturbances, hard constraints on the manipulated variables, and joint chance constraints on the states. A tractable convex second-order cone program (SOCP) is derived for calculating the receding-horizon control law at each time step. Feedback is incorporated during prediction by parametrizing the control law as an affine function of the disturbances. Hard input constraints are guaranteed by saturating the disturbances that appear in the control law parametrization. The joint state chance constraints are conservatively approximated as a collection of individual chance constraints, which are subsequently relaxed via the Cantelli-Chebyshev inequality. Feasibility of the SOCP is guaranteed by softening the approximated chance constraints using the exact penalty function method. Closed-loop stability in a stochastic sense is established by showing that the states satisfy a geometric drift condition outside of a compact set, so that their variance is bounded at all times. The SMPC approach is demonstrated on a continuous acetone-butanol-ethanol fermentation process used for the production of high-value-added drop-in biofuels.
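The Cantelli-Chebyshev step admits a compact numerical illustration. In the sketch below, the predicted state mean and covariance are simply assumed (in the paper they come from the affine disturbance-feedback parametrization inside the SOCP), and an individual chance constraint P(a^T x > b) <= eps is replaced by a deterministic inequality with a distribution-free backoff factor.

```python
import numpy as np

# Cantelli-Chebyshev relaxation of an individual chance constraint:
#   P(a^T x > b) <= eps   <==   a^T mu + sqrt((1-eps)/eps) * sqrt(a^T Sigma a) <= b.
# mu and Sigma are assumed given here; all numbers are illustrative.

def cantelli_tightened_lhs(a, mu, Sigma, eps):
    kappa = np.sqrt((1 - eps) / eps)          # distribution-free backoff factor
    return a @ mu + kappa * np.sqrt(a @ Sigma @ a)

a = np.array([1.0, 0.0])
mu = np.array([0.2, 0.0])
Sigma = np.array([[0.04, 0.0], [0.0, 0.01]])
eps = 0.05
lhs = cantelli_tightened_lhs(a, mu, Sigma, eps)
print("chance constraint holds with prob >= 0.95 whenever b >=", lhs)
```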
Self-triggered control (STC) is a well-established technique to reduce the number of samples in sampled-data systems and is hence particularly useful for networked control systems. At each sampling instant, an STC mechanism determines not only an updated control input but also when the next sample should be taken. In this paper, a dynamic STC mechanism for nonlinear systems is proposed. The mechanism incorporates a dynamic variable for determining the next sampling instant. Such a dynamic variable has proven to be a powerful tool for increasing sampling intervals in the closely related concept of event-triggered control, but had so far not been exploited for STC; this paper closes that gap. In the proposed mechanism, the dynamic variable is chosen to be the filtered values of the Lyapunov function at past sampling instants. Based on the dynamic variable and on hybrid Lyapunov function techniques, the next sampling instant is chosen such that an average decrease of the Lyapunov function is ensured. The proposed mechanism is illustrated with a numerical example from the literature, for which the obtained sampling intervals are significantly larger than with existing static STC mechanisms. This paper is the accepted version of [1] and also contains proofs of the main results.
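A much-simplified sketch of the dynamic mechanism follows: the dynamic variable filters Lyapunov values at past sampling instants, and the next sampling interval is the largest candidate for which a forward simulation keeps the Lyapunov function below that variable (a crude proxy for the average-decrease condition). The toy dynamics, gain, candidate intervals, and filter constant are all assumptions; the paper's actual rule relies on hybrid Lyapunov function techniques.

```python
import numpy as np

# Simplified dynamic self-triggered control sketch. The dynamic variable
# eta low-pass filters V at past sampling instants; the next interval is
# the largest candidate whose predicted trajectory keeps V below eta.

def f(x, u):                      # toy nonlinear dynamics (assumed)
    return np.array([x[1], -x[0] - x[1] + x[0]**3 / 6 + u])

V = lambda x: x @ x               # quadratic Lyapunov candidate
K = np.array([-0.5, -1.0])        # illustrative stabilizing feedback gain

def next_interval(x, u, eta, candidates, dt=1e-3):
    """Largest candidate interval with predicted V(z(t)) <= eta on [0, tau]."""
    best = candidates[0]
    for tau in candidates:
        z, ok = x.copy(), True
        for _ in range(round(tau / dt)):     # hold u and simulate forward
            z = z + dt * f(z, u)
            if V(z) > eta:
                ok = False
                break
        if ok:
            best = tau
    return best

x, eta, lam = np.array([1.0, 0.0]), 1.0, 0.7
for k in range(20):
    u = K @ x
    tau = next_interval(x, u, eta, candidates=[0.05, 0.1, 0.2, 0.4])
    for _ in range(round(tau / 1e-3)):       # apply u until the next sample
        x = x + 1e-3 * f(x, u)
    eta = lam * eta + (1 - lam) * V(x)       # filter V at sampling instants
    print(f"k={k:2d} tau={tau:.2f} V={V(x):.4f}")
```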
114 - Christoph Mark, Steven Liu (2021)
In this paper, we propose a chance-constrained stochastic model predictive control scheme for reference tracking of distributed linear time-invariant systems with additive stochastic uncertainty. The chance constraints are reformulated analytically based on mean-variance information, where we design suitable Probabilistic Reachable Sets for constraint tightening. Furthermore, the chance constraints are proven to be satisfied in closed-loop operation. The design of an invariant set for tracking complements the controller and ensures convergence to arbitrary admissible reference points, while a conditional initialization scheme provides the fundamental property of recursive feasibility. The paper closes with a numerical example highlighting the convergence to changing output references and empirical constraint satisfaction.
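To illustrate the mean-variance tightening idea in a simplified, non-distributed form, the sketch below propagates the error covariance of e_{k+1} = A_cl e_k + w_k under Gaussian additive noise and backs off a half-space constraint by the quantile-scaled standard deviation, mimicking a Probabilistic Reachable Set tube. All matrices and the risk level are illustrative assumptions, not the paper's setup.

```python
import numpy as np
from scipy.stats import norm

# Mean-variance constraint tightening sketch: for e_{k+1} = A_cl e_k + w_k
# with w_k ~ N(0, Sigma_w), each half-space constraint h^T x <= b is
# tightened by the Gaussian quantile times the std. dev. of h^T e_k.

A_cl = np.array([[0.9, 0.1], [0.0, 0.8]])    # assumed closed-loop error dynamics
Sigma_w = 0.01 * np.eye(2)                   # assumed additive noise covariance
h, b, eps = np.array([1.0, 0.0]), 1.0, 0.05
q = norm.ppf(1 - eps)                        # Gaussian (1 - eps) quantile

Sigma = np.zeros((2, 2))
for k in range(1, 31):
    Sigma = A_cl @ Sigma @ A_cl.T + Sigma_w  # error covariance recursion
    if k in (1, 10, 30):
        backoff = q * np.sqrt(h @ Sigma @ h)
        print(f"k={k:2d}: tightened nominal constraint h^T z <= {b - backoff:.4f}")
```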