
On convex problems in chance-constrained stochastic model predictive control

Added by: Debasish Chatterjee
Publication date: 2009
Language: English





We investigate constrained optimal control problems for linear stochastic dynamical systems evolving in discrete time. We consider minimization of an expected value cost over a finite horizon. Hard constraints are introduced first, and then reformulated in terms of probabilistic constraints. It is shown that, for a suitable parametrization of the control policy, a wide class of the resulting optimization problems is convex or admits reasonable convex approximations.
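For orientation, a schematic version of this problem class can be written as follows; the horizon N, stage costs \ell_t, constraint set \mathcal{X}, and violation level \alpha are assumed notation for this sketch rather than the paper's own symbols:

\begin{aligned}
\min_{\pi_0,\dots,\pi_{N-1}} \;& \mathbb{E}\Big[\textstyle\sum_{t=0}^{N-1}\ell_t(x_t,u_t)+\ell_N(x_N)\Big] \\
\text{s.t.}\;& x_{t+1}=Ax_t+Bu_t+w_t, \qquad u_t=\pi_t(x_0,w_0,\dots,w_{t-1}), \\
& \mathbb{P}(x_t\in\mathcal{X})\ge 1-\alpha, \qquad t=1,\dots,N.
\end{aligned}

Convexity then hinges on restricting each policy \pi_t to be affine in (possibly saturated) past disturbances, so that the decision variables enter the cost and the reformulated probabilistic constraints convexly.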



Related research

This article considers the stochastic optimal control of discrete-time linear systems subject to (possibly) unbounded stochastic disturbances, hard constraints on the manipulated variables, and joint chance constraints on the states. A tractable convex second-order cone program (SOCP) is derived for calculating the receding-horizon control law at each time step. Feedback is incorporated during prediction by parametrizing the control law as an affine function of the disturbances. Hard input constraints are guaranteed by saturating the disturbances that appear in the control law parametrization. The joint state chance constraints are conservatively approximated as a collection of individual chance constraints that are subsequently relaxed via the Cantelli-Chebyshev inequality. Feasibility of the SOCP is guaranteed by softening the approximated chance constraints using the exact penalty function method. Closed-loop stability in a stochastic sense is established by showing that the states satisfy a geometric drift condition outside of a compact set, so that their variance is bounded at all times. The stochastic model predictive control (SMPC) approach is demonstrated on a continuous acetone-butanol-ethanol fermentation process, which is used for the production of high-value-added drop-in biofuels.
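To illustrate the relaxation step mentioned above, consider a single individual chance constraint \mathbb{P}(a^\top x_t \le b) \ge 1-\varepsilon on a linear function of the state; the mean \mu_t and covariance \Sigma_t below are assumed notation for this sketch. Since the Cantelli-Chebyshev inequality gives \mathbb{P}(X-\mu \ge \lambda) \le \sigma^2/(\sigma^2+\lambda^2) for any \lambda>0, the chance constraint is implied by

a^\top \mu_t + \sqrt{\tfrac{1-\varepsilon}{\varepsilon}} \, \sqrt{a^\top \Sigma_t\, a} \;\le\; b,

which is a second-order cone constraint whenever \mu_t and a matrix square root of \Sigma_t depend affinely on the decision variables, and which requires only the first two moments of the disturbance rather than its full distribution.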
We present an algorithm for robust model predictive control with consideration of uncertainty and safety constraints. Our framework considers a nonlinear dynamical system subject to disturbances from an unknown but bounded uncertainty set. By viewing the system as a fixed point of an operator acting over trajectories, we propose a convex condition on the control actions that guarantees safety against the uncertainty set. The proposed condition guarantees that all realizations of the state trajectories satisfy the safety constraints. Our algorithm solves a sequence of convex quadratic constrained optimization problems of size n*N, where n is the number of states and N is the prediction horizon of the model predictive control problem. Compared to existing methods, our approach solves convex problems while guaranteeing that no realization of the uncertainty set violates the safety constraints. Moreover, we consider an implicit time discretization of the system dynamics to increase the prediction horizon and enhance computational accuracy. Numerical simulations for vehicle navigation demonstrate the effectiveness of our approach.
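Schematically, and using assumed notation rather than the article's own, the safety requirement amounts to the robust constraint

g\big(x_t(u,w)\big) \le 0 \quad \text{for all } t=1,\dots,N \text{ and all } w \in \mathcal{W},

where x_t(u,w) is the state trajectory generated by the control sequence u under the disturbance realization w, and \mathcal{W} is the bounded uncertainty set; the article's contribution is a convex sufficient condition on u under which this universally quantified constraint holds.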
For optimal power flow problems with chance constraints, a particularly effective method is based on a fixed point iteration applied to a sequence of deterministic power flow problems. However, a priori, the convergence of such an approach is not necessarily guaranteed. This article analyses the convergence conditions for this fixed point approach and reports numerical experiments, including on large IEEE test networks.
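The following minimal Python sketch shows the generic shape of such a fixed point scheme; the function solve_deterministic_opf, the tolerance, and the toy map in the usage example are placeholders of this sketch and not the article's implementation.

import numpy as np

def fixed_point_iteration(solve_deterministic_opf, x0, tol=1e-6, max_iter=100):
    """Iterate x_{k+1} = F(x_k) until successive iterates are close.

    Here solve_deterministic_opf plays the role of F: it maps the current
    operating point to the solution of a deterministic power flow problem
    whose constraint margins are computed from that operating point.
    Convergence is not guaranteed in general, which is the question the
    article analyses.
    """
    x = np.asarray(x0, dtype=float)
    for k in range(max_iter):
        x_next = solve_deterministic_opf(x)
        if np.linalg.norm(x_next - x) <= tol * (1.0 + np.linalg.norm(x)):
            return x_next, k + 1, True    # converged
        x = x_next
    return x, max_iter, False             # iteration limit reached

# Toy usage with a contraction standing in for the deterministic solve:
if __name__ == "__main__":
    F = lambda x: 0.5 * x + 1.0           # unique fixed point at x = 2
    x_star, iters, ok = fixed_point_iteration(F, x0=np.array([0.0]))
    print(x_star, iters, ok)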
A stochastic model predictive control framework over unreliable Bernoulli communication channels, in the presence of unbounded process noise and under bounded control inputs, is presented for tracking a reference signal. Data losses in the control channel are compensated by a carefully designed transmission protocol, and those in the sensor channel by a dropout compensator. A class of saturated, disturbance feedback policies is proposed for control in the presence of noisy dropout compensation. A reference governor is employed to generate trackable reference trajectories, and stability constraints are employed to ensure mean-square boundedness of the reference tracking error. The overall approach yields a computationally tractable quadratic program, which can be iteratively solved online.
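A sketch of the saturated disturbance feedback class referred to above, with assumed symbols \eta_t, \theta_{t,i}, \varphi, and u_{\max}:

u_t = \eta_t + \sum_{i=0}^{t-1} \theta_{t,i}\,\varphi(w_i), \qquad \varphi(w) = \max\{-\varphi_{\max}, \min\{w, \varphi_{\max}\}\} \ \text{(elementwise)}.

Because \varphi is bounded, hard input bounds such as \|u_t\|_\infty \le u_{\max} translate into linear constraints on the decision variables (\eta_t, \theta_{t,i}) even though the process noise w_i itself is unbounded, which is what keeps the online problem a tractable quadratic program.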
This paper considers a general convex constrained problem setting where the functions are assumed to be neither differentiable nor Lipschitz continuous. Our motivation is to find a simple first-order method for solving a wide range of convex optimization problems with minimal requirements. We study the method of weighted dual averages (Nesterov, 2009) in this setting and prove that it is an optimal method.
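As a rough illustration of the method studied, the sketch below implements a simple Euclidean instance of weighted dual averaging with unit weights; the step rule beta_k = sqrt(k+1), the ball constraint in the toy example, and all names are assumptions of this sketch rather than the paper's general setting.

import numpy as np

def weighted_dual_averaging(subgradient, project, x0, num_steps):
    """Weighted dual averaging with a Euclidean prox term and unit weights.

    Accumulate z_k = g_1 + ... + g_k of subgradients and set
        x_{k+1} = argmin_x <z_k, x> + (beta_k / 2) * ||x - x0||^2
                = project(x0 - z_k / beta_k),   beta_k = sqrt(k + 1).
    Only subgradients are used; no differentiability or Lipschitz
    continuity of the objective is assumed.
    """
    x0 = np.asarray(x0, dtype=float)
    x = x0.copy()
    z = np.zeros_like(x0)
    x_avg = np.zeros_like(x0)
    for k in range(num_steps):
        z += subgradient(x)
        beta = np.sqrt(k + 1.0)
        x = project(x0 - z / beta)
        x_avg += x
    return x_avg / num_steps              # averaged iterate

# Toy usage: minimise the nonsmooth function |x - 3| over the interval [-5, 5].
if __name__ == "__main__":
    subgrad = lambda x: np.sign(x - 3.0)
    proj = lambda x: np.clip(x, -5.0, 5.0)
    print(weighted_dual_averaging(subgrad, proj, x0=np.array([0.0]), num_steps=2000))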