
Robustness and Consistency in Linear Quadratic Control with Predictions

Posted by Tongxin Li
Publication date: 2021
Paper language: English





We study the problem of learning-augmented predictive linear quadratic control. Our goal is to design a controller that balances consistency, which measures the competitive ratio when predictions are accurate, and robustness, which bounds the competitive ratio when predictions are inaccurate. We propose a novel $\lambda$-confident controller and prove that it maintains a competitive ratio upper bound of $1+\min\{O(\lambda^2\varepsilon)+O(1-\lambda)^2,\, O(1)+O(\lambda^2)\}$, where $\lambda\in[0,1]$ is a trust parameter set based on the confidence in the predictions, and $\varepsilon$ is the prediction error. Further, we design a self-tuning policy that adaptively learns the trust parameter $\lambda$ with a regret that depends on $\varepsilon$ and the variation of perturbations and predictions.
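For intuition, here is a minimal Python sketch of the trust-parameter idea described in the abstract: blend a prediction-free LQR action with a feedforward term that exploits predicted disturbances, weighted by the trust parameter. The dynamics, the feedforward form, and all function names are illustrative assumptions, not the paper's exact construction.

```python
# Hedged sketch of a lambda-confident controller for LQR with predicted
# disturbances. Illustrates the trust-parameter idea only.
import numpy as np
from scipy.linalg import solve_discrete_are

def lqr_gain(A, B, Q, R):
    """Standard infinite-horizon discrete LQR gain K with u = -K x."""
    P = solve_discrete_are(A, B, Q, R)
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    return K, P

def lambda_confident_action(x, w_hat, A, B, Q, R, lam):
    """Blend a robust (prediction-free) action with a prediction-aware
    feedforward term, weighted by lam in [0, 1]. The feedforward form
    below is a plausible stand-in, not the paper's exact controller."""
    K, P = lqr_gain(A, B, Q, R)
    u_robust = -K @ x                 # prediction-free LQR action
    Acl = A - B @ K                   # closed-loop dynamics
    H = np.linalg.solve(R + B.T @ P @ B, B.T)
    ff = np.zeros(B.shape[1])
    M = P.copy()
    for w in w_hat:                   # w_hat: list of predicted disturbances
        ff += H @ (M @ w)             # discount predictions through Acl
        M = Acl.T @ M
    return u_robust - lam * ff        # lam = 0: robust; lam = 1: trust predictions
```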




Read also

Controlling network systems has become a problem of paramount importance. Optimally controlling a network system with linear dynamics and minimizing a quadratic cost is a particular case of the well-studied linear-quadratic problem. When the specific topology of the network system is ignored, the optimal controller is readily available. However, this results in a centralized controller, facing limitations in terms of implementation and scalability. Finding the optimal distributed controller, on the other hand, is intractable in the general case. In this paper, we propose the use of graph neural networks (GNNs) to parametrize and design a distributed controller. GNNs exhibit many desirable properties, such as being naturally distributed and scalable. We cast the distributed linear-quadratic problem as a self-supervised learning problem, which is then used to train the GNN-based controllers. We also obtain sufficient conditions for the resulting closed-loop system to be input-to-state stable, and derive an upper bound on the trajectory deviation when the system is not accurately known. We run extensive simulations to study the performance of GNN-based distributed controllers and show that they are computationally efficient and scalable.
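As a rough illustration of the self-supervised approach described above, the sketch below trains a simple graph-filter controller (a minimal GNN) by differentiating through an LQR rollout. The network, dynamics, and hyperparameters are placeholders, not the paper's architecture or experimental setup.

```python
# Hedged sketch: train a graph-filter (simple GNN) distributed controller
# by directly minimizing the LQR rollout cost. All matrices are placeholders.
import torch

n, K_hops, T = 20, 3, 30
S = (torch.rand(n, n) < 0.2).float()
S = ((S + S.T) > 0).float()                       # symmetric support
S.fill_diagonal_(0.0)
S = S / (S.sum(1, keepdim=True) + 1e-6)           # normalized adjacency
A = 0.95 * torch.eye(n) + 0.1 * S                 # networked linear dynamics
B = torch.eye(n)
Q, R = torch.eye(n), 0.1 * torch.eye(n)

h = torch.nn.Parameter(0.01 * torch.randn(K_hops))  # learnable filter taps

def controller(x):
    """Distributed control: each node mixes information from <= K_hops neighbors."""
    u, z = torch.zeros_like(x), x
    for k in range(K_hops):
        u = u + h[k] * z
        z = S @ z                                  # one more hop of local exchange
    return torch.tanh(u)

opt = torch.optim.Adam([h], lr=1e-2)
for epoch in range(200):                           # self-supervised training on the LQR cost
    x = torch.randn(n, 8)                          # batch of random initial states
    cost = 0.0
    for _ in range(T):
        u = controller(x)
        cost = cost + (x * (Q @ x)).sum() + (u * (R @ u)).sum()
        x = A @ x + B @ u
    opt.zero_grad(); cost.backward(); opt.step()
```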
We propose a new risk-constrained reformulation of the standard Linear Quadratic Regulator (LQR) problem. Our framework is motivated by the fact that the classical (risk-neutral) LQR controller, although optimal in expectation, might be ineffective under relatively infrequent, yet statistically significant (risky) events. To effectively trade between average and extreme event performance, we introduce a new risk constraint, which explicitly restricts the total expected predictive variance of the state penalty by a user-prescribed level. We show that, under rather minimal conditions on the process noise (i.e., finite fourth-order moments), the optimal risk-aware controller can be evaluated explicitly and in closed form. In fact, it is affine relative to the state, and is always internally stable regardless of parameter tuning. Our new risk-aware controller: i) pushes the state away from directions where the noise exhibits heavy tails, by exploiting the third-order moment (skewness) of the noise; ii) inflates the state penalty in riskier directions, where both the noise covariance and the state penalty are simultaneously large. The properties of the proposed risk-aware LQR framework are also illustrated via indicative numerical examples.
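One schematic way to write the constrained problem described above is sketched below; the exact conditioning and weighting used in the paper may differ, so this should be read as an illustrative formulation rather than the paper's.

```latex
% Schematic risk-constrained LQR (notation illustrative, not the paper's exact constraint):
\begin{aligned}
\min_{\pi}\;\; & \mathbb{E}\Big[\textstyle\sum_{t=0}^{T-1} x_t^\top Q x_t + u_t^\top R u_t\Big] \\
\text{s.t.}\;\; & \mathbb{E}\Big[\textstyle\sum_{t=0}^{T-1}
 \operatorname{Var}\!\big(x_{t+1}^\top Q x_{t+1} \,\big|\, x_t, u_t\big)\Big] \le \bar{c},
 \qquad x_{t+1} = A x_t + B u_t + w_t .
\end{aligned}
```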
The linear-quadratic controller is one of the fundamental problems in control theory. The optimal solution is a linear controller that requires access to the state of the entire system at any given time. When considering a network system, this renders the optimal controller a centralized one. The interconnected nature of a network system often demands a distributed controller, where different components of the system are controlled based only on local information. Unlike the classical centralized case, obtaining the optimal distributed controller is usually an intractable problem. Thus, we adopt a graph neural network (GNN) as a parametrization of distributed controllers. GNNs are naturally local and have distributed architectures, making them well suited for learning nonlinear distributed controllers. By casting the linear-quadratic problem as a self-supervised learning problem, we are able to find the best GNN-based distributed controller. We also derive sufficient conditions for the resulting closed-loop system to be stable. We run extensive simulations to study the performance of GNN-based distributed controllers and showcase that they are a computationally efficient parametrization with scalability and transferability capabilities.
In this paper, we propose a new control barrier function based quadratic program for general nonlinear control-affine systems, which, without any assumptions other than those taken in the original program, simultaneously guarantees forward invariance of the safety set, complete elimination of undesired equilibrium points inside it, and local asymptotic stability of the origin. To better appreciate this result, we first characterize the equilibrium points of the closed-loop system with the original quadratic program formulation. We then provide analytical results on how a certain parameter in the original quadratic program should be chosen to remove the undesired equilibrium points or to confine them in a small neighborhood of the origin. The new formulation then follows from these analytical results. Numerical examples are given alongside the theoretical discussions.
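For reference, the sketch below shows a generic relaxed CLF-CBF quadratic program of the kind this line of work builds on; here the relaxation weight p plays the role of the parameter discussed above. The functions, gains, and cvxpy formulation are illustrative assumptions, not the paper's exact program.

```python
# Hedged sketch of a standard CLF-CBF quadratic program for a control-affine
# system x' = f(x) + g(x) u. All functions and gains are placeholders.
import numpy as np
import cvxpy as cp

def clf_cbf_qp(u_nom, h, Lf_h, Lg_h, V, Lf_V, Lg_V,
               alpha=1.0, gamma=1.0, p=10.0):
    """Solve: min ||u - u_nom||^2 + p*delta^2
       s.t.  Lf_h + Lg_h u + alpha*h >= 0       (safety / CBF)
             Lf_V + Lg_V u + gamma*V <= delta   (stability / relaxed CLF)."""
    m = u_nom.shape[0]
    u = cp.Variable(m)
    delta = cp.Variable(nonneg=True)
    cost = cp.sum_squares(u - u_nom) + p * cp.square(delta)
    cons = [Lf_h + Lg_h @ u + alpha * h >= 0,
            Lf_V + Lg_V @ u + gamma * V <= delta]
    cp.Problem(cp.Minimize(cost), cons).solve()
    return u.value

# Example call with made-up scalars for a single-input system:
u_safe = clf_cbf_qp(u_nom=np.array([0.5]), h=0.2, Lf_h=-0.1, Lg_h=np.array([1.0]),
                    V=1.0, Lf_V=0.3, Lg_V=np.array([-1.0]))
```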
Ugo Rosolia, Xiaojing Zhang, 2019
A robust Learning Model Predictive Controller (LMPC) for uncertain systems performing iterative tasks is presented. At each iteration of the control task the closed-loop state, input and cost are stored and used in the controller design. This paper first illustrates how to construct robust invariant sets and safe control policies exploiting historical data. Then, we propose an iterative LMPC design procedure, where data generated by a robust controller at iteration $j$ are used to design a robust LMPC at the next $j+1$ iteration. We show that this procedure allows us to iteratively enlarge the domain of the control policy and guarantees recursive constraint satisfaction, input-to-state stability and performance bounds for the certainty equivalent closed-loop system. The use of an adaptive prediction horizon is the key element of the proposed design. The effectiveness of the proposed control scheme is illustrated on a linear system subject to bounded additive disturbance.
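A minimal sketch of the iteration-to-iteration bookkeeping described above, storing each iteration's closed-loop states and realized costs and reusing them as a sampled safe set with a terminal cost, might look as follows; the class and the nearest-stored-state terminal cost are illustrative placeholders, not the paper's construction.

```python
# Hedged sketch: store closed-loop data per iteration and reuse it as a
# sampled safe set with an associated cost-to-go. Names are illustrative.
import numpy as np

class SampledSafeSet:
    def __init__(self):
        self.states, self.costs_to_go = [], []

    def add_iteration(self, xs, stage_costs):
        """xs: visited states (one per stage); stage_costs: realized stage costs."""
        ctg = np.cumsum(stage_costs[::-1])[::-1]     # cost-to-go along the trajectory
        self.states.extend(xs)
        self.costs_to_go.extend(ctg.tolist())

    def terminal_cost(self, x, tol=1e-6):
        """Minimum stored cost-to-go among stored states near x (a crude stand-in
        for the terminal set / terminal cost used in LMPC-style schemes)."""
        best = np.inf
        for s, q in zip(self.states, self.costs_to_go):
            if np.linalg.norm(np.asarray(s) - np.asarray(x)) <= tol:
                best = min(best, q)
        return best
```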