
Distributed Linear-Quadratic Control with Graph Neural Networks

Added by Fernando Gama
Publication date: 2021
Language: English





Controlling network systems has become a problem of paramount importance. Optimally controlling a network system with linear dynamics and minimizing a quadratic cost is a particular case of the well-studied linear-quadratic problem. When the specific topology of the network system is ignored, the optimal controller is readily available. However, this results in a centralized controller, facing limitations in terms of implementation and scalability. Finding the optimal distributed controller, on the other hand, is intractable in the general case. In this paper, we propose the use of graph neural networks (GNNs) to parametrize and design a distributed controller. GNNs exhibit many desirable properties, such as being naturally distributed and scalable. We cast the distributed linear-quadratic problem as a self-supervised learning problem, which is then used to train the GNN-based controllers. We also obtain sufficient conditions for the resulting closed-loop system to be input-state stable, and derive an upper bound on the trajectory deviation when the system is not accurately known. We run extensive simulations to study the performance of GNN-based distributed controllers and show that they are computationally efficient and scalable.
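GNN architectures like the ones described above are built from graph filters: polynomials in the network's support matrix, so that a filter with K taps uses only information exchanged with K-hop neighbors. The following is a minimal sketch of that building block, not the paper's exact architecture; the support matrix `S`, the tap values, and all names are illustrative.

```python
import numpy as np

def graph_filter_controller(x, S, taps):
    """Distributed control action u = sum_k h_k S^k x.

    Each multiplication by S is one round of local exchanges with
    neighbors, so the controller only uses multi-hop local information.
    """
    u = np.zeros_like(x)
    Skx = x.copy()          # S^0 x = x
    for h in taps:
        u += h * Skx        # accumulate the k-th tap's contribution
        Skx = S @ Skx       # one more hop of neighbor exchanges
    return u

# Toy 4-node path graph: adjacency as the support matrix
S = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
x = np.array([1.0, 0.0, 0.0, 0.0])   # only node 0 has nonzero state
u = graph_filter_controller(x, S, taps=[0.5, -0.2, 0.1])
```

With three taps, the action at each node depends only on states within two hops, which is what makes the parametrization naturally distributed; a GNN interleaves such filters with pointwise nonlinearities.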



Related research

The linear-quadratic controller is one of the fundamental problems in control theory. The optimal solution is a linear controller that requires access to the state of the entire system at any given time. When considering a network system, this renders the optimal controller a centralized one. The interconnected nature of a network system often demands a distributed controller, where different components of the system are controlled based only on local information. Unlike the classical centralized case, obtaining the optimal distributed controller is usually an intractable problem. Thus, we adopt a graph neural network (GNN) as a parametrization of distributed controllers. GNNs are naturally local and have distributed architectures, making them well suited for learning nonlinear distributed controllers. By casting the linear-quadratic problem as a self-supervised learning problem, we are able to find the best GNN-based distributed controller. We also derive sufficient conditions for the resulting closed-loop system to be stable. We run extensive simulations to study the performance of GNN-based distributed controllers and showcase that they are a computationally efficient parametrization with scalability and transferability capabilities.
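The self-supervised formulation mentioned above amounts to minimizing the accumulated quadratic cost along closed-loop rollouts, with no expert labels. The sketch below only evaluates that cost for a fixed linear policy on a scalar toy system; training a GNN controller would minimize this quantity over the GNN parameters. All names and the toy system are illustrative.

```python
import numpy as np

def lq_rollout_cost(A, B, policy, x0, Q, R, T):
    """Accumulate the quadratic cost x'Qx + u'Ru along a closed-loop rollout."""
    x, cost = x0.astype(float), 0.0
    for _ in range(T):
        u = policy(x)
        cost += x @ Q @ x + u @ R @ u   # stage cost
        x = A @ x + B @ u               # linear dynamics x_{t+1} = Ax_t + Bu_t
    return cost

# Scalar toy system with a fixed linear policy u = -0.5 x
A = np.array([[1.0]]); B = np.array([[1.0]])
Q = np.eye(1); R = np.eye(1)
cost = lq_rollout_cost(A, B, lambda x: -0.5 * x, np.array([1.0]), Q, R, T=2)
```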
Dynamical systems consisting of a set of autonomous agents face the challenge of having to accomplish a global task, relying only on local information. While centralized controllers are readily available, they face limitations in terms of scalability and implementation, as they do not respect the distributed information structure imposed by the network system of agents. Given the difficulties in finding optimal decentralized controllers, we propose a novel framework using graph neural networks (GNNs) to learn these controllers. GNNs are well-suited for the task since they are naturally distributed architectures and exhibit good scalability and transferability properties. The problems of flocking and multi-agent path planning are explored to illustrate the potential of GNNs in learning decentralized controllers.
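For intuition about what a decentralized flocking controller must accomplish, here is the classical hand-designed local rule (velocity consensus), not the learned GNN policy from the abstract: each agent updates its velocity using only its neighbors' velocities. The parameter names are illustrative.

```python
import numpy as np

def consensus_step(V, adjacency, alpha=0.1):
    """V: (n, d) agent velocities. Each agent nudges its velocity
    toward the mean of its neighbors' velocities (local information only)."""
    n = len(V)
    V_new = V.copy()
    for i in range(n):
        nbrs = [j for j in range(n) if adjacency[i][j]]
        if nbrs:
            V_new[i] += alpha * (np.mean([V[j] for j in nbrs], axis=0) - V[i])
    return V_new

# Two connected agents with opposite velocities converge toward agreement
A = np.array([[0, 1], [1, 0]])
V = np.array([[1.0, 0.0], [0.0, 1.0]])
V1 = consensus_step(V, A, alpha=0.5)
```

A learned GNN controller replaces this fixed averaging rule with a trained nonlinear function of the same local neighborhood information.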
Developing effective strategies to rapidly support grid frequency while minimizing loss in case of severe contingencies is an important requirement in power systems. While distributed responsive load demands are commonly adopted for frequency regulation, it is difficult to achieve both rapid response and global accuracy in a practical and cost-effective manner. In this paper, the cyber-physical design of an Internet-of-Things (IoT) enabled system, called Grid Sense, is presented. Grid Sense utilizes a large number of distributed appliances for frequency emergency support. It features a local power loss $\Delta P$ estimation approach for frequency emergency control based on coordinated edge intelligence. The specifically designed smart outlets of Grid Sense detect the frequency disturbance event locally using the parameters sent from the control center to estimate active power loss in the system and to make rapid and accurate switching decisions soon after a severe contingency. Based on a modified IEEE 24-bus system, numerical simulations and hardware experiments are conducted to demonstrate the frequency support performance of Grid Sense in the aspects of accuracy and speed. It is shown that Grid Sense equipped with its local $\Delta P$-estimation frequency control approach can accurately and rapidly prevent the drop of frequency after a major power loss.
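As background on this kind of local estimation, a textbook swing-equation relation links the initial rate of change of frequency (RoCoF) to the system power imbalance. The sketch below illustrates that standard relation only; it is not Grid Sense's specific method, and the parameter names (`H_s`, `f0_hz`) are assumptions.

```python
def estimate_delta_p(rocof_hz_per_s, H_s, f0_hz=60.0):
    """Per-unit power imbalance from the swing equation:
    delta_p ~= 2 * H * (df/dt) / f0,
    where H is the aggregate inertia constant in seconds."""
    return 2.0 * H_s * rocof_hz_per_s / f0_hz

# Frequency falling at 0.3 Hz/s with H = 5 s implies a ~5% per-unit deficit
dp = estimate_delta_p(-0.3, H_s=5.0)
```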
We study the problem of learning-augmented predictive linear quadratic control. Our goal is to design a controller that balances consistency, which measures the competitive ratio when predictions are accurate, and robustness, which bounds the competitive ratio when predictions are inaccurate. We propose a novel $\lambda$-confident controller and prove that it maintains a competitive ratio upper bound of $1+\min\{O(\lambda^2\varepsilon)+O((1-\lambda)^2),\, O(1)+O(\lambda^2)\}$, where $\lambda\in[0,1]$ is a trust parameter set based on the confidence in the predictions, and $\varepsilon$ is the prediction error. Further, we design a self-tuning policy that adaptively learns the trust parameter $\lambda$ with a regret that depends on $\varepsilon$ and the variation of perturbations and predictions.
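One plausible reading of how a trust parameter enters such a controller is a convex combination of a robust baseline action and a prediction-based action, with $\lambda = 1$ fully trusting the predictions. This is a hedged sketch, not the paper's exact construction; all names are illustrative.

```python
def lambda_confident_action(u_robust, u_predict, lam):
    """Interpolate between a robust action and a prediction-based action.

    lam = 0 ignores predictions entirely (pure robustness);
    lam = 1 trusts them fully (pure consistency).
    """
    assert 0.0 <= lam <= 1.0
    return [(1.0 - lam) * ur + lam * up for ur, up in zip(u_robust, u_predict)]

# With lam = 0.25, the action stays 75% of the way toward the robust choice
u = lambda_confident_action([2.0, 0.0], [0.0, 1.0], lam=0.25)
```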
We propose a new risk-constrained reformulation of the standard Linear Quadratic Regulator (LQR) problem. Our framework is motivated by the fact that the classical (risk-neutral) LQR controller, although optimal in expectation, might be ineffective under relatively infrequent, yet statistically significant (risky) events. To effectively trade off average and extreme event performance, we introduce a new risk constraint, which explicitly restricts the total expected predictive variance of the state penalty by a user-prescribed level. We show that, under rather minimal conditions on the process noise (i.e., finite fourth-order moments), the optimal risk-aware controller can be evaluated explicitly and in closed form. In fact, it is affine relative to the state, and is always internally stable regardless of parameter tuning. Our new risk-aware controller: i) pushes the state away from directions where the noise exhibits heavy tails, by exploiting the third-order moment (skewness) of the noise; ii) inflates the state penalty in riskier directions, where both the noise covariance and the state penalty are simultaneously large. The properties of the proposed risk-aware LQR framework are also illustrated via indicative numerical examples.
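For contrast with the risk-aware affine controller above, the classical risk-neutral LQR gain can be computed by iterating the discrete-time Riccati recursion to a fixed point. A minimal sketch (in practice, `scipy.linalg.solve_discrete_are` is the standard tool); the toy scalar system is illustrative.

```python
import numpy as np

def lqr_gain(A, B, Q, R, iters=1000):
    """Risk-neutral infinite-horizon LQR gain K, with optimal control u = -K x.

    Iterates P <- Q + A'P(A - BK), K = (R + B'PB)^{-1} B'PA
    until P reaches the Riccati fixed point.
    """
    P = Q.copy()
    for _ in range(iters):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K

# Scalar system x_{t+1} = x_t + u_t with unit costs:
# the fixed point is P = (1 + sqrt(5))/2, giving K = P/(1+P) ~ 0.618
A = np.array([[1.0]]); B = np.array([[1.0]])
K = lqr_gain(A, B, np.eye(1), np.eye(1))
```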
