
Input Convex Neural Networks for Optimal Voltage Regulation

Added by Yize Chen
Publication date: 2020
Language: English


The increasing penetration of renewables in distribution networks calls for faster and more advanced voltage regulation strategies. A promising approach is to formulate the problem as an optimization problem, in which the optimal reactive power injections from inverters are calculated to maintain voltages while satisfying power network constraints. However, existing optimization algorithms require the exact topology and line parameters of the underlying distribution system, which are unknown in most cases and difficult to infer. In this paper, we propose a specifically designed neural network that tackles the learning and optimization problems together. In the training stage, the proposed input convex neural network learns the mapping between power injections and voltages. In the voltage regulation stage, the trained network finds the optimal reactive power injections by design. We also provide a practical distributed algorithm that uses the trained neural network. Theoretical bounds on the representation power and learning efficiency of the proposed model are also discussed. Numerical simulations on multiple test systems illustrate the operation of the algorithm.
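
To make the training stage concrete, below is a minimal sketch of an input convex neural network (ICNN) in PyTorch that maps power injections to voltage magnitudes. Convexity of the output in the input is preserved by keeping the hidden-to-hidden weights non-negative and using convex, non-decreasing activations. The layer sizes, placeholder data, and clamp-based weight projection are illustrative assumptions, not the paper's exact architecture or training procedure.

```python
# Minimal ICNN sketch (assumes PyTorch). Convexity of the output in the input x
# requires: (i) non-negative weights on the z-path and (ii) convex,
# non-decreasing activations (here, ReLU). Input passthrough layers may have
# weights of any sign.
import torch
import torch.nn as nn


class ICNN(nn.Module):
    def __init__(self, in_dim, hidden_dim, out_dim):
        super().__init__()
        self.Wx0 = nn.Linear(in_dim, hidden_dim)                   # input -> first hidden
        self.Wz1 = nn.Linear(hidden_dim, hidden_dim, bias=False)   # non-negative path
        self.Wx1 = nn.Linear(in_dim, hidden_dim)                   # input passthrough
        self.Wz2 = nn.Linear(hidden_dim, out_dim, bias=False)      # non-negative path
        self.Wx2 = nn.Linear(in_dim, out_dim)                      # input passthrough

    def forward(self, x):
        z1 = torch.relu(self.Wx0(x))
        z2 = torch.relu(self.Wz1(z1) + self.Wx1(x))
        return self.Wz2(z2) + self.Wx2(x)

    def project_weights(self):
        # Clamp the z-path weights so the learned input-output map stays convex.
        with torch.no_grad():
            self.Wz1.weight.clamp_(min=0.0)
            self.Wz2.weight.clamp_(min=0.0)


# Training-loop sketch: fit voltages from power injections, projecting the
# constrained weights after every gradient step. Data here is random placeholder.
model = ICNN(in_dim=8, hidden_dim=64, out_dim=8)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
p = torch.randn(256, 8)   # placeholder power-injection samples
v = torch.randn(256, 8)   # placeholder voltage measurements
for _ in range(100):
    loss = nn.functional.mse_loss(model(p), v)
    opt.zero_grad()
    loss.backward()
    opt.step()
    model.project_weights()
```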



Related research

Control of complex systems involves both system identification and controller design. Deep neural networks have proven successful in many identification tasks; however, from a model-based control perspective, these networks are difficult to work with because they are typically nonlinear and nonconvex. Therefore, many systems are still identified and controlled with simple linear models despite their poor representation capability. In this paper, we bridge the gap between model accuracy and control tractability faced by neural networks by explicitly constructing networks that are convex with respect to their inputs. We show that these input convex networks can be trained to obtain accurate models of complex physical systems. In particular, we design input convex recurrent neural networks to capture the temporal behavior of dynamical systems. Optimal controllers can then be obtained by solving a convex model predictive control problem. Experimental results demonstrate the potential of the proposed input convex neural network based approach in a variety of control applications. In particular, we show that on the MuJoCo locomotion tasks we achieve over 10% higher performance using 5x less computation time than a state-of-the-art model-based reinforcement learning method, and in a building HVAC control example our method achieves up to 20% energy reduction compared with classic linear models.
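
As a rough illustration of the "optimize over a trained convex model" step, the sketch below minimizes a model's scalar output over the decision inputs by projected gradient descent; when the model is convex in its inputs (e.g., an ICNN trained to predict a cost such as voltage deviation or energy use), this converges to the global optimum of the surrogate problem. The quadratic stand-in model, box bounds, step size, and iteration count are illustrative assumptions, not the papers' actual controllers.

```python
# Sketch of the control/regulation stage: given a model whose output is convex
# in its input u, find u minimizing the predicted cost under box constraints.
import torch
import torch.nn as nn


def optimize_input(model, u_init, lb, ub, steps=500, lr=0.05):
    """Projected gradient descent on the model output with respect to u."""
    u = u_init.clone().requires_grad_(True)
    opt = torch.optim.SGD([u], lr=lr)
    for _ in range(steps):
        cost = model(u).sum()          # convex in u by construction of the model
        opt.zero_grad()
        cost.backward()
        opt.step()
        with torch.no_grad():
            u.clamp_(min=lb, max=ub)   # project back onto the box constraints
    return u.detach()


class QuadraticDemo(nn.Module):
    """Trivially convex stand-in cost ||u - c||^2; replace with a trained ICNN."""
    def __init__(self, c):
        super().__init__()
        self.c = c

    def forward(self, u):
        return ((u - self.c) ** 2).sum(dim=-1, keepdim=True)


demo_model = QuadraticDemo(torch.tensor([0.3, -0.2, 0.8, 0.0]))
u_star = optimize_input(demo_model, u_init=torch.zeros(4), lb=-1.0, ub=1.0)
print(u_star)  # approaches the unconstrained minimizer c, clipped to the box
```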
Hybrid AC/DC networks are a key technology for future electrical power systems, due to the increasing number of converter-based loads and distributed energy resources. In this paper, we consider the design of control schemes for hybrid AC/DC networks, focusing especially on the control of the interlinking converters (ILCs). We present two control schemes: first, a decentralized primary controller, and second, a distributed secondary controller. In the primary case, the stability of the controlled system is proven for a general hybrid AC/DC network that may include asynchronous AC subsystems. Furthermore, it is demonstrated that power sharing across the AC/DC network is significantly improved compared with previously proposed dual droop control. The proposed secondary control scheme guarantees convergence of the AC system frequencies and the average DC voltage of each DC subsystem to their respective nominal values. An optimal power allocation is also achieved at steady state. The applicability and effectiveness of the proposed algorithms are verified by simulation of a test hybrid AC/DC network in MATLAB / Simscape Power Systems.
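
For context, primary-level coupling at an interlinking converter is commonly expressed through droop relations between AC frequency, DC voltage, and transferred power. A generic textbook form (not the specific controller proposed in the paper) is

\[
\omega = \omega^{\ast} - m_p\,\big(P_{\mathrm{ac}} - P_{\mathrm{ac}}^{\ast}\big),
\qquad
V_{\mathrm{dc}} = V_{\mathrm{dc}}^{\ast} - n\,\big(P_{\mathrm{dc}} - P_{\mathrm{dc}}^{\ast}\big),
\]

and in dual droop control the ILC transfer power is driven by the mismatch between the normalized deviations,

\[
P_{\mathrm{ILC}} \;\propto\; \frac{\omega - \omega^{\ast}}{\Delta\omega^{\max}} \;-\; \frac{V_{\mathrm{dc}} - V_{\mathrm{dc}}^{\ast}}{\Delta V_{\mathrm{dc}}^{\max}},
\]

so that loading is shared between the AC and DC sides.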
Gradient flows are a powerful tool for optimizing functionals in general metric spaces, including the space of probabilities endowed with the Wasserstein metric. A typical approach to solving this optimization problem relies on its connection to the dynamic formulation of optimal transport and the celebrated Jordan-Kinderlehrer-Otto (JKO) scheme. However, this formulation involves optimization over convex functions, which is challenging, especially in high dimensions. In this work, we propose an approach that relies on the recently introduced input-convex neural networks (ICNN) to parameterize the space of convex functions in order to approximate the JKO scheme, as well as in designing functionals over measures that enjoy convergence guarantees. We derive a computationally efficient implementation of this JKO-ICNN framework and use various experiments to demonstrate its feasibility and validity in approximating solutions of low-dimensional partial differential equations with known solutions. We also explore the use of our JKO-ICNN approach in high dimensions with an experiment in controlled generation for molecular discovery.
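
The JKO scheme referenced above discretizes the gradient flow of a functional $F$ over probability measures in time. One standard statement, using the squared 2-Wasserstein distance $W_2$ and step size $\tau$, is

\[
\rho_{k+1} \;=\; \operatorname*{arg\,min}_{\rho \in \mathcal{P}_2(\mathbb{R}^d)} \;\; \frac{1}{2\tau}\, W_2^2(\rho, \rho_k) \;+\; F(\rho),
\]

and the ICNN parameterization enters by representing the convex potential whose gradient pushes $\rho_k$ forward to the candidate measure $\rho$.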
In this paper, we consider a discrete-time stochastic control problem with uncertain initial and target states. We first discuss the connection between optimal transport and stochastic control problems of this form. Next, we formulate a linear-quadratic regulator problem in which the initial and terminal states are distributed according to specified probability densities. A closed-form solution for the optimal transport map in the case of linear time-varying systems is derived, along with an algorithm for computing the optimal map. Two numerical examples pertaining to swarm deployment demonstrate the practical applicability of the model and the performance of the numerical method.
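
A generic statement of this problem class (with illustrative notation, not necessarily the paper's) is a discrete-time LQ problem whose boundary conditions are distributions rather than points:

\[
\min_{u_0,\dots,u_{N-1}} \;\mathbb{E}\!\left[\sum_{k=0}^{N-1} \big(x_k^{\top} Q_k x_k + u_k^{\top} R_k u_k\big)\right]
\quad \text{s.t.} \quad x_{k+1} = A_k x_k + B_k u_k, \;\; x_0 \sim \rho_0, \;\; x_N \sim \rho_N .
\]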
In this paper we propose a new computational method for designing optimal regulators for high-dimensional nonlinear systems. The proposed approach leverages physics-informed machine learning to solve high-dimensional Hamilton-Jacobi-Bellman equations arising in optimal feedback control. Concretely, we augment linear quadratic regulators with neural networks to handle nonlinearities. We train the augmented models on data generated without discretizing the state space, enabling application to high-dimensional problems. We use the proposed method to design a candidate optimal regulator for an unstable Burgers equation, and through this example, demonstrate improved robustness and accuracy compared to existing neural network formulations.
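
For reference, the Hamilton-Jacobi-Bellman (HJB) equation mentioned above characterizes the optimal value function $V$. For an infinite-horizon control-affine problem with dynamics $\dot{x} = f(x) + g(x)u$ and running cost $q(x) + u^{\top} R u$ (generic notation, not necessarily the paper's setup), it reads

\[
\min_{u}\;\Big\{ \nabla V(x)^{\top}\big(f(x) + g(x)u\big) + q(x) + u^{\top} R u \Big\} = 0,
\qquad
u^{*}(x) = -\tfrac{1}{2}\, R^{-1} g(x)^{\top} \nabla V(x),
\]

which is the high-dimensional PDE the physics-informed network is trained to approximately satisfy.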