
A real-time distributed post-disaster restoration planning strategy for distribution networks

 Added by Jianfeng Fu
 Publication date 2021
Language: English





After disasters, distribution networks have to be restored by repair, reconfiguration, and power dispatch. During the restoration process, changes can occur in real time that deviate from the situations considered in pre-designed planning strategies, which may render the pre-designed plan far from optimal or even unimplementable. This paper proposes a centralized-distributed bi-level optimization method to solve the real-time restoration planning problem. The first level determines the integer variables related to crew routing and switch status using a genetic algorithm (GA), while the second level determines the dispatch of active/reactive power using distributed model predictive control (DMPC). A novel Aitken-DMPC solver is proposed to accelerate convergence and make the method suitable for real-time decision making. A case study based on the IEEE 123-bus system is considered, and the acceleration performance of the proposed Aitken-DMPC solver is evaluated and compared with the standard DMPC method.
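The Aitken-DMPC solver applies Aitken extrapolation to speed up the iterative coordination of DMPC. As a rough illustration of the underlying principle only, the sketch below shows Aitken's delta-squared (Steffensen-style) acceleration of a generic scalar fixed-point iteration x_{k+1} = g(x_k); the paper applies the same idea to DMPC coordination iterates, not to this toy function, and the function, tolerances, and iteration cap here are assumptions for the example.

```python
def aitken_accelerate(g, x0, tol=1e-10, max_iter=100):
    """Aitken delta-squared acceleration of the fixed-point iteration x = g(x).

    Two plain iterates are computed, then extrapolated toward the fixed
    point, which typically converges much faster than iterating g alone.
    """
    x = x0
    for _ in range(max_iter):
        x1 = g(x)
        x2 = g(x1)
        denom = x2 - 2.0 * x1 + x
        if abs(denom) < 1e-15:      # sequence already (numerically) converged
            return x2
        x_acc = x - (x1 - x) ** 2 / denom   # Aitken extrapolation step
        if abs(x_acc - x) < tol:
            return x_acc
        x = x_acc
    return x
```

For example, applied to g(x) = cos(x) from x0 = 1.0 this reaches the fixed point near 0.739085 in a handful of extrapolated steps, versus dozens of plain iterations.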



Related research

Real-time vehicle dispatching in traditional car-sharing systems is already a computationally challenging scheduling problem. Electrification only exacerbates the computational difficulties as charge-level constraints come into play. To overcome this complexity, we employ an online minimum drift plus penalty (MDPP) approach for SAEV systems that (i) does not require a priori knowledge of customer arrival rates to the different parts of the system (i.e., it is practical from a real-world deployment perspective), (ii) ensures the stability of customer waiting times, (iii) ensures that the deviation of dispatch costs from a desirable dispatch cost can be controlled, and (iv) has a computational time-complexity that allows for real-time implementation. Using an agent-based simulator developed for SAEV systems, we test the MDPP approach under two scenarios with real-world calibrated demand and charger distributions: 1) a low-demand scenario with long trips, and 2) a high-demand scenario with short trips. The comparisons with other algorithms under both scenarios show that the proposed online MDPP outperforms all other algorithms in terms of both reduced customer waiting times and vehicle dispatching costs.
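The drift-plus-penalty idea behind MDPP can be illustrated on a single toy queue: each time slot the controller picks a service rate b minimizing V·cost(b) − Q·b, trading dispatch cost (penalty weight V) against queue backlog (the drift term), which is what keeps waiting times stable without knowing arrival rates. The queue model, quadratic cost, and parameters below are invented for the sketch and are far simpler than an SAEV fleet.

```python
import random

def drift_plus_penalty_sim(T=5000, V=10.0, seed=0):
    """Toy Lyapunov drift-plus-penalty controller for one queue.

    Each slot: random arrivals a, then a greedy service choice b that
    minimizes V*cost(b) - Q*b. Larger V favors low cost; the Q-weighted
    term pushes service up whenever the backlog grows, stabilizing Q.
    """
    rng = random.Random(seed)
    Q = 0.0            # queue backlog (e.g., waiting customers)
    total_cost = 0.0
    for _ in range(T):
        a = rng.choice([0, 1, 2])                          # arrivals, mean 1
        b = min(range(4), key=lambda s: V * s**2 - Q * s)  # MDPP decision
        total_cost += b**2                                 # dispatch cost
        Q = max(Q + a - b, 0.0)
    return Q, total_cost / T
```

With these numbers the backlog stays bounded (roughly proportional to V) while the average per-slot cost stays low, matching the stability/cost trade-off described in the abstract.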
Hang Shuai, Member, IEEE, 2021
The uncertainties from distributed energy resources (DERs) bring significant challenges to the real-time operation of microgrids. In addition, due to the nonlinear constraints in the AC power flow equations and the nonlinearity of the battery storage model, the optimization of the microgrid is a mixed-integer nonlinear programming (MINLP) problem. It is challenging to solve this kind of stochastic nonlinear optimization problem. To address the challenge, this paper proposes a deep reinforcement learning (DRL) based optimization strategy for the real-time operation of the microgrid. Specifically, we construct a detailed operation model for the microgrid and formulate the real-time optimization problem as a Markov Decision Process (MDP). Then, a double deep Q network (DDQN) based architecture is designed to solve the MINLP problem. The proposed approach can learn a near-optimal strategy from historical data alone. The effectiveness of the proposed algorithm is validated by simulations on a 10-bus microgrid system and a modified IEEE 69-bus microgrid system. The numerical simulation results demonstrate that the proposed approach outperforms several existing methods.
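The defining step of double DQN is how the learning target is formed: the online network selects the next action, while the target network evaluates it, which reduces the overestimation bias of vanilla DQN. A minimal sketch of that target computation, assuming batched Q-value arrays (the array shapes and names are illustrative, not from the paper):

```python
import numpy as np

def ddqn_targets(q_online_next, q_target_next, rewards, gamma=0.99, done=None):
    """Double-DQN bootstrap targets for a batch of transitions.

    q_online_next, q_target_next: (batch, n_actions) Q-values at s'.
    The online net picks argmax actions; the target net evaluates them.
    Terminal transitions (done=True) get no bootstrap term.
    """
    best_actions = np.argmax(q_online_next, axis=1)          # selection
    evaluated = q_target_next[np.arange(len(rewards)), best_actions]  # evaluation
    if done is None:
        done = np.zeros(len(rewards), dtype=bool)
    return rewards + gamma * evaluated * (~done)
```

The regression loss for the online network is then the squared error between its Q(s, a) predictions and these targets.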
The topic of this paper is the design of a fully distributed and real-time capable control scheme for the automation of road intersections. State-of-the-art Vehicle-to-Vehicle (V2V) communication technology is adopted. Vehicles distributively negotiate crossing priorities via a Consensus-Based Auction Algorithm (CBAA-M). Then, each agent solves a nonlinear Model Predictive Control (MPC) problem that computes the optimal trajectory, avoiding collisions with higher-priority vehicles and deciding the crossing order. The scheme is shown to be real-time capable and able to respond to sudden priority changes, e.g. if a vehicle receives an emergency call. Simulations reinforce the theoretical results.
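CBAA-M itself involves bidding rounds and conflict resolution; the sketch below only illustrates its consensus backbone, a max-consensus exchange by which vehicles on a communication graph all converge on the highest-priority bidder. The toy bids and network topology are assumptions for the example.

```python
def max_consensus(bids, neighbors, rounds):
    """Max-consensus over a communication graph.

    bids: {agent_id: priority bid}; neighbors: {agent_id: [neighbor ids]}.
    Each round, every agent keeps the best (bid, id) pair it has seen from
    itself and its neighbors; after enough rounds (graph diameter), all
    agents agree on the global maximum, i.e. the winning vehicle.
    """
    best = {i: (bids[i], i) for i in bids}
    for _ in range(rounds):
        best = {
            i: max([best[i]] + [best[j] for j in neighbors[i]])
            for i in bids
        }
    return best
```

On a three-vehicle chain 0–1–2 with bids 2.0, 5.0, 3.0, two rounds suffice for every vehicle to agree that vehicle 1 crosses first.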
The uncertainty in distributed renewable generation poses security threats to the real-time operation of distribution systems. The real-time dispatchable region (RTDR) can be used to assess the ability of power systems to accommodate renewable generation at a given base point. DC and linearized AC power flow models are typically used for bulk power systems, but they are not suitable for low-voltage distribution networks with large r/x ratios. To balance accuracy and computational efficiency, this paper proposes an RTDR model of AC distribution networks using tight convex relaxation. Convex hull relaxation is adopted to reformulate the AC power flow equations, and the convex hull is approximated by a polyhedron without much loss of accuracy. Furthermore, an efficient adaptive constraint generation algorithm is employed to construct an approximate RTDR to meet the requirements of real-time dispatch. Case studies on the modified IEEE 33-bus distribution system validate the computational efficiency and accuracy of the proposed method.
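The constraint-generation pattern used to build the polyhedral RTDR approximation can be illustrated on the simplest convex set, the unit disc: whenever a candidate point violates the set, a tangent (separating) cut is added, so the polyhedron tightens only where needed. This is a loose illustration of the pattern under invented data, not the paper's RTDR formulation.

```python
import math

def separating_cut(pt):
    """Tangent cut a.x <= 1 for the unit disc that cuts off an exterior point.

    The cut normal is the point's projection direction onto the disc boundary.
    """
    norm = math.hypot(pt[0], pt[1])
    return (pt[0] / norm, pt[1] / norm)

def constraint_generation(candidates):
    """Grow a polyhedral outer approximation of the unit disc adaptively:
    a cut is generated only for candidate points that violate the disc."""
    cuts = []
    for pt in candidates:
        if math.hypot(pt[0], pt[1]) > 1.0:   # violation check
            cuts.append(separating_cut(pt))
    return cuts
```

In the actual method, the "violation check" is a dispatch feasibility subproblem and each cut is a facet of the dispatchable region, but the add-cuts-only-where-violated logic is the same.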
Self-healing capability is one of the most critical factors for a resilient distribution system, which requires intelligent agents to automatically perform restorative actions online, including network reconfiguration and reactive power dispatch. These agents should be equipped with a predesigned decision policy to meet real-time requirements and handle highly complex $N-k$ scenarios. The disturbance randomness hampers the application of exploration-dominant algorithms like traditional reinforcement learning (RL), and the agent training problem under $N-k$ scenarios has not been thoroughly solved. In this paper, we propose the imitation learning (IL) framework to train such policies, where the agent will interact with an expert to learn its optimal policy, and therefore significantly improve the training efficiency compared with the RL methods. To handle tie-line operations and reactive power dispatch simultaneously, we design a hybrid policy network for such a discrete-continuous hybrid action space. We employ the 33-node system under $N-k$ disturbances to verify the proposed framework.
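A hybrid discrete-continuous policy of the kind described can be trained by imitation with a composite behavior-cloning loss: cross-entropy against the expert's discrete choice (e.g., which tie-line switch to operate) plus squared error against the expert's continuous setpoint (e.g., reactive power). The sketch below assumes such a composite loss; the exact loss and network in the paper may differ.

```python
import numpy as np

def hybrid_bc_loss(logits, cont_pred, expert_action, expert_setpoint):
    """Behavior-cloning loss for a discrete-continuous hybrid action.

    logits: policy scores over discrete actions (e.g., switch operations).
    cont_pred / expert_setpoint: continuous outputs (e.g., reactive power).
    Returns cross-entropy on the discrete part plus MSE on the continuous part.
    """
    z = logits - logits.max()                    # stable log-softmax
    log_probs = z - np.log(np.exp(z).sum())
    ce = -log_probs[expert_action]
    mse = float(np.mean((cont_pred - expert_setpoint) ** 2))
    return ce + mse
```

Minimizing this loss over expert demonstrations is the supervised-learning step that replaces the costly exploration of plain RL under $N-k$ disturbance scenarios.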
