We report on a real-time demand response experiment with 100 controllable devices. The experiment reveals several key challenges in deploying a real-time demand response program, including time delays, uncertainties, characterization errors, multiple timescales, and nonlinearity, which have been largely ignored in previous studies. To resolve these practical issues, we develop and implement a two-level multi-loop control structure that integrates feed-forward proportional-integral controllers and optimization solvers in closed loops, eliminating steady-state errors and improving the dynamic performance of the overall building response. The proposed methods are validated by Hardware-in-the-Loop (HiL) tests.
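To make the inner-loop idea concrete, the following is a minimal sketch of a feed-forward term combined with a discrete-time PI controller tracking a power setpoint, as in the two-level structure the abstract describes. All names, gains, and the toy plant model are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch: feed-forward + discrete-time PI loop tracking a power
# setpoint. Gains, names, and the plant model are illustrative assumptions.

def pi_with_feedforward(setpoint, measurement, state, kp=0.5, ki=0.1,
                        dt=1.0, feedforward=0.0):
    """One PI update; `state` carries the integral of the tracking error."""
    error = setpoint - measurement
    state["integral"] += error * dt
    command = feedforward + kp * error + ki * state["integral"]
    return command, state

# Example: drive a first-order lag plant (a stand-in for the aggregate
# device response) toward a 50 kW target.
state = {"integral": 0.0}
power = 0.0
for step in range(60):
    u, state = pi_with_feedforward(50.0, power, state, feedforward=40.0)
    power += (u - power) * 0.2   # toy plant dynamics
print(f"power after 60 steps: {power:.1f} kW")
```

The integral term is what removes the steady-state error the abstract mentions: with feed-forward alone, any plant mismatch leaves a persistent offset.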
Large-scale integration of renewables in power systems gives rise to new challenges for maintaining synchronization and frequency stability under volatile and uncertain power flow conditions. To ensure safe operation, the system must maintain adequate disturbance rejection capability at the time scales of both rotor angle and system frequency dynamics. This calls for flexibility to be exploited on both the generation and demand sides, compensating for volatility and ensuring stability at the two separate time scales. This article proposes a hierarchical power flow control architecture that involves transmission and distribution networks as well as individual buildings to enhance both the small-signal rotor angle stability and the frequency stability of the transmission network. The proposed architecture consists of a transmission-level optimizer that enhances system damping ratios, a distribution-level controller that follows transmission commands and provides frequency support, and a building-level scheduler that accounts for quality of service while following the distribution-level targets. We validate the feasibility and performance of the whole control architecture through real-time hardware-in-the-loop tests involving real-world transmission and distribution network models along with real devices at the Stone Edge Farm Microgrid.
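The three-level command cascade can be illustrated with a toy sketch; the interfaces, numbers, and the droop law below are assumptions chosen for exposition, not the paper's actual controllers.

```python
# Illustrative sketch of the transmission -> distribution -> building
# command cascade. All values and the droop law are assumptions.

def transmission_optimizer():
    """Placeholder for the damping-ratio optimizer: returns a power
    adjustment command (MW) for each distribution feeder."""
    return {"feeder_1": 2.0, "feeder_2": -1.5}

def distribution_controller(command_mw, freq_hz, droop_mw_per_hz=5.0,
                            nominal_hz=60.0):
    """Track the transmission command and add primary frequency support."""
    return command_mw + droop_mw_per_hz * (nominal_hz - freq_hz)

def building_scheduler(target_mw, qos_min_mw, qos_max_mw):
    """Follow the distribution target while respecting quality of service."""
    return min(max(target_mw, qos_min_mw), qos_max_mw)

commands = transmission_optimizer()
feeder_target = distribution_controller(commands["feeder_1"], freq_hz=59.95)
building_mw = building_scheduler(feeder_target, qos_min_mw=0.0, qos_max_mw=3.0)
print(f"building setpoint: {building_mw:.2f} MW")
```

The point of the cascade is separation of concerns: each level needs only the interface of the level below, so the slow transmission-level optimization and the fast frequency support can run at their own time scales.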
Emerging smart grid technology opens up the possibility of two-way communication between customers and energy utilities. Demand Response Management (DRM) offers the promise of saving money for commercial customers and households while helping utilities operate more efficiently. In this paper, an Incentive-based Demand Response Optimization (IDRO) model is proposed to efficiently schedule household appliances for minimum usage during peak hours. The proposed method is a multi-objective optimization technique based on a Nonlinear Auto-Regressive Neural Network (NAR-NN) that considers energy provided by the utility and by a rooftop-installed photovoltaic (PV) system. The proposed method is tested and verified using 300 case studies (households). Data analysis over a period of one year shows a noticeable improvement in power factor and customers' bills.
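A toy greedy scheduler conveys the flavor of shifting deferrable appliances out of peak hours while netting off rooftop PV first. The appliance data, tariff penalty, and greedy rule here are illustrative stand-ins for the paper's NAR-NN-based multi-objective optimization.

```python
# Toy appliance scheduler: pick each appliance's start hour to avoid
# peak hours and absorb rooftop PV. All data and rules are illustrative.

PEAK_HOURS = set(range(17, 21))                              # 5-9 pm
pv_kw = {h: max(0.0, 3.0 - abs(h - 12) * 0.5) for h in range(24)}

appliances = [
    {"name": "dishwasher", "kw": 1.2, "duration": 2},
    {"name": "washer",     "kw": 0.8, "duration": 1},
]

def schedule(appliance):
    """Pick the cheapest feasible start hour: avoid peaks, prefer PV."""
    def cost(start):
        hours = range(start, start + appliance["duration"])
        grid = sum(max(0.0, appliance["kw"] - pv_kw[h]) for h in hours)
        penalty = sum(10.0 for h in hours if h in PEAK_HOURS)
        return grid + penalty
    return min(range(24 - appliance["duration"]), key=cost)

for a in appliances:
    print(a["name"], "starts at", schedule(a), ":00")
```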
This paper proposes a novel end-to-end deep learning framework that simultaneously identifies demand baselines and the incentive-based agent demand response model from net demand measurements and incentive signals. The framework comprises two modules: 1) the decision-making process of a demand response participant is represented as a differentiable optimization layer, which takes the incentive signal as input and predicts the user's response; 2) the baseline demand forecast is represented as a standard neural network model, which takes relevant features and predicts the user's baseline demand. These two intermediate predictions are combined to form the net demand forecast. We then propose a gradient-descent approach that backpropagates the net demand forecast errors to jointly update the weights of the agent model and of the baseline demand forecast. We demonstrate the effectiveness of our approach through computational experiments with synthetic demand response traces and a large-scale real-world demand response dataset. Our results show that the approach accurately identifies the demand response model, even without any prior knowledge of the baseline demand.
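The joint-identification idea can be sketched in a few lines of PyTorch: a neural baseline forecaster and a differentiable response model share one net-demand loss, and gradients from that loss update both. The linear clamped response used here is an illustrative stand-in for the paper's differentiable optimization layer, and all dimensions and data are synthetic assumptions.

```python
# Sketch: jointly learn a baseline forecaster and a response model by
# backpropagating net-demand error. The clamped linear response is an
# illustrative stand-in for a differentiable optimization layer.

import torch

baseline_net = torch.nn.Linear(4, 1)             # features -> baseline kW
alpha = torch.nn.Parameter(torch.tensor(0.5))    # response sensitivity
opt = torch.optim.Adam(list(baseline_net.parameters()) + [alpha], lr=0.01)

def net_demand(features, incentive):
    baseline = baseline_net(features).squeeze(-1)
    response = torch.clamp(alpha * incentive, min=0.0)  # curtailment >= 0
    return baseline - response

# One training step on a synthetic batch (features, incentive, net kW).
features = torch.randn(32, 4)
incentive = torch.rand(32)
measured = torch.randn(32) + 5.0

loss = torch.mean((net_demand(features, incentive) - measured) ** 2)
opt.zero_grad()
loss.backward()                # gradients reach both modules jointly
opt.step()
```

Because only the net demand is observed, neither module is supervised directly; the shared loss is what lets the two intermediate predictions be disentangled.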
Motivated by FERC's recent direction and ever-growing interest in cloud adoption by power utilities, a Task Force was established to assist power system practitioners with secure, reliable, and cost-effective adoption of cloud technology to meet various business needs. This paper summarizes the business drivers, challenges, guidance, and best practices for cloud adoption in power systems from the Task Force's perspective, after extensive review and deliberation by its members, who include grid operators, utility companies, software vendors, and cloud providers. The paper begins by enumerating the business drivers for cloud adoption in the power industry. It follows with a discussion of the challenges and risks of migrating power grid utility workloads to the cloud. Next, for each challenge or risk, the paper provides corresponding guidance. Importantly, the guidance is directed toward power industry professionals who are considering cloud solutions but remain hesitant about practical execution. Finally, to tie the sections together, the paper documents real-world use cases of cloud technology in the power system domain, which power industry practitioners and software vendors can draw on when designing and selecting their own future cloud solutions. We hope that the information in this paper will serve as useful guidance for the development of NERC guidelines and standards relevant to cloud adoption in the industry.
Self-healing capability is one of the most critical factors for a resilient distribution system, which requires intelligent agents to automatically perform restorative actions online, including network reconfiguration and reactive power dispatch. These agents should be equipped with a predesigned decision policy to meet real-time requirements and handle highly complex $N-k$ scenarios. The randomness of disturbances hampers the application of exploration-dominant algorithms such as traditional reinforcement learning (RL), and the agent-training problem under $N-k$ scenarios has not been thoroughly solved. In this paper, we propose an imitation learning (IL) framework to train such policies, in which the agent interacts with an expert to learn its optimal policy, significantly improving training efficiency compared with RL methods. To handle tie-line operations and reactive power dispatch simultaneously, we design a hybrid policy network for this discrete-continuous hybrid action space. We employ the 33-node system under $N-k$ disturbances to verify the proposed framework.
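A behavior-cloning sketch of such a hybrid policy network is shown below: a shared trunk feeds a discrete head for tie-line switching and a continuous head for reactive power setpoints, trained against expert demonstrations. Layer sizes, dimensions, and the synthetic batch are assumptions, not the paper's architecture.

```python
# Sketch: hybrid policy network for a discrete-continuous action space,
# trained by behavior cloning on expert actions. Sizes are assumptions
# (e.g., obs_dim=66 for two features per node on a 33-node feeder).

import torch

class HybridPolicy(torch.nn.Module):
    def __init__(self, obs_dim=66, n_tie_lines=5, n_var_devices=3):
        super().__init__()
        self.trunk = torch.nn.Sequential(
            torch.nn.Linear(obs_dim, 128), torch.nn.ReLU())
        self.tie_head = torch.nn.Linear(128, n_tie_lines)    # discrete logits
        self.var_head = torch.nn.Linear(128, n_var_devices)  # continuous Q

    def forward(self, obs):
        h = self.trunk(obs)
        return self.tie_head(h), torch.tanh(self.var_head(h))

policy = HybridPolicy()
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

# One behavior-cloning step on a synthetic batch of expert demonstrations.
obs = torch.randn(64, 66)
expert_tie = torch.randint(0, 5, (64,))        # expert tie-line choice
expert_q = torch.rand(64, 3) * 2 - 1           # expert reactive setpoints

logits, q = policy(obs)
loss = (torch.nn.functional.cross_entropy(logits, expert_tie)
        + torch.nn.functional.mse_loss(q, expert_q))
opt.zero_grad()
loss.backward()
opt.step()
```

Supervising against an expert avoids the random exploration that makes plain RL inefficient under $N-k$ disturbance randomness, which is the efficiency argument the abstract makes.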