Research into cascading failures in power-transmission networks requires detailed data on the capacity of individual transmission lines. However, these data are often unavailable to researchers. As a result, line limits are often modelled by assuming they are proportional to some average load, yet little research exists to support this assumption as being realistic. In this paper, we analyse the proportional-loading (PL) approach and compare it to two linear models that use voltage and initial power flow as variables. In conducting this modelling, we test the ability of artificial line limits to reproduce true line limits, the damage done during an attack and the order in which edges are lost. We also test how accurately these methods rank the relative performance of different attack strategies. We find that the linear models are the top-performing method, or close to the top, in all tests. In comparison, the tolerance value that produces the best PL limits changes depending on the test. The PL approach is a particularly poor fit when the line tolerance is less than two, which is the most commonly used value range in cascading-failure research. We also find indications that the accuracy of modelling line limits does not indicate how well a model will represent grid collapse. In addition, we find evidence that the network's topology can be used to estimate the system's true mean loading. The findings of this paper provide an understanding of the weaknesses of the PL approach and offer an alternative method of line-limit modelling.
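As a rough illustration of the two modelling approaches compared above, the sketch below generates artificial line limits under the PL assumption and under a simple linear model in initial flow and voltage. The function names, coefficients and the convention that PL limits scale the initial flow by a tolerance α are assumptions for illustration; in practice the linear coefficients would be fitted to known line-limit data.

```python
import numpy as np

def proportional_limits(initial_flows, alpha=1.5):
    """Proportional-loading (PL) sketch: each line's artificial limit is its
    initial power flow scaled by a tolerance parameter alpha."""
    return alpha * np.abs(np.asarray(initial_flows, dtype=float))

def linear_limits(initial_flows, voltages, a=0.0, b=1.0, c=0.0):
    """Hypothetical linear model: limit = a + b*|flow| + c*voltage.
    The coefficients a, b, c stand in for values fitted to known limits."""
    flows = np.abs(np.asarray(initial_flows, dtype=float))
    volts = np.asarray(voltages, dtype=float)
    return a + b * flows + c * volts

flows = [100.0, -40.0, 250.0]
print(proportional_limits(flows, alpha=2.0))  # limits of 200, 80 and 500
```

With a higher tolerance, every PL limit scales uniformly, whereas the linear model can weight heavily loaded or high-voltage lines differently.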
Knowledge of a power grid's topology during a cascading failure is an essential element of centralized blackout-prevention control, given that multiple islands typically form as a cascade progresses. Moreover, academic research on the interdependency between the cyber and physical layers of the grid indicates that power failures during a cascade may lead to outages in communication networks, which progressively reduce the observable area. These findings challenge the current literature on line-outage detection, which assumes that the grid remains a single connected component. We propose a new approach that eliminates this assumption. Following an islanding event, the buses forming each connected component are first identified, and further line outages within the individual islands are then detected. In addition to the power-system measurements, observable breaker statuses are integrated as constraints in our topology-identification algorithm. We also study the impact of error propagation on the estimation process, as reliance on previous estimates grows during a cascade. Finally, the estimated admittance matrix is used in preventive control of cascading failure, creating a closed-loop system. The impact of such interlinked estimation and control on the total load served is studied for the first time. Simulations on the IEEE 118-bus system and the 2,383-bus Polish network demonstrate the effectiveness of our approach.
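The island-identification step can be pictured with a minimal union-find sketch. The data layout here is invented for illustration; the paper's actual algorithm additionally fuses power-system measurements, whereas this sketch uses only the observed breaker statuses: buses joined by branches whose breakers are observed closed are grouped into connected components (islands).

```python
def find_islands(buses, branches, breaker_closed):
    """Group buses into islands (connected components), using only branches
    whose breaker status is observed as closed."""
    parent = {b: b for b in buses}

    def find(x):  # union-find root lookup with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for (i, j), closed in zip(branches, breaker_closed):
        if closed:
            parent[find(i)] = find(j)  # merge the two components

    islands = {}
    for b in buses:
        islands.setdefault(find(b), []).append(b)
    return sorted(islands.values())

# After the breaker on line (3, 4) opens, the 5-bus system splits in two.
print(find_islands([1, 2, 3, 4, 5],
                   [(1, 2), (2, 3), (3, 4), (4, 5)],
                   [True, True, False, True]))  # [[1, 2, 3], [4, 5]]
```

Subsequent outage detection would then run independently within each returned island.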
Power grid data are growing rapidly with the deployment of various sensors. These big data create huge opportunities for applying artificial-intelligence technologies to improve resilience and reliability. This paper introduces multiple real-world applications based on artificial intelligence that improve power-grid situational awareness and resilience, including event identification, inertia estimation, event location and magnitude estimation, data authentication, control, and stability assessment. These applications operate on a real-world system called FNET-GridEye, a wide-area measurement network and arguably the world's largest cyber-physical system for collecting power-grid big data. The applications show much better performance than conventional approaches and accomplish new tasks that are impossible to realize using conventional technologies. These encouraging results demonstrate that combining power-grid big data and artificial intelligence can uncover and capture the non-linear correlation between power-grid data and its stability indices, and will potentially enable many advanced applications that can significantly improve power-grid resilience.
This paper provides a detailed account of the impact of different offshore wind siting strategies on the design of the European power system. To this end, a two-stage method is proposed. In the first stage, a highly-granular siting problem identifies a suitable set of sites where offshore wind plants could be deployed according to a pre-specified criterion. Two siting schemes are analysed and compared within a realistic case study. These schemes essentially select a pre-specified number of sites so as to maximise their aggregate power output and their spatiotemporal complementarity, respectively. In addition, two variants of these siting schemes are provided, wherein the number of sites to be selected is specified on a country-by-country basis rather than Europe-wide. In the second stage, the subset of previously identified sites is passed to a capacity expansion planning (CEP) framework that sizes the power generation, transmission and storage assets that should be deployed and operated in order to satisfy pre-specified electricity demand levels at minimum cost. Results show that the complementarity-based siting criterion leads to system designs which are up to 5% cheaper than the ones relying on the power output-based criterion when offshore wind plants are deployed with no consideration for country-based deployment targets. Conversely, the power output-based scheme leads to system designs which are consistently 2% cheaper than the ones leveraging the complementarity-based siting strategy when such constraints are enforced. The robustness of the results is supported by a sensitivity analysis on offshore wind capital expenditure and inter-annual weather variability, respectively.
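The contrast between the two siting criteria can be caricatured in a few lines. This is a toy sketch with invented capacity-factor data and a simple coverage proxy for complementarity, not the paper's highly-granular formulation: one scheme ranks sites by mean output, the other greedily adds the site that best covers time steps the current selection leaves uncovered.

```python
import numpy as np

def select_by_output(cf, k):
    """Power output-based scheme: pick the k sites with the highest
    mean capacity factor (cf has shape [time steps, sites])."""
    return list(np.argsort(cf.mean(axis=0))[::-1][:k])

def select_by_complementarity(cf, k, threshold=0.3):
    """Greedy complementarity proxy: repeatedly add the site that covers the
    most time steps at which no already-selected site exceeds the threshold."""
    T, n = cf.shape
    covered = np.zeros(T, dtype=bool)
    chosen = []
    for _ in range(k):
        gains = [(~covered & (cf[:, j] >= threshold)).sum() if j not in chosen
                 else -1 for j in range(n)]
        best = int(np.argmax(gains))
        chosen.append(best)
        covered |= cf[:, best] >= threshold
    return chosen

# Sites 0 and 1 are windy at the same times; site 2 is windy when they are calm.
cf = np.array([[0.9, 0.8, 0.0],
               [0.8, 0.9, 0.0],
               [0.0, 0.1, 0.9],
               [0.1, 0.0, 0.8]])
print(sorted(select_by_output(cf, 2)), select_by_complementarity(cf, 2))
```

On this toy data the output-based scheme picks the two correlated sites, while the complementarity-based scheme trades some mean output for coverage of the calm periods, which is exactly the tension the cost comparison above explores.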
Frequency fluctuations in power grids, caused by unpredictable renewable energy sources, consumer behavior and trading, need to be balanced to ensure stable grid operation. Standard smart grid solutions to mitigate large frequency excursions are based on centrally collecting data and give rise to security and privacy concerns. Furthermore, control of fluctuations is often tested by employing Gaussian perturbations. Here, we demonstrate that power grid frequency fluctuations are in general non-Gaussian, implying that large excursions are more likely than expected based on Gaussian modeling. We consider real power grid frequency measurements from Continental Europe and compare them to stochastic models and predictions based on Fokker-Planck equations. Furthermore, we review a decentral smart grid control scheme to limit these fluctuations. In particular, we derive a scaling law of how decentralized control actions reduce the magnitude of frequency fluctuations and demonstrate the power of these theoretical predictions using a test grid. Overall, we find that decentral smart grid control may reduce grid frequency excursions due to both Gaussian and non-Gaussian power fluctuations and thus offers an alternative pathway for mitigating fluctuation-induced risks.
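A minimal sketch of why the Gaussian assumption matters (illustrative parameters only, not fitted to the Continental-European measurements): the same linearly damped frequency-deviation dynamics driven by Gaussian versus heavy-tailed (Student-t) noise produces visibly heavier tails, measured here by excess kurtosis, so large excursions are more frequent than a Gaussian fit would predict.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_freq(n, decay=0.5, eps=1e-2, heavy=False):
    """Euler sketch of a linearly damped frequency deviation driven by noise.
    heavy=True swaps Gaussian increments for Student-t (df=5) increments to
    mimic the non-Gaussian fluctuations observed in real grid-frequency data."""
    noise = rng.standard_t(df=5, size=n) if heavy else rng.standard_normal(n)
    w = np.zeros(n)
    for t in range(1, n):
        w[t] = (1.0 - decay) * w[t - 1] + eps * noise[t]
    return w

def excess_kurtosis(x):
    """Excess kurtosis: 0 for a Gaussian, positive for heavy tails."""
    x = x - x.mean()
    return (x ** 4).mean() / (x ** 2).mean() ** 2 - 3.0

g = excess_kurtosis(simulate_freq(200_000))
h = excess_kurtosis(simulate_freq(200_000, heavy=True))
print(g, h)  # the heavy-tailed run shows clearly larger excess kurtosis
```

In this picture, decentralized control corresponds to increasing the effective damping, which shrinks the magnitude of excursions for both noise types.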
Increasing penetration of renewable energy introduces significant uncertainty into power systems. Traditional simulation-based verification methods may not be applicable due to the unknown-but-bounded nature of the uncertainty sets. Emerging set-theoretic methods have been intensively investigated to tackle this challenge. This paper comprehensively reviews these methods, categorized by their underlying mathematical principles: set operation-based methods and passivity-based methods. Set operation-based methods are more computationally efficient, while passivity-based methods provide semi-analytical expressions of reachable sets, which can be readily employed for control. Other features of the different methods are also discussed and illustrated with numerical examples. A benchmark example is presented and solved by the different methods to verify their consistency.
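As a flavour of the set operation-based family, the toy sketch below (invented dynamics and bounds, not an example from the review) propagates an axis-aligned box through linear dynamics x⁺ = A x + w with an unknown-but-bounded disturbance w, producing an outer enclosure of the reachable states at each step.

```python
import numpy as np

def step_box(A, lo, hi, w_lo, w_hi):
    """One step of interval (box) reachability for x+ = A x + w.
    Splitting A into its positive and negative parts gives tight
    componentwise bounds on A x for x in the box [lo, hi]."""
    Ap = np.clip(A, 0.0, None)   # positive entries of A
    An = np.clip(A, None, 0.0)   # negative entries of A
    new_lo = Ap @ lo + An @ hi + w_lo
    new_hi = Ap @ hi + An @ lo + w_hi
    return new_lo, new_hi

A = np.array([[0.8, 0.1],
              [-0.05, 0.85]])                  # toy stable dynamics
lo, hi = np.array([-0.1, -0.1]), np.array([0.1, 0.1])   # initial box
w_lo, w_hi = np.array([-0.01, -0.01]), np.array([0.01, 0.01])  # disturbance box

for _ in range(10):
    lo, hi = step_box(A, lo, hi, w_lo, w_hi)
print(lo, hi)  # outer enclosure of states reachable after 10 steps
```

Boxes are the simplest set representation; zonotope- or ellipsoid-based variants trade tightness against the computational efficiency noted above.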