The transition to a renewable energy system poses challenges for power grid operation and stability. Secondary control is key in restoring the power system to its reference state following a disturbance. Underestimating the necessary control capacity may require emergency measures, such as load shedding. Hence, a solid understanding of the emerging risks and the driving factors of control is needed. In this contribution, we establish an explainable machine learning model for the activation of secondary control power in Germany. Training gradient boosted trees, we obtain an accurate description of control activation. Using SHapley Additive exPlanations (SHAP) values, we investigate the dependencies between control activation and external features such as the generation mix, forecasting errors, and electricity market data. Our analysis thereby reveals drivers that lead to high reserve requirements in the German power system. Our transparent approach, utilizing open data and interpretable machine learning models, opens new avenues for scientific discovery.
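A minimal sketch of such a pipeline is given below, assuming hourly feature data with hypothetical column names (wind_generation, load_forecast_error, day_ahead_price, and so on) and an activated secondary control target; it uses LightGBM and the shap package to illustrate the general gradient-boosting-plus-SHAP approach described above, not the authors' actual implementation.

# Hedged sketch, not the authors' code. File name, feature columns,
# target column, and hyperparameters are all illustrative placeholders.
import lightgbm as lgb
import pandas as pd
import shap
from sklearn.model_selection import train_test_split

# Hypothetical input: hourly external features plus the activated
# secondary control power (aFRR) as regression target.
data = pd.read_csv("features_germany.csv")
features = ["wind_generation", "solar_generation",
            "load_forecast_error", "day_ahead_price", "hour_of_day"]
X, y = data[features], data["afrr_activation"]

# Chronological split (shuffle=False) to respect the time-series structure.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, shuffle=False)

# Gradient boosted trees, as in the contribution; settings are placeholders.
model = lgb.LGBMRegressor(n_estimators=500, learning_rate=0.05)
model.fit(X_train, y_train)

# SHAP values attribute each prediction to the input features, showing
# which drivers push control activation up or down for each hour.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)
shap.summary_plot(shap_values, X_test)

The summary plot ranks features by mean absolute SHAP value, which is how dependencies such as forecasting errors driving high reserve activation can be read off the trained model.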