
Secondary control activation analysed and predicted with explainable AI

Published by Johannes Kruse
Publication date: 2021
Research language: English





The transition to a renewable energy system poses challenges for power grid operation and stability. Secondary control is key in restoring the power system to its reference state following a disturbance. Underestimating the necessary control capacity may require emergency measures, such as load shedding. Hence, a solid understanding of the emerging risks and the driving factors of control is needed. In this contribution, we establish an explainable machine learning model for the activation of secondary control power in Germany. Training gradient boosted trees, we obtain an accurate description of control activation. Using SHapley Additive exPlanations (SHAP) values, we investigate the dependency between control activation and external features such as the generation mix, forecasting errors, and electricity market data. Thereby, our analysis reveals drivers that lead to high reserve requirements in the German power system. Our transparent approach, utilizing open data and making machine learning models interpretable, opens new avenues for scientific discovery.
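As a rough sketch of this kind of pipeline (not the authors' actual code), the example below fits a gradient-boosted-tree regressor and computes SHAP attributions using the open-source lightgbm and shap libraries; the feature names and the synthetic target are hypothetical stand-ins for the German control-power data.

import numpy as np
import lightgbm as lgb
import shap

rng = np.random.default_rng(0)
# Hypothetical external features; the paper's actual feature set is richer.
features = ["wind_generation", "solar_generation",
            "load_forecast_error", "day_ahead_price"]
X = rng.normal(size=(1000, len(features)))
# Synthetic stand-in for the activated secondary control power.
y = 2.0 * X[:, 2] - 0.5 * X[:, 3] + rng.normal(scale=0.1, size=1000)

model = lgb.LGBMRegressor(n_estimators=200).fit(X, y)

# TreeExplainer yields exact SHAP values for tree ensembles: each value
# is one feature's additive contribution to one prediction.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)
print(dict(zip(features, np.abs(shap_values).mean(axis=0))))

Mean absolute SHAP values, as printed here, give a simple global ranking of which drivers matter most for the predicted activation.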




Read also

Stable operation of the electrical power system requires the power grid frequency to stay within strict operational limits. With millions of consumers and thousands of generators connected to a power grid, detailed human-built models can no longer capture the full dynamics of this complex system. Modern machine learning algorithms provide a powerful alternative for system modelling and prediction, but the intrinsic black-box character of many models impedes scientific insights and poses severe security risks. Here, we show how eXplainable AI (XAI) alleviates these problems by revealing critical dependencies and influences on the power grid frequency. We accurately predict frequency stability indicators (such as RoCoF and Nadir) for three major European synchronous areas and identify key features that determine power grid stability. Load ramps and specific generation ramps, but also prices and forecast errors, are central to understanding and stabilizing the power grid.
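As a reminder of what the two indicators measure, the sketch below computes them from a frequency trace using simple textbook definitions; the sampling rate and fitting window are illustrative assumptions, not the paper's exact estimation procedure.

import numpy as np

def rocof(f, dt=1.0, window=30):
    # Rate of Change of Frequency (Hz/s): slope of a linear fit over
    # the first `window` samples after the reference time.
    t = np.arange(window) * dt
    slope, _ = np.polyfit(t, f[:window], 1)
    return slope

def nadir(f, f_ref=50.0):
    # Nadir: largest frequency deviation from nominal within the trace
    # (negative for under-frequency events).
    return np.min(f) - f_ref

f = 50.0 - 0.2 * (1 - np.exp(-np.arange(300) / 60.0))  # synthetic trace
print(rocof(f), nadir(f))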
We consider the problem of controlling an unknown linear quadratic Gaussian (LQG) system consisting of multiple subsystems connected over a network. Our goal is to minimize and quantify the regret (i.e. loss in performance) of our strategy with respect to an oracle who knows the system model. Viewing the interconnected subsystems globally and directly using existing LQG learning algorithms for the global system results in a regret that increases super-linearly with the number of subsystems. Instead, we propose a new Thompson sampling based learning algorithm which exploits the structure of the underlying network. We show that the expected regret of the proposed algorithm is bounded by $\tilde{\mathcal{O}}\big(n\sqrt{T}\big)$ where $n$ is the number of subsystems, $T$ is the time horizon and the $\tilde{\mathcal{O}}(\cdot)$ notation hides logarithmic terms in $n$ and $T$. Thus, the regret scales linearly with the number of subsystems. We present numerical experiments to illustrate the salient features of the proposed algorithm.
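The abstract leaves the regret implicit; one standard formalization in the LQG learning literature (the paper's precise definition may differ in details) is $R(T) = \mathbb{E}\big[\sum_{t=1}^{T} c(x_t, u_t)\big] - T\, J(\theta_\ast)$, where $c(x_t, u_t)$ is the per-step quadratic cost and $J(\theta_\ast)$ is the optimal average cost achievable by the oracle that knows the true model $\theta_\ast$. The bound $\tilde{\mathcal{O}}\big(n\sqrt{T}\big)$ then says this gap grows only as $\sqrt{T}$ in time while scaling linearly in the number of subsystems $n$.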
This paper proposes an off-line algorithm, called Recurrent Model Predictive Control (RMPC), to solve general nonlinear finite-horizon optimal control problems. Unlike traditional Model Predictive Control (MPC) algorithms, it can make full use of the current computing resources and adaptively select the longest model prediction horizon. Our algorithm employs a recurrent function to approximate the optimal policy, which maps the system states and reference values directly to the control inputs. The number of prediction steps is equal to the number of recurrent cycles of the learned policy function. With an arbitrary initial policy function, the proposed RMPC algorithm can converge to the optimal policy by directly minimizing the designed loss function. We further prove the convergence and optimality of the RMPC algorithm through the Bellman optimality principle, and demonstrate its generality and efficiency using two numerical examples.
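A minimal sketch of the recurrent-policy idea follows; the cell structure, dimensions, and random weights are illustrative assumptions, and the actual RMPC policy is obtained by minimizing the paper's designed loss rather than being initialized randomly.

import numpy as np

def policy(x, x_ref, W, n_cycles):
    # Apply one shared recurrent cell `n_cycles` times, then read out
    # the control input; each cycle corresponds to one prediction step.
    h = np.zeros(W["Wh"].shape[0])
    for _ in range(n_cycles):
        h = np.tanh(W["Wh"] @ h + W["Wx"] @ np.concatenate([x, x_ref]))
    return W["Wu"] @ h

rng = np.random.default_rng(0)
W = {"Wh": 0.1 * rng.normal(size=(16, 16)),
     "Wx": 0.1 * rng.normal(size=(16, 4)),
     "Wu": 0.1 * rng.normal(size=(1, 16))}
print(policy(np.ones(2), np.zeros(2), W, n_cycles=10))  # 10-step horizon

Running more cycles with the same weights lengthens the effective prediction horizon, which is what lets RMPC adapt the horizon to the available computing time.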
The ongoing energy transition challenges the stability of the electrical power system. Stable operation of the electrical power grid requires both the voltage (amplitude) and the frequency to stay within operational bounds. While much research has focused on frequency dynamics and stability, the voltage dynamics has been neglected. Here, we study frequency and voltage stability in the case of the simplest network (two nodes) and an extended all-to-all network via linear stability and bulk analysis. In particular, our linear stability analysis of the network shows that secondary frequency control guarantees the stability of a particular electric network. Even more interestingly, although we only consider secondary frequency control, we observe a stabilizing effect on the voltage dynamics, especially in our numerical bulk analysis.
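The linear stability criterion invoked here is the standard one; as a brief reminder (generic, not specific to this paper's model), linearizing the coupled frequency-voltage dynamics $\dot{x} = f(x)$ about an operating point $x^\ast$ gives $\delta\dot{x} = J\,\delta x$ with Jacobian $J = \partial f / \partial x \,\big|_{x^\ast}$, and the operating point is linearly stable when every eigenvalue of $J$ has negative real part. The stabilizing effect of secondary frequency control can then be read as a shift of these eigenvalues into the left half-plane.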
Explainability of AI systems is critical for users to take informed actions and hold systems accountable. While opening the opaque box is important, understanding who opens the box can govern whether the Human-AI interaction is effective. In this paper, we conduct a mixed-methods study of how two different groups of whos, people with and without a background in AI, perceive different types of AI explanations. These groups were chosen to look at how disparities in AI backgrounds can exacerbate the creator-consumer gap. We quantitatively share what the perceptions are along five dimensions: confidence, intelligence, understandability, second chance, and friendliness. Qualitatively, we highlight how the AI background influences each group's interpretations and elucidate why the differences might exist through the lenses of appropriation and cognitive heuristics. We find that (1) both groups had unwarranted faith in numbers, to different extents and for different reasons, (2) each group found explanatory values in different explanations that went beyond the usage we designed them for, and (3) each group had different requirements of what counts as humanlike explanations. Using our findings, we discuss potential negative consequences such as harmful manipulation of user trust and propose design interventions to mitigate them. By bringing conscious awareness to how and why AI backgrounds shape perceptions of potential creators and consumers in XAI, our work takes a formative step in advancing a pluralistic Human-centered Explainable AI discourse.


