
Exploring deterministic frequency deviations with explainable AI

Posted by: Johannes Kruse
Publication date: 2021
Research field: Informatics Engineering
Paper language: English





Deterministic frequency deviations (DFDs) critically affect power grid frequency quality and power system stability. A better understanding of these events is urgently needed as frequency deviations have been growing in the European grid in recent years. DFDs are partially explained by the rapid adjustment of power generation following the intervals of electricity trading, but this intuitive picture fails especially before and around noonday. In this article, we provide a detailed analysis of DFDs and their relation to external features using methods from explainable Artificial Intelligence. We establish a machine learning model that well describes the daily cycle of DFDs and elucidate key interdependencies using SHapley Additive exPlanations (SHAP). Thereby, we identify solar ramps as critical to explain patterns in the Rate of Change of Frequency (RoCoF).
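The workflow described in the abstract (fit a machine learning model to the daily cycle of DFDs, then attribute its predictions to external features with SHAP) can be sketched with standard open-source tools. The snippet below is a minimal illustration, not the authors' code: the gradient-boosted regressor, the synthetic data, and feature names such as solar_ramp and load_ramp are assumptions chosen to mirror the quantities mentioned in the abstract.

```python
# Minimal sketch: gradient-boosted model + SHAP attribution (illustrative only).
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
n = 2000
X = pd.DataFrame({
    "solar_ramp": rng.normal(size=n),         # change in solar generation (assumed feature)
    "load_ramp": rng.normal(size=n),          # change in demand (assumed feature)
    "scheduled_gen_ramp": rng.normal(size=n), # trading-interval generation ramp (assumed feature)
    "hour_of_day": rng.integers(0, 24, size=n),
})
# Synthetic target standing in for RoCoF, driven mainly by solar and generation ramps.
y = 0.8 * X["solar_ramp"] + 0.5 * X["scheduled_gen_ramp"] + 0.1 * rng.normal(size=n)

model = GradientBoostingRegressor().fit(X, y)

# TreeExplainer computes SHAP values efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Mean absolute SHAP value per feature gives a global importance ranking.
importance = pd.Series(np.abs(shap_values).mean(axis=0), index=X.columns)
print(importance.sort_values(ascending=False))
```

Ranking features by their mean absolute SHAP value is the standard way to read off which inputs, for example solar ramps, dominate the model's RoCoF predictions.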




Read also

Stable operation of the electrical power system requires the power grid frequency to stay within strict operational limits. With millions of consumers and thousands of generators connected to a power grid, detailed human-built models can no longer capture the full dynamics of this complex system. Modern machine learning algorithms provide a powerful alternative for system modelling and prediction, but the intrinsic black-box character of many models impedes scientific insight and poses severe security risks. Here, we show how eXplainable AI (XAI) alleviates these problems by revealing critical dependencies and influences on the power grid frequency. We accurately predict frequency stability indicators (such as RoCoF and Nadir) for three major European synchronous areas and identify key features that determine power grid stability. Load ramps and specific generation ramps, but also prices and forecast errors, are central to understanding and stabilizing the power grid.
Knowledge graph embeddings are now a widely adopted approach to knowledge representation in which entities and relationships are embedded in vector spaces. In this chapter, we introduce the reader to the concept of knowledge graph embeddings by explaining what they are, how they can be generated and how they can be evaluated. We summarize the state of the art in this field by describing the approaches that have been introduced to represent knowledge in the vector space. In relation to knowledge representation, we consider the problem of explainability and discuss models and methods for explaining predictions obtained via knowledge graph embeddings.
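As a concrete illustration of the embedding idea summarized above, the toy sketch below scores triples in the style of TransE, one of the classical knowledge graph embedding models. The entity names, the relation name and the random (untrained) vectors are purely illustrative assumptions, not material from the chapter.

```python
# Toy TransE-style scoring: a triple (h, r, t) is plausible when h + r is close to t.
import numpy as np

rng = np.random.default_rng(0)
dim = 8
entities = {name: rng.normal(size=dim) for name in ["Berlin", "Germany", "Paris", "France"]}
relations = {"capital_of": rng.normal(size=dim)}

def transe_score(head: str, rel: str, tail: str) -> float:
    """Negative L2 distance: higher means the triple looks more plausible."""
    h, r, t = entities[head], relations[rel], entities[tail]
    return -float(np.linalg.norm(h + r - t))

# In practice the vectors are trained so that true triples such as
# ("Berlin", "capital_of", "Germany") score higher than corrupted ones.
print(transe_score("Berlin", "capital_of", "Germany"))
print(transe_score("Berlin", "capital_of", "France"))
```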
AI systems have seen significant adoption in various domains. At the same time, further adoption in some domains is hindered by the inability to fully trust that an AI system will not harm a human. Besides concerns about fairness and privacy, transparency and explainability are key to developing trust in AI systems. As stated in descriptions of trustworthy AI, trust comes through understanding: how AI-led decisions are made and which determining factors were included are crucial to understand. The subarea of explaining AI systems has come to be known as XAI. Multiple aspects of an AI system can be explained; these include biases the data might have, a lack of data points in a particular region of the example space, the fairness of data gathering, feature importances, etc. However, it is also critical to have human-centered explanations that are directly related to decision-making, similar to how a domain expert makes decisions based on domain knowledge, including well-established, peer-validated explicit guidelines. To understand and validate an AI system's outcomes (such as classifications, recommendations and predictions), and thereby develop trust in the AI system, it is necessary to involve explicit domain knowledge that humans understand and use.
The overarching goal of Explainable AI is to develop systems that not only exhibit intelligent behaviours but are also able to explain their rationale and reveal insights. In explainable machine learning, methods that produce a high level of prediction accuracy as well as transparent explanations are valuable. In this work, we present an explainable classification method. Our method works by first constructing a symbolic Knowledge Base from the training data and then performing probabilistic inference on this Knowledge Base with linear programming. Our approach achieves a level of learning performance comparable to that of traditional classifiers such as random forests, support vector machines and neural networks. It identifies decisive features that are responsible for a classification as explanations and produces results similar to those found by SHAP, a state-of-the-art Shapley-value-based method. Our algorithms perform well on a range of synthetic and non-synthetic data sets.
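The specific combination of a symbolic knowledge base and linear programming belongs to the cited work, but the generic technique it builds on, bounding probabilities by a linear program over possible worlds (Nilsson-style probabilistic logic), can be illustrated on a tiny propositional example. The sketch below is an assumption-laden illustration of that generic technique, not the paper's algorithm; the two propositions and their probabilities are invented.

```python
# Probabilistic inference over a tiny propositional knowledge base via linear programming.
from itertools import product
import numpy as np
from scipy.optimize import linprog

# Two propositions: A = "decisive feature present", B = "instance is positive".
worlds = list(product([0, 1], repeat=2))  # all truth assignments (A, B)

def indicator(formula):
    """Row vector: 1 where the formula holds in a possible world."""
    return np.array([1.0 if formula(a, b) else 0.0 for a, b in worlds])

# Knowledge base (probabilities are assumptions): P(A) = 0.7 and P(A -> B) = 0.9.
A_eq = np.vstack([
    np.ones(len(worlds)),                  # world probabilities sum to 1
    indicator(lambda a, b: a == 1),        # constraint on P(A)
    indicator(lambda a, b: (not a) or b),  # constraint on P(A -> B)
])
b_eq = np.array([1.0, 0.7, 0.9])

query = indicator(lambda a, b: b == 1)     # we want entailed bounds on P(B)
bounds = [(0, 1)] * len(worlds)

lo = linprog(query, A_eq=A_eq, b_eq=b_eq, bounds=bounds).fun
hi = -linprog(-query, A_eq=A_eq, b_eq=b_eq, bounds=bounds).fun
print(f"P(B) is entailed to lie in [{lo:.2f}, {hi:.2f}]")  # [0.60, 0.90]
```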
Central to the concept of multi-domain operations (MDO) is the utilization of an intelligence, surveillance, and reconnaissance (ISR) network consisting of overlapping systems of remote and autonomous sensors and human intelligence, distributed among multiple partners. Realising this concept requires advancement in both artificial intelligence (AI), for improved distributed data analytics, and intelligence augmentation (IA), for improved human-machine cognition. The contribution of this paper is threefold: (1) we map the coalition situational understanding (CSU) concept to MDO ISR requirements, paying particular attention to the need for assured and explainable AI to allow robust human-machine decision-making where assets are distributed among multiple partners; (2) we present illustrative vignettes for AI and IA in MDO ISR, including human-machine teaming, dense urban terrain analysis, and enhanced asset interoperability; (3) we appraise the state of the art in explainable AI in relation to the vignettes, with a focus on human-machine collaboration to achieve more rapid and agile coalition decision-making. The union of these three elements is intended to show the potential value of a CSU approach in the context of MDO ISR, grounded in three distinct use cases, and to highlight why explainability is key in the multi-partner coalition setting.
