
Directions for Explainable Knowledge-Enabled Systems

Published by: Shruthi Chari
Publication date: 2020
Research field: Informatics engineering
Paper language: English





Interest in the field of Explainable Artificial Intelligence has been growing for decades and has accelerated recently. As Artificial Intelligence models have become more complex, and often more opaque, with the incorporation of complex machine learning techniques, explainability has become more critical. Recently, researchers have been investigating and tackling explainability with a user-centric focus, looking for explanations to consider trustworthiness, comprehensibility, explicit provenance, and context-awareness. In this chapter, we leverage our survey of explanation literature in Artificial Intelligence and closely related fields and use these past efforts to generate a set of explanation types that we feel reflect the expanded needs of explanation for today's artificial intelligence applications. We define each type and provide an example question that would motivate the need for this style of explanation. We believe this set of explanation types will help future system designers in their generation and prioritization of requirements and further help generate explanations that are better aligned to users' and situational needs.




Read also

Explainability has been an important goal since the early days of Artificial Intelligence. Several approaches for producing explanations have been developed. However, many of these approaches were tightly coupled with the capabilities of the artificial intelligence systems at the time. With the proliferation of AI-enabled systems in sometimes critical settings, there is a need for them to be explainable to end-users and decision-makers. We present a historical overview of explainable artificial intelligence systems, with a focus on knowledge-enabled systems, spanning the expert systems, cognitive assistants, semantic applications, and machine learning domains. Additionally, borrowing from the strengths of past approaches and identifying gaps needed to make explanations user- and context-focused, we propose new definitions for explanations and explainable knowledge-enabled systems.
This paper presents an eXplainable Fault Detection and Diagnosis System (XFDDS) for incipient faults in PV panels. The XFDDS is a hybrid approach that combines the model-based and data-driven framework. Model-based FDD for PV panels lacks high fidelity models at low irradiance conditions for detecting incipient faults. To overcome this, a novel irradiance based three diode model (IB3DM) is proposed. It is a nine parameter model that provides higher accuracy even at low irradiance conditions, an important aspect for detecting incipient faults from noise. To exploit PV data, extreme gradient boosting (XGBoost) is used due to its ability to detect incipient faults. Lack of explainability, feature variability for sample instances, and false alarms are challenges with data-driven FDD methods. These shortcomings are overcome by hybridization of XGBoost and IB3DM, and by using eXplainable Artificial Intelligence (XAI) techniques. To combine XGBoost and IB3DM, a fault-signature metric is proposed that helps reduce false alarms and also triggers an explanation on detecting incipient faults. To provide explainability, an XAI application is developed. It uses the local interpretable model-agnostic explanations (LIME) framework and provides explanations on classifier outputs for data instances. These explanations help field engineers/technicians in performing troubleshooting and maintenance operations. The proposed XFDDS is illustrated using experiments on different PV technologies, and our results demonstrate the perceived benefits.
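To make the LIME step of such a pipeline concrete, the following is a minimal sketch, assuming synthetic data and illustrative feature names rather than the authors' actual XFDDS code, of explaining an XGBoost classifier's per-instance output with LIME:

```python
# Hypothetical sketch: an XGBoost classifier on PV measurements whose per-instance
# predictions are explained with LIME. The feature names, synthetic data, and labels
# below are illustrative assumptions, not the paper's implementation.
import numpy as np
from xgboost import XGBClassifier
from lime.lime_tabular import LimeTabularExplainer

rng = np.random.default_rng(0)
feature_names = ["irradiance", "module_temp", "v_mp", "i_mp"]  # assumed features
X_train = rng.normal(size=(500, 4))
y_train = rng.integers(0, 2, size=500)  # 0 = healthy, 1 = incipient fault (synthetic)

clf = XGBClassifier(n_estimators=100, max_depth=4)
clf.fit(X_train, y_train)

explainer = LimeTabularExplainer(
    X_train,
    feature_names=feature_names,
    class_names=["healthy", "incipient fault"],
    mode="classification",
)

# Explain one instance: which features pushed the classifier toward "fault"?
instance = X_train[0]
explanation = explainer.explain_instance(instance, clf.predict_proba, num_features=4)
for feature, weight in explanation.as_list():
    print(f"{feature}: {weight:+.3f}")
```

In the paper's framework, such per-instance attributions are what get surfaced to field engineers when the fault-signature metric flags an incipient fault.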
Explainability is essential for autonomous vehicles and other robotics systems interacting with humans and other objects during operation. Humans need to understand and anticipate the actions taken by the machines for trustful and safe cooperation. In this work, we aim to enable the explainability of an autonomous driving system at the design stage by incorporating expert domain knowledge into the model. We propose Grounded Relational Inference (GRI). It models an interactive system's underlying dynamics by inferring an interaction graph representing the agents' relations. We ensure an interpretable interaction graph by grounding the relational latent space into semantic behaviors defined with expert domain knowledge. We demonstrate that it can model interactive traffic scenarios under both simulation and real-world settings, and generate interpretable graphs explaining the vehicles' behavior by their interactions.
AI systems have seen significant adoption in various domains. At the same time, further adoption in some domains is hindered by inability to fully trust an AI system that it will not harm a human. Besides the concerns for fairness and privacy, transparency and explainability are key to developing trust in AI systems. As stated in describing trustworthy AI, "Trust comes through understanding." How AI-led decisions are made and what determining factors were included are crucial to understand. The subarea of explaining AI systems has come to be known as XAI. Multiple aspects of an AI system can be explained; these include biases that the data might have, lack of data points in a particular region of the example space, fairness of gathering the data, feature importances, etc. However, besides these, it is critical to have human-centered explanations that are directly related to decision-making, similar to how a domain expert makes decisions based on domain knowledge, including well-established, peer-validated explicit guidelines. To understand and validate an AI system's outcomes (such as classification, recommendations, predictions), which leads to developing trust in the AI system, it is necessary to involve explicit domain knowledge that humans understand and use.
Knowledge graph embeddings are now a widely adopted approach to knowledge representation in which entities and relationships are embedded in vector spaces. In this chapter, we introduce the reader to the concept of knowledge graph embeddings by explaining what they are, how they can be generated and how they can be evaluated. We summarize the state-of-the-art in this field by describing the approaches that have been introduced to represent knowledge in the vector space. In relation to knowledge representation, we consider the problem of explainability, and discuss models and methods for explaining predictions obtained via knowledge graph embeddings.
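As a concrete illustration of the general idea, one widely used embedding model is TransE, which scores a triple (h, r, t) by how well the head embedding translated by the relation embedding approximates the tail embedding. The sketch below uses randomly initialized vectors and illustrative entity/relation names; it is not drawn from the chapter itself:

```python
# Minimal TransE-style scoring sketch: entities and relations share one vector
# space, and a triple (head, relation, tail) is scored by ||h + r - t||.
# Embeddings here are random placeholders standing in for trained ones.
import numpy as np

rng = np.random.default_rng(0)
dim = 50
entities = {name: rng.normal(size=dim) for name in ["Paris", "France", "Berlin", "Germany"]}
relations = {"capital_of": rng.normal(size=dim)}

def transe_score(head: str, relation: str, tail: str) -> float:
    """Lower is better: distance between (head + relation) and tail embeddings."""
    h, r, t = entities[head], relations[relation], entities[tail]
    return float(np.linalg.norm(h + r - t))

# With trained embeddings, true triples should score lower than corrupted ones.
print(transe_score("Paris", "capital_of", "France"))
print(transe_score("Paris", "capital_of", "Germany"))
```

Explanation methods for such models, as discussed in the chapter, typically work backwards from these scores to the training triples or structural patterns that most influenced a prediction.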


