
Proxy Tasks and Subjective Measures Can Be Misleading in Evaluating Explainable AI Systems

Posted by: Zana Buçinca
Publication date: 2020
Research field: Informatics Engineering
Paper language: English





Explainable artificially intelligent (XAI) systems form part of sociotechnical systems, e.g., human+AI teams tasked with making decisions. Yet, current XAI systems are rarely evaluated by measuring the performance of human+AI teams on actual decision-making tasks. We conducted two online experiments and one in-person think-aloud study to evaluate two currently common techniques for evaluating XAI systems: (1) using proxy, artificial tasks such as how well humans predict the AI's decision from the given explanations, and (2) using subjective measures of trust and preference as predictors of actual performance. The results of our experiments demonstrate that evaluations with proxy tasks did not predict the results of the evaluations with the actual decision-making tasks. Further, the subjective measures on evaluations with actual decision-making tasks did not predict the objective performance on those same tasks. Our results suggest that by employing misleading evaluation methods, our field may be inadvertently slowing its progress toward developing human+AI teams that can reliably perform better than humans or AIs alone.
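To make the distinction between the two evaluation regimes concrete, here is a minimal sketch with synthetic trial records (the trial setup, accuracy levels, and variable names are invented for illustration, not the authors' analysis code). It contrasts a proxy-task score, how well people simulate the AI, with the actual decision-task score of the human+AI team.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200  # hypothetical number of trials

# Hypothetical trial data: ground truth, the AI's recommendation, the
# participant's guess of the AI's output (proxy task), and the participant's
# final decision made with the AI's help (actual decision-making task).
truth = rng.integers(0, 2, n)
ai_pred = np.where(rng.random(n) < 0.75, truth, 1 - truth)               # ~75% accurate AI
human_guess_of_ai = np.where(rng.random(n) < 0.85, ai_pred, 1 - ai_pred)  # simulatability
final_decision = np.where(rng.random(n) < 0.70, truth, 1 - truth)         # team decision

# Proxy-task score: how well people predict the model's output.
proxy_score = (human_guess_of_ai == ai_pred).mean()

# Actual-task score: how well the human+AI team decides against ground truth.
team_score = (final_decision == truth).mean()

print(f"proxy-task accuracy:    {proxy_score:.2f}")
print(f"decision-task accuracy: {team_score:.2f}")
# The paper's point: a high proxy-task score need not imply a high team score.
```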




Read also

Explainable AI has attracted much research attention in recent years with feature attribution algorithms, which compute feature importance in predictions, becoming increasingly popular. However, there is little analysis of the validity of these algorithms as there is no ground truth in the existing datasets to validate their correctness. In this work, we develop a method to quantitatively evaluate the correctness of XAI algorithms by creating datasets with known explanation ground truth. To this end, we focus on binary classification problems. String datasets are constructed using formal language derived from a grammar. A string is positive if and only if a certain property is fulfilled. Symbols serving as explanation ground truth in a positive string are part of an explanation if and only if they contribute to fulfilling the property. Two popular feature attribution explainers, Local Interpretable Model-agnostic Explanations (LIME) and SHapley Additive exPlanations (SHAP), are used in our experiments. We show that: (1) classification accuracy is positively correlated with explanation accuracy; (2) SHAP provides more accurate explanations than LIME; (3) explanation accuracy is negatively correlated with dataset complexity.
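A rough sketch of this idea follows, using a made-up property ("a string is positive iff it contains the symbol a") and scikit-learn's permutation importance as a stand-in for the LIME/SHAP attributions used in the paper; the actual grammar, property, and explainers differ.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
alphabet = list("abcd")

# Toy dataset: random strings over {a,b,c,d}; a string is positive iff it
# contains at least one 'a' (an invented property, not the paper's grammar).
strings = ["".join(rng.choice(alphabet, size=8)) for _ in range(500)]
X = np.array([[s.count(sym) for sym in alphabet] for s in strings])  # symbol counts
y = np.array([1 if "a" in s else 0 for s in strings])

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Attribution stand-in: permutation importance per symbol feature.
imp = permutation_importance(clf, X, y, n_repeats=10, random_state=0)
for sym, score in zip(alphabet, imp.importances_mean):
    print(f"symbol {sym}: importance {score:.3f}")
# Ground-truth explanation: only 'a' contributes to the property, so a correct
# explainer should concentrate importance on the 'a' feature.
```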
Machine learning methods are growing in relevance for biometrics and personal information processing in domains such as forensics, e-health, recruitment, and e-learning. In these domains, white-box (human-readable) explanations of systems built on machine learning methods can become crucial. Inductive Logic Programming (ILP) is a subfield of symbolic AI aimed at automatically learning declarative theories about the process of data. Learning from Interpretation Transition (LFIT) is an ILP technique that can learn a propositional logic theory equivalent to a given black-box system (under certain conditions). The present work takes a first step toward a general methodology to incorporate accurate declarative explanations into classic machine learning by checking the viability of LFIT in a specific AI application scenario: fair recruitment based on an automatic tool generated with machine learning methods for ranking Curricula Vitae that incorporates soft biometric information (gender and ethnicity). We show the expressiveness of LFIT for this specific problem and propose a scheme that can be applied to other domains.
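To illustrate the general white-box-explanation setup (not LFIT itself), here is a minimal surrogate sketch: a shallow decision tree stands in for the propositional theory LFIT would learn, and the CV features, hidden screening rule, and data are entirely invented.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(1)
n = 1000

# Invented CV features: years of experience, degree level, and a binary
# soft-biometric attribute; the "black-box" screening rule is also invented.
X = np.column_stack([
    rng.integers(0, 15, n),      # years_experience
    rng.integers(0, 3, n),       # degree (0=none, 1=BSc, 2=MSc+)
    rng.integers(0, 2, n),       # sensitive attribute
])
y = ((X[:, 0] >= 5) & (X[:, 1] >= 1)).astype(int)  # hidden screening rule

black_box = GradientBoostingClassifier(random_state=0).fit(X, y)

# White-box surrogate: fit an interpretable model to the black box's outputs
# and read off human-readable rules (LFIT would instead learn a logic theory).
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))
print(export_text(surrogate,
                  feature_names=["years_experience", "degree", "sensitive_attr"]))
```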
John Foley, 2018
The field of information retrieval often works with limited and noisy data in an attempt to classify documents into subjective categories, e.g., relevance, sentiment and controversy. We typically quantify a notion of agreement to understand the difficulty of the labeling task, but when we present final results, we do so using measures that are unaware of agreement or the inherent subjectivity of the task. We propose using user simulation to understand the effect size of this noisy agreement data. By simulating truth and predictions, we can understand the maximum scores a dataset can support: for if a classifier is doing better than a reasonable model of a human, we cannot conclude that it is actually better, but that it may be learning noise present in the dataset. We present a brief case study on controversy detection that concludes that a commonly-used dataset has been exhausted: in order to advance the state-of-the-art, more data must be gathered at the current level of label agreement in order to distinguish between techniques with confidence.
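A small simulation in the spirit of this argument (the agreement rate and setup are illustrative, not taken from the paper): even a perfect classifier cannot score above the annotators' agreement level when it is evaluated against noisy labels.

```python
import numpy as np

rng = np.random.default_rng(0)
n, agreement = 10_000, 0.8  # illustrative agreement rate, not from the paper

truth = rng.integers(0, 2, n)                       # latent "true" labels
flip = rng.random(n) >= agreement                   # annotator disagrees w.p. 0.2
dataset_labels = np.where(flip, 1 - truth, truth)   # what the dataset records

perfect_classifier = truth                          # predicts the latent truth exactly
measured_acc = (perfect_classifier == dataset_labels).mean()

print(f"measured accuracy of a perfect classifier: {measured_acc:.3f}")
# ~0.8: scores above the agreement ceiling suggest a model is fitting label noise.
```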
Knowledge graph embeddings are now a widely adopted approach to knowledge representation in which entities and relationships are embedded in vector spaces. In this chapter, we introduce the reader to the concept of knowledge graph embeddings by explaining what they are, how they can be generated and how they can be evaluated. We summarize the state-of-the-art in this field by describing the approaches that have been introduced to represent knowledge in the vector space. In relation to knowledge representation, we consider the problem of explainability, and discuss models and methods for explaining predictions obtained via knowledge graph embeddings.
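As a concrete illustration of the embedding idea, here is a minimal TransE-style scoring sketch (toy, untrained vectors and invented entities; the chapter surveys many models, of which TransE is only one well-known example).

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 16

# Toy embeddings for entities and relations (randomly initialised here; in
# practice they are trained so that h + r ≈ t holds for true triples).
entities = {name: rng.normal(size=dim)
            for name in ["Paris", "France", "Berlin", "Germany"]}
relations = {"capital_of": rng.normal(size=dim)}

def transe_score(h, r, t):
    """TransE plausibility: the smaller ||h + r - t||, the more plausible the triple."""
    return -np.linalg.norm(entities[h] + relations[r] - entities[t])

# Evaluation sketch: rank candidate tails for the query (Paris, capital_of, ?).
candidates = ["France", "Germany", "Berlin"]
ranked = sorted(candidates,
                key=lambda t: transe_score("Paris", "capital_of", t),
                reverse=True)
print(ranked)  # with trained embeddings, "France" should rank first
```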
Planning for death is not a process in which everyone participates. Yet a lack of planning can have vast impacts on a patient's well-being, the well-being of her family, and the medical community as a whole. Advance Care Planning (ACP) has been a field in the United States for a half-century. Many modern techniques prompting patients to think about end of life (EOL) involve short surveys or questionnaires. Different surveys are targeted to different populations (based on likely disease progression or cultural factors, for instance), are designed with different intentions, and are administered in different ways. There has been recent work using technology to increase the number of people using advance care planning tools. However, modern techniques from machine learning and artificial intelligence could be employed to make additional changes to the current ACP process. In this paper we will discuss some possible ways in which these tools could be applied. We will discuss possible implications of these applications through vignettes of patient scenarios. We hope that this paper will encourage thought about appropriate applications of artificial intelligence in ACP as well as implementation of AI in order to ensure intentions are honored.

