
Crowdsourcing Evaluation of Saliency-based XAI Methods

Posted by: Xiaotian Lu
Publication date: 2021
Research field: Informatics Engineering
Paper language: English





Understanding the reasons behind the predictions made by deep neural networks is critical for gaining human trust in many important applications, which is reflected in the increasing demand for explainable AI (XAI) in recent years. Saliency-based feature attribution methods, which highlight important parts of images that contribute to decisions by classifiers, are often used as XAI methods, especially in the field of computer vision. In order to compare various saliency-based XAI methods quantitatively, several automated evaluation schemes have been proposed; however, there is no guarantee that such automated evaluation metrics correctly evaluate explainability, and a high rating by an automated evaluation scheme does not necessarily mean high explainability for humans. In this study, instead of automated evaluation, we propose a new human-based evaluation scheme using crowdsourcing to evaluate XAI methods. Our method is inspired by a human computation game, Peek-a-boom, and can efficiently compare different XAI methods by exploiting the power of crowds. We evaluate the saliency maps of various XAI methods on two datasets with automated and crowd-based evaluation schemes. Our experiments show that the results of our crowd-based evaluation scheme differ from those of automated evaluation schemes. In addition, we regard the crowd-based evaluation results as ground truths and provide a quantitative performance measure to compare different automated evaluation schemes. We also discuss the impact of crowd workers on the results and show that the varying ability of crowd workers does not significantly impact the results.
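The Peek-a-boom-inspired scheme described above reveals only the image regions a saliency map deems important and asks crowd workers to identify the depicted object. A minimal sketch of such a partial-reveal step, assuming a simple top-k thresholding rule (the function name, the thresholding rule, and the parameter `k` are our own illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def reveal_top_k(image, saliency, k=0.1):
    """Reveal only the top-k fraction of pixels ranked by saliency,
    masking the rest (a Peek-a-boom-style partial reveal)."""
    flat = saliency.ravel()
    n_keep = max(1, int(k * flat.size))
    # value of the n_keep-th largest saliency score
    threshold = np.partition(flat, -n_keep)[-n_keep]
    mask = saliency >= threshold
    # broadcast the mask over color channels if present
    return image * mask[..., None] if image.ndim == 3 else image * mask

# toy example: 4x4 grayscale image, saliency concentrated in one pixel
rng = np.random.default_rng(0)
image = rng.random((4, 4))
saliency = np.zeros((4, 4))
saliency[0, 0] = 1.0
revealed = reveal_top_k(image, saliency, k=1 / 16)
```

A worker who can name the object from such a sparse reveal provides evidence that the saliency map highlights genuinely informative regions.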


Read also

Appearance-based gaze estimation methods that only require an off-the-shelf camera have improved significantly, but they are still not widely used in the human-computer interaction (HCI) community. This is partly because it remains unclear how they perform compared to model-based approaches as well as dominant, special-purpose eye tracking equipment. To address this limitation, we evaluate the performance of state-of-the-art appearance-based gaze estimation for interaction scenarios with and without personal calibration, indoors and outdoors, for different sensing distances, as well as for users with and without glasses. We discuss the obtained findings and their implications for the most important gaze-based applications, namely explicit eye input, attentive user interfaces, gaze-based user modelling, and passive eye monitoring. To democratise the use of appearance-based gaze estimation and interaction in HCI, we finally present OpenGaze (www.opengaze.org), the first software toolkit for appearance-based gaze estimation and interaction.
The head-up display (HUD) is an emerging device which can project information on a transparent screen. The HUD has been used in airplanes and vehicles, and it is usually placed in front of the operator's view. In the case of the vehicle, the driver can see not only various information on the HUD but also the background (driving environment) through the HUD. However, the projected information on the HUD may interfere with the colors in the background because the HUD is transparent. For example, a red message on the HUD will be less noticeable when it overlaps with the red brake light of the vehicle ahead. As a first step toward solving this issue, it is important to be able to evaluate the mutual interference between the information on the HUD and the background. Therefore, this paper proposes a method to evaluate the mutual interference based on saliency. It can be evaluated by comparing the HUD part cut from a saliency map of a measured image with the HUD image.
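The comparison step in that paper amounts to cropping the HUD region out of a saliency map of the measured scene and scoring how salient the projected content remains against the background. A minimal sketch of such a region score (the function name, box encoding, and the mean-saliency score are our own illustrative assumptions, not the paper's exact measure):

```python
import numpy as np

def hud_region_saliency(saliency_map, hud_box):
    """Mean saliency inside the HUD region of a measured scene image.
    hud_box = (top, left, height, width). A lower mean suggests the
    projected HUD content is less noticeable against the background."""
    t, l, h, w = hud_box
    region = saliency_map[t:t + h, l:l + w]
    return float(region.mean())

# toy saliency map: background saliency 0.2, HUD region boosted to 0.8
sal = np.full((100, 200), 0.2)
sal[10:30, 50:150] = 0.8
score = hud_region_saliency(sal, (10, 50, 20, 100))  # → 0.8
```

Comparing this score against the saliency of the HUD image rendered in isolation would indicate how much of its visual prominence survives the overlap with the background.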
We present a novel method for obtaining high-quality, domain-targeted multiple choice questions from crowd workers. Generating these questions can be difficult without trading away originality, relevance or diversity in the answer options. Our method addresses these problems by leveraging a large corpus of domain-specific text and a small set of existing questions. It produces model suggestions for document selection and answer distractor choice which aid the human question generation process. With this method we have assembled SciQ, a dataset of 13.7K multiple choice science exam questions (Dataset available at http://allenai.org/data.html). We demonstrate that the method produces in-domain questions by providing an analysis of this new dataset and by showing that humans cannot distinguish the crowdsourced questions from original questions. When using SciQ as additional training data to existing questions, we observe accuracy improvements on real science exams.
The study reports the performance of Parkinson's disease (PD) patients in operating a Motor-Imagery based Brain-Computer Interface (MI-BCI) and compares three selected pre-processing and classification approaches. The experiment was conducted on 7 PD patients who performed a total of 14 MI-BCI sessions targeting the lower extremities. EEG was recorded during the initial calibration phase of each session, and the specific BCI models were produced using Spectrally weighted Common Spatial Patterns (SpecCSP), Source Power Comodulation (SPoC) and Filter-Bank Common Spatial Patterns (FBCSP) methods. The results showed that FBCSP outperformed SPoC in terms of accuracy, and both SPoC and SpecCSP in terms of the false-positive ratio. The study also demonstrates that PD patients were capable of operating MI-BCI, although with lower accuracy.
Rehabilitation assessment is critical to determine an adequate intervention for a patient. However, current assessment practices mainly rely on therapists' experience, and assessment is infrequently executed due to the limited availability of therapists. In this paper, we identified the needs of therapists to assess patients' functional abilities (e.g. an alternative perspective on assessment with quantitative information on patients' exercise motions). As a result, we developed an intelligent decision support system that can identify salient features of assessment using reinforcement learning to assess the quality of motion and summarize patient-specific analysis. We evaluated this system with seven therapists using a dataset from 15 patients performing three exercises. The evaluation demonstrates that our system is preferred over a traditional system without analysis while presenting more useful information and significantly increasing agreement with therapists' evaluations from 0.6600 to 0.7108 F1-scores ($p <0.05$). We discuss the importance of presenting contextually relevant and salient information and adaptation to develop a human and machine collaborative decision making system.
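The agreement measure reported above is a binary F1-score, the harmonic mean of precision and recall over matched/unmatched assessments. As a reminder of how such a score is computed, a minimal sketch with made-up labels (the label lists are purely illustrative, not the study's data):

```python
def f1_score(y_true, y_pred):
    """Binary F1: harmonic mean of precision and recall."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# hypothetical therapist labels vs. system-assisted labels
reference = [1, 0, 1, 1, 0, 1]
predicted = [1, 0, 1, 0, 0, 1]
print(f1_score(reference, predicted))  # → ~0.857 (precision 1.0, recall 0.75)
```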
