Human collaborators can effectively communicate with their partners to finish a common task by inferring each other's mental states (e.g., goals, beliefs, and desires). Such mind-aware communication minimizes the discrepancy among collaborators' mental states and is crucial to the success of human ad-hoc teaming. We believe that robots collaborating with human users should demonstrate similar pedagogic behavior. Thus, in this paper, we propose a novel explainable AI (XAI) framework for achieving human-like communication in human-robot collaboration, where the robot builds a hierarchical mind model of the human user and generates explanations of its own mind as a form of communication, based on its online Bayesian inference of the user's mental state. To evaluate our framework, we conduct a user study on a real-time human-robot cooking task. Experimental results show that the explanations generated by our approach significantly improve collaboration performance and users' perception of the robot. Code and video demos are available on our project website: https://xfgao.github.io/xCookingWeb/.
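To make the inference step concrete, below is a minimal sketch of the kind of online Bayesian mental-state update the abstract describes; the goal set, the likelihood model, and the discrepancy threshold are hypothetical placeholders for illustration, not the paper's hierarchical mind model or actual implementation.

# A minimal sketch (assumed, not the paper's implementation): online
# Bayesian inference of a user's goal from observed actions, plus an
# explanation trigger based on mental-state discrepancy.

GOALS = ["get_ingredient", "chop_vegetable", "cook_dish"]  # hypothetical subtasks

def likelihood(action: str, goal: str) -> float:
    """P(action | goal): a placeholder model; the paper instead infers
    mental states over a hierarchical task representation."""
    return 0.8 if action.split("_")[0] == goal.split("_")[0] else 0.1

def update_belief(belief: dict, action: str) -> dict:
    """One step of online Bayesian inference:
    P(goal | action) is proportional to P(action | goal) * P(goal)."""
    posterior = {g: belief[g] * likelihood(action, g) for g in GOALS}
    z = sum(posterior.values())
    return {g: p / z for g, p in posterior.items()}

def should_explain(robot_goal: str, belief: dict, threshold: float = 0.5) -> bool:
    """Generate an explanation when the user's inferred belief about the
    robot's current goal drops below a (hypothetical) threshold, i.e.,
    when the mental-state discrepancy is large."""
    return belief[robot_goal] < threshold

# Usage: start from a uniform prior and update as user actions stream in.
belief = {g: 1.0 / len(GOALS) for g in GOALS}
for action in ["get_tomato", "chop_tomato"]:
    belief = update_belief(belief, action)
if should_explain("cook_dish", belief):
    print("Robot explains its current subtask to realign mental states.")

In the full framework, the belief would be maintained over a hierarchical task model rather than a flat goal set, and the triggered explanations would expose the robot's own mind model to the user.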