Providing explanations is an imperative ability for an AI agent in a human-robot teaming framework. The right explanation conveys the rationale behind an AI agent's decision-making. However, to manage the cognitive demand the human teammate incurs in comprehending the provided explanations, prior works have focused on presenting explanations in a specific order or interleaving explanation generation with plan execution. Moreover, these approaches do not consider the degree of detail that should be shared in the provided explanations. In this work, we argue that agent-generated explanations, especially complex ones, should be abstracted to align with the level of detail the human teammate desires, so as to maintain the recipient's cognitive load. Learning such a hierarchical explanation model is a challenging task. Moreover, the agent needs to follow a consistent high-level policy to transfer the learned teammate preferences to a new scenario in which the lower-level detailed plans differ. Our evaluation confirmed that the process of understanding an explanation, especially a complex and detailed one, is hierarchical. The human preferences that reflected this aspect corresponded to creating and employing abstraction for knowledge assimilation, hidden deeper in our cognitive process. We showed that hierarchical explanations achieved better task performance and behavior interpretability while reducing cognitive load. These results shed light on designing explainable agents that utilize reinforcement learning and planning across various domains.
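The core idea of abstracting an explanation to the teammate's preferred level of detail can be illustrated with a minimal sketch. This is not the authors' model; the class and function names (`ExplanationNode`, `explain`, `preferred_depth`) and the toy rationale text are hypothetical, and the sketch only shows how a hierarchy of explanations might be unfolded down to a desired depth:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ExplanationNode:
    """One explanation at a given abstraction level; children add detail."""
    text: str
    children: List["ExplanationNode"] = field(default_factory=list)

def explain(node: ExplanationNode, preferred_depth: int) -> List[str]:
    """Unfold the hierarchy down to the teammate's preferred level of detail."""
    lines = [node.text]
    if preferred_depth > 0:
        for child in node.children:
            lines.extend(explain(child, preferred_depth - 1))
    return lines

# Toy hierarchy: a high-level rationale with two finer-grained sub-steps.
root = ExplanationNode(
    "I took the detour to avoid the blocked corridor.",
    [ExplanationNode("The laser scan showed the corridor obstructed."),
     ExplanationNode("The detour keeps the delivery on schedule.")],
)

print(explain(root, 0))  # abstract rationale only
print(explain(root, 1))  # rationale plus one level of detail
```

A learned teammate preference would then amount to choosing `preferred_depth` (or, more generally, which subtrees to expand), which is what allows the same high-level policy to carry over to a new scenario whose low-level plans differ.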
Generating explanations to justify its behavior is an essential capability for a robotic teammate. Explanations help human partners better understand the situation and maintain trust in their teammates. Prior work on robots generating explanations focu
As AI becomes an integral part of our lives, the development of explainable AI, embodied in the decision-making process of an AI or robotic agent, becomes imperative. For a robotic teammate, the ability to generate explanations to justify its behavio
Prior work on generating explanations in a planning and decision-making context has focused on providing the rationale behind an AI agent's decision making. While these methods provide the right explanations from the explainer's perspective, they fail
Human collaborators can effectively communicate with their partners to finish a common task by inferring each other's mental states (e.g., goals, beliefs, and desires). Such mind-aware communication minimizes the discrepancy among collaborators' mental
Designing robotic tasks for co-manipulation necessitates exploiting not only proprioceptive but also exteroceptive information for improved safety and autonomy. Following this intuition, this research proposes to formulate intuitive robotic tasks foll