With the growing capabilities of intelligent systems, the integration of artificial intelligence (AI) and robots in everyday life is increasing. However, when interacting in such complex human environments, the failure of intelligent systems, such as robots, can be inevitable, requiring recovery assistance from users. In this work, we develop automated, natural language explanations for failures encountered during an AI agent's plan execution. These explanations are developed with a focus on helping non-expert users understand different points of failure so that they can better provide recovery assistance. Specifically, we introduce a context-based information type for explanations that can help non-expert users both understand the underlying cause of a system failure and select proper failure recoveries. Additionally, we extend an existing sequence-to-sequence methodology to automatically generate our context-based explanations. By doing so, we are able to develop a model that generalizes context-based explanations across both different failure types and failure scenarios.
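To make the sequence-to-sequence direction concrete, below is a minimal sketch of the kind of encoder-decoder model the abstract alludes to: one that maps a tokenized failure-state description to a natural-language explanation. The architecture, vocabulary sizes, token contents, and hyperparameters are illustrative assumptions, not the paper's actual setup.

```python
# A hedged, minimal seq2seq sketch: a GRU encoder compresses a symbolic
# failure-state description, and a GRU decoder emits an explanation.
# All names and sizes here are hypothetical placeholders.
import torch
import torch.nn as nn


class Seq2SeqExplainer(nn.Module):
    def __init__(self, src_vocab, tgt_vocab, emb=64, hid=128):
        super().__init__()
        self.src_emb = nn.Embedding(src_vocab, emb)
        self.tgt_emb = nn.Embedding(tgt_vocab, emb)
        self.encoder = nn.GRU(emb, hid, batch_first=True)
        self.decoder = nn.GRU(emb, hid, batch_first=True)
        self.out = nn.Linear(hid, tgt_vocab)

    def forward(self, src, tgt):
        # Encode the failure-state tokens into a context vector.
        _, h = self.encoder(self.src_emb(src))
        # Decode the explanation tokens, conditioned on that context.
        dec_out, _ = self.decoder(self.tgt_emb(tgt), h)
        return self.out(dec_out)  # per-token vocabulary scores


# Hypothetical usage with random token ids standing in for an encoded
# failure scene ("gripper, cup, slip, ...") and a target explanation
# ("the robot failed because ...").
model = Seq2SeqExplainer(src_vocab=50, tgt_vocab=80)
src = torch.randint(0, 50, (1, 10))
tgt = torch.randint(0, 80, (1, 12))
logits = model(src, tgt)  # shape (1, 12, 80)
loss = nn.functional.cross_entropy(logits.view(-1, 80), tgt.view(-1))
```

Generalizing across failure types and scenarios, as the abstract describes, would then amount to training one such model on paired (failure context, explanation) examples drawn from many distinct failures rather than one model per failure type.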