Designing human-centered AI-driven applications requires a deep understanding of how people develop mental models of AI. Currently, we know little about this process and have limited tools to study it. This paper presents the position that AI-based games, particularly their player-AI interaction component, offer an ideal domain for studying how mental models evolve. We present a case study to illustrate the benefits of our approach for explainable AI.
The advent of artificial intelligence (AI) and machine learning (ML) brings human-AI interaction to the forefront of HCI research. This paper argues that games are an ideal domain for studying and experimenting with how humans interact with AI. Through…
Explainability of AI systems is critical for users to take informed actions and hold systems accountable. While opening the opaque box is important, understanding who opens the box can determine whether the human-AI interaction is effective. In this paper, we…
Reinforcement learning AI commonly uses reward/penalty signals that are objective and explicit in an environment (e.g., game score, completion time) in order to learn the optimal strategy for task performance. However, human-AI interaction…
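As a concrete illustration of the explicit reward signal this abstract refers to, below is a minimal tabular Q-learning sketch in Python. The toy environment, its "game score" reward, and all names (step, ACTIONS, etc.) are illustrative assumptions for this sketch, not details drawn from the paper.

```python
import random
from collections import defaultdict

# Minimal tabular Q-learning sketch: the agent learns purely from an
# explicit, objective reward signal (a toy "game score"), the kind of
# signal the abstract describes. Everything here is illustrative.

ACTIONS = [0, 1]                 # e.g., "move left" / "move right"
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2

def step(state, action):
    """Toy environment: states 0..3; reaching state 3 ends the episode."""
    next_state = max(0, min(3, state + (1 if action == 1 else -1)))
    reward = 10.0 if next_state == 3 else -1.0   # explicit score-style reward
    return next_state, reward, next_state == 3

q = defaultdict(float)           # Q[(state, action)] -> estimated return

for episode in range(500):
    state, done = 0, False
    while not done:
        # Epsilon-greedy action selection over the learned Q-values.
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        next_state, reward, done = step(state, action)
        # Q-learning update: move the estimate toward the reward plus the
        # discounted value of the best next action (zero at terminal states).
        target = reward + (0.0 if done else
                           GAMMA * max(q[(next_state, a)] for a in ACTIONS))
        q[(state, action)] += ALPHA * (target - q[(state, action)])
        state = next_state

# The greedy policy the reward signal has induced, state by state.
print({s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(4)})
```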
Effective human-vehicle collaboration requires an appropriate understanding of vehicle behavior for safety and trust. Improving on our prior work by adding a future prediction module, we introduce our framework, called AutoPreview, to enable humans to…
Humans and AI are increasingly interacting and collaborating to accomplish various complex tasks across diverse application domains (e.g., healthcare, transportation, and creative design). Two dynamic, learning entities (AI and human) have…