
When Would You Trust a Robot? A Study on Trust and Theory of Mind in Human-Robot Interactions

Added by Wenxuan Mou
Publication date: 2021
Language: English





Trust is a critical issue in Human-Robot Interaction (HRI), as it lies at the core of a human's willingness to accept and use a non-human agent. Theory of Mind is defined as the ability to understand the beliefs and intentions of others, which may differ from one's own. Evidence from psychology and HRI suggests that trust and Theory of Mind are interconnected and interdependent concepts: the decision to trust another agent must depend on our own representation of that entity's actions, beliefs, and intentions. However, very few works take the robot's Theory of Mind into consideration when studying trust in HRI. In this paper, we investigated whether exposure to a robot's Theory of Mind abilities affects humans' trust towards the robot. To this end, participants played a Price Game with a humanoid robot that was presented as having either a low-level or a high-level Theory of Mind. Specifically, the participants were asked to accept the robot's price evaluations of common objects. The participants' willingness to change their own price judgement of the objects (i.e., to accept the price the robot suggested) was used as the main measure of trust towards the robot. Our experimental results showed that robots presented with high-level Theory of Mind abilities were trusted more than robots presented with low-level Theory of Mind skills.
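The trust measure described above (willingness to adopt the robot's suggested price) lends itself to a simple operationalization. The sketch below is a minimal illustration under our own assumptions; the function and variable names are hypothetical, not taken from the paper.

```python
# Hypothetical sketch of the abstract's trust measure: the fraction of
# objects on which the participant moved to the robot's suggested price.
# Names and the tolerance are illustrative assumptions, not from the paper.

def trust_score(own_prices, robot_prices, final_prices, tol=1e-6):
    """Fraction of disagreeing items where the participant accepted
    the robot's price evaluation."""
    accepted = sum(
        1
        for own, robot, final in zip(own_prices, robot_prices, final_prices)
        if abs(own - robot) > tol and abs(final - robot) < tol
    )
    disagreements = sum(
        1 for own, robot in zip(own_prices, robot_prices) if abs(own - robot) > tol
    )
    return accepted / disagreements if disagreements else 0.0

# Example: the participant accepts the robot's price on 1 of the
# 2 items where they initially disagreed -> 0.5.
print(trust_score([10, 5, 20], [12, 5, 25], [12, 5, 20]))
```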




Related research

Yaohui Guo, Cong Shi (2021)
To facilitate effective human-robot interaction (HRI), trust-aware HRI has been proposed, wherein the robotic agent explicitly considers the human's trust during its planning and decision making. The success of trust-aware HRI depends on the specification of a trust dynamics model and a trust-behavior model. In this study, we proposed a novel trust-behavior model, namely the reverse-psychology model, and compared it against the commonly used disuse model. We examined how the two models affect the robot's optimal policy and the human-robot team performance. Results indicate that the robot will deliberately manipulate the human's trust under the reverse-psychology model. To correct this manipulative behavior, we proposed a trust-seeking reward function that facilitates trust establishment without significantly sacrificing team performance.
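The contrast between the two trust-behavior models can be made concrete with a small sketch. The formulations below are simplified assumptions for illustration only; the paper's actual models and reward function are richer.

```python
# Hedged sketch of the two trust-behavior models named in the abstract.
# In both, the human complies with probability equal to their trust; the
# models differ in what happens when the human does NOT comply.
import random

def human_action_disuse(recommendation, own_choice, trust):
    """Disuse model: a non-complying human simply ignores the robot
    and falls back on their own choice."""
    return recommendation if random.random() < trust else own_choice

def human_action_reverse_psychology(recommendation, opposite, trust):
    """Reverse-psychology model: a non-complying human actively does the
    opposite of the recommendation, which a planner can exploit by
    recommending against its true goal (the manipulation noted above)."""
    return recommendation if random.random() < trust else opposite

def trust_seeking_reward(team_reward, trust_gain, weight=0.5):
    """Illustrative trust-seeking reward: augment the task reward with a
    bonus for trust gained, discouraging manipulative policies."""
    return team_reward + weight * trust_gain
```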
Symbolic motion planning for robots is the process of specifying and planning robot tasks in a discrete space, then carrying them out in a continuous space in a manner that preserves the discrete-level task specifications. Despite progress in symbolic motion planning, many challenges remain, including addressing scalability for multi-robot systems and improving solutions by incorporating human intelligence. In this paper, distributed symbolic motion planning for multi-robot systems is developed to address scalability. More specifically, compositional reasoning approaches are developed to decompose the global planning problem, and atomic propositions for observation, communication, and control are proposed to address inter-robot collision avoidance. To improve solution quality and adaptability, a dynamic, quantitative, and probabilistic human-to-robot trust model is developed to aid this decomposition. Furthermore, a trust-based real-time switching framework is proposed to switch between autonomous and manual motion planning, trading off task safety against efficiency. Deadlock- and livelock-free algorithms are designed to guarantee reachability of goals with a human in the loop. A set of non-trivial multi-robot simulations with direct human input and trust evaluation is provided, demonstrating the successful implementation of the trust-based multi-robot symbolic motion planning methods.
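The trust-based switching framework described above can be illustrated with a minimal sketch. The thresholds and the hysteresis band are our own illustrative assumptions, not values from the paper.

```python
# Hedged sketch of trust-based real-time switching: hand control to the
# human when trust in a robot drops low, and resume autonomous planning
# once trust recovers. Hysteresis (low < high) prevents rapid mode
# oscillation when trust hovers near a single threshold.

AUTONOMOUS, MANUAL = "autonomous", "manual"

def switch_mode(current_mode: str, trust: float,
                low: float = 0.3, high: float = 0.6) -> str:
    if current_mode == AUTONOMOUS and trust < low:
        return MANUAL       # safety: fall back to manual planning
    if current_mode == MANUAL and trust > high:
        return AUTONOMOUS   # efficiency: resume autonomous planning
    return current_mode
```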
We introduce a novel capabilities-based, bi-directional, multi-task trust model that can be used for trust prediction from either a human or a robotic trustor agent. Tasks are represented in terms of their capability requirements, while trustee agents are characterized by their individual capabilities. Trustee agents' capabilities are not deterministic; they are represented by belief distributions. For each task to be executed, a higher level of trust is assigned to trustee agents who have demonstrated that their capabilities exceed the task's requirements. We report the results of an online experiment with 284 participants, revealing that our model outperforms existing models for multi-task trust prediction from a human trustor. We also present simulations of the model for determining trust from a robotic trustor. Our model is useful for control-authority allocation applications that involve human-robot teams.
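The core idea, trust as the probability that a trustee's uncertain capability exceeds a task's requirement, can be sketched directly. The Beta parameterization below is an assumption for illustration; the paper's belief representation may differ.

```python
# Sketch of capabilities-based trust: capability is a belief distribution
# on a 0-1 scale, and trust for a task is P(capability > requirement).
from scipy.stats import beta

def trust_for_task(cap_alpha: float, cap_beta: float,
                   requirement: float) -> float:
    """P(capability > requirement) under a Beta(cap_alpha, cap_beta)
    belief over the trustee's capability."""
    return 1.0 - beta.cdf(requirement, cap_alpha, cap_beta)

# An agent with a strong track record (8 successes, 2 failures) earns
# more trust on a demanding task than a mixed performer (5 and 5).
print(trust_for_task(8, 2, 0.7))  # higher
print(trust_for_task(5, 5, 0.7))  # lower
```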
This paper presents a human-robot-trust-integrated task allocation and motion planning framework for multi-robot systems (MRS) performing a set of tasks concurrently. A set of parallel task specifications is conjuncted with the MRS to synthesize a task allocation automaton. Each transition of the task allocation automaton is associated with the human's total trust in the corresponding robots. Here, the human-robot trust model is constructed as a dynamic Bayesian network (DBN) that considers individual robot performance, a safety coefficient, the human's cognitive workload, and an overall evaluation of the task allocation. Hence, a task allocation path with maximum encoded human-robot trust can be searched for based on the current trust value of each robot in the task allocation automaton. Symbolic motion planning (SMP) is implemented for each robot after it obtains its sequence of actions. The task allocation path can be intermittently updated with this DBN-based trust model. The overall strategy is demonstrated by a simulation with 5 robots and 3 parallel subtask automata.
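As a rough illustration of trust dynamics over the factors listed above, here is a hedged sketch of a single trust-update step. The linear form and all weights are illustrative assumptions; the paper uses a full dynamic Bayesian network.

```python
# Illustrative trust transition in the spirit of the DBN model: trust at
# step t depends on trust at t-1, recent robot performance, a safety
# coefficient, and the human's cognitive workload. All weights are
# assumptions for demonstration only.

def update_trust(prev_trust: float, performance: float,
                 safety: float, workload: float,
                 w=(0.7, 0.2, 0.15, -0.05)) -> float:
    """One transition of the trust dynamics, clipped to [0, 1]."""
    t = (w[0] * prev_trust + w[1] * performance
         + w[2] * safety + w[3] * workload)
    return max(0.0, min(1.0, t))
```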
When cooperating with a human, a robot should not only attend to its environment and task but also develop an understanding of its partner's reasoning. To support its human partner in complex tasks, the robot can share information that it knows. However, simply communicating everything will annoy and distract humans, since they might already be aware of some of it, and not all information is relevant in the current situation. The questions of when and what type of information the human needs are addressed through the concept of Theory of Mind based Communication, which selects information-sharing actions based on an evaluation of relevance and an estimation of human beliefs. We integrate this into a communication assistant to support humans in a cooperative setting and evaluate the performance benefits. We designed a human-robot sushi-making task that is challenging for the human and generates situations where humans are unaware and communication could be beneficial. We evaluate the influence of this human-centric communication concept on performance with a user study. Compared to the condition without information exchange, assisted participants recover from unawareness much earlier. The approach respects the cost of communication and balances interruptions better than other approaches. By providing information adapted to specific situations, the robot does not instruct but enables the human to make good decisions.
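The communication rule described above, share only what is relevant and likely unknown while respecting the cost of interrupting, can be sketched as a simple decision function. All scores and the threshold are illustrative assumptions, not the paper's formulation.

```python
# Hedged sketch of a Theory-of-Mind-based sharing decision: weigh the
# relevance of a fact by the robot's belief that the human does not
# already know it, then compare the expected benefit to the cost of
# interrupting.

def should_share(relevance: float, p_human_knows: float,
                 interruption_cost: float, threshold: float = 0.0) -> bool:
    expected_benefit = relevance * (1.0 - p_human_knows)
    return expected_benefit - interruption_cost > threshold

# Highly relevant fact the human is likely unaware of -> interrupt.
print(should_share(relevance=0.9, p_human_knows=0.1, interruption_cost=0.2))
# The human probably knows it already -> stay quiet.
print(should_share(relevance=0.9, p_human_knows=0.95, interruption_cost=0.2))
```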
