
A Bayesian Framework for Nash Equilibrium Inference in Human-Robot Parallel Play

Posted by Shray Bansal
Publication date: 2020
Research field: Informatics Engineering
Paper language: English





We consider shared-workspace scenarios in which humans and robots act to achieve independent goals, termed parallel play. We model these as general-sum games and construct a framework that uses the Nash equilibrium solution concept to account for the interactive effect of both agents while planning. We find multiple Pareto-optimal equilibria in these tasks. We hypothesize that people act by choosing an equilibrium based on social norms and their personalities. To enable coordination, we infer the equilibrium online using a probabilistic model that includes these two factors and use it to select the robot's action. We apply our approach to a close-proximity pick-and-place task involving a robot and a simulated human with three potential behaviors: defensive, selfish, and norm-following. We show that using a Bayesian approach to infer the equilibrium enables the robot to complete the task with less than half the number of collisions while also reducing the task execution time compared to the best baseline. We also conducted a study with human participants interacting either with other humans or with different robot agents, and observed that our proposed approach performs similarly to human-human parallel play interactions. The code is available at https://github.com/shray/bayes-nash
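
As a rough sketch of the inference step described above (the equilibrium labels, two-action space, per-equilibrium human policies, and prior below are invented for illustration; the authors' actual implementation is in the linked repository), the robot maintains a posterior over candidate equilibria and updates it after each observed human action:

```python
import numpy as np

# Hypothetical equilibrium labels and the human action distribution each one
# prescribes over a toy action set {0: yield, 1: reach}.
EQ_LABELS = ["defensive", "selfish", "norm_following"]
HUMAN_POLICY = {
    "defensive":      np.array([0.8, 0.2]),  # mostly yields
    "selfish":        np.array([0.1, 0.9]),  # mostly reaches
    "norm_following": np.array([0.5, 0.5]),  # follows the shared norm
}

def update_belief(belief, human_action):
    """One Bayesian update: P(eq | a) is proportional to P(a | eq) * P(eq)."""
    lik = np.array([HUMAN_POLICY[eq][human_action] for eq in EQ_LABELS])
    posterior = belief * lik
    return posterior / posterior.sum()

# The prior encodes the paper's two factors, social norms and personality;
# these numbers are placeholders.
belief = np.array([0.25, 0.25, 0.50])

for a in [1, 1, 0, 1]:          # observed human actions
    belief = update_belief(belief, a)

# The robot then plays its own half of the most probable equilibrium.
robot_equilibrium = EQ_LABELS[int(np.argmax(belief))]
print(robot_equilibrium, belief.round(3))
```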




Read also

Prior AI breakthroughs in complex games have focused on either purely adversarial or purely cooperative settings. In contrast, Diplomacy is a game of shifting alliances that involves both cooperation and competition. For this reason, Diplomacy has proven to be a formidable research challenge. In this paper we describe an agent for the no-press variant of Diplomacy that combines supervised learning on human data with one-step lookahead search via regret minimization. Regret minimization techniques have been behind previous AI successes in adversarial games, most notably poker, but have not previously been shown to be successful in large-scale games involving cooperation. We show that our agent greatly exceeds the performance of past no-press Diplomacy bots, is unexploitable by expert humans, and ranks in the top 2% of human players when playing anonymous games on a popular Diplomacy website.
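
For readers unfamiliar with the lookahead machinery, here is a minimal sketch of regret matching, the basic regret-minimization update this abstract refers to (the game, iteration count, and variable names are illustrative; this is not the Diplomacy agent's actual search):

```python
import numpy as np

def regret_matching_strategy(cum_regret):
    """Play each action in proportion to its positive cumulative regret
    (uniform when no action has positive regret)."""
    pos = np.maximum(cum_regret, 0.0)
    total = pos.sum()
    return pos / total if total > 0 else np.full(len(cum_regret), 1.0 / len(cum_regret))

# Self-play on rock-paper-scissors; RPS[i, j] is the row player's payoff.
RPS = np.array([[0., -1., 1.],
                [1., 0., -1.],
                [-1., 1., 0.]])
r1, r2 = np.zeros(3), np.zeros(3)
avg1 = np.zeros(3)
T = 20000
for _ in range(T):
    s1, s2 = regret_matching_strategy(r1), regret_matching_strategy(r2)
    u1 = RPS @ s2            # row player's payoff per pure action
    u2 = -(RPS.T @ s1)       # column player's payoff in the zero-sum game
    r1 += u1 - s1 @ u1       # regret = pure-action payoff minus the mix's payoff
    r2 += u2 - s2 @ u2
    avg1 += s1

print((avg1 / T).round(3))   # converges near [0.333, 0.333, 0.333], the Nash equilibrium
```

In zero-sum games the time-averaged strategies of regret-matching self-play converge to a Nash equilibrium; per the abstract, the Diplomacy agent applies regret minimization of this flavor inside a one-step lookahead search.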
Human collaborators can effectively communicate with their partners to finish a common task by inferring each other's mental states (e.g., goals, beliefs, and desires). Such mind-aware communication minimizes the discrepancy among collaborators' mental states and is crucial to success in human ad-hoc teaming. We believe that robots collaborating with human users should demonstrate similar pedagogic behavior. Thus, in this paper, we propose a novel explainable AI (XAI) framework for achieving human-like communication in human-robot collaborations, where the robot builds a hierarchical mind model of the human user and generates explanations of its own mind as a form of communication based on its online Bayesian inference of the user's mental state. To evaluate our framework, we conduct a user study on a real-time human-robot cooking task. Experimental results show that the explanations generated by our approach significantly improve collaboration performance and user perception of the robot. Code and video demos are available on our project website: https://xfgao.github.io/xCookingWeb/.
Weizhe Chen, Zihan Zhou, Yi Wu (2021)
One practical requirement in solving dynamic games is to ensure that the players play well from any decision point onward. To satisfy this requirement, existing efforts focus on equilibrium refinement, but the scalability and applicability of existing techniques are limited. In this paper, we propose Temporal-Induced Self-Play (TISP), a novel reinforcement-learning-based framework to find strategies with decent performance from any decision point onward. TISP uses belief-space representation, backward induction, policy learning, and non-parametric approximation. Building upon TISP, we design a policy-gradient-based algorithm, TISP-PG. We prove that TISP-based algorithms can find approximate Perfect Bayesian Equilibrium in zero-sum one-sided stochastic Bayesian games with finite horizon. We test TISP-based algorithms in various games, including finitely repeated security games and a grid-world game. The results show that TISP-PG is more scalable than existing mathematical-programming-based methods and significantly outperforms other learning-based methods.
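
The backward-induction backbone that TISP builds on can be illustrated on a plain finite-horizon MDP (TISP itself works in belief space with learned policies; the toy transition and reward arrays here are invented for the sketch):

```python
import numpy as np

def backward_induction(P, R, horizon):
    """Solve a finite-horizon MDP from the last stage backward.
    P has shape (n_actions, n_states, n_states); R has shape (n_states, n_actions)."""
    n_states, n_actions = R.shape
    V = np.zeros(n_states)                       # value after the final stage
    policy = np.zeros((horizon, n_states), dtype=int)
    for t in reversed(range(horizon)):           # later stages are solved first
        Q = R + np.stack([P[a] @ V for a in range(n_actions)], axis=1)
        policy[t] = Q.argmax(axis=1)
        V = Q.max(axis=1)
    return policy, V

# Toy usage: 2 states, 2 actions; action a deterministically moves to state a.
P = np.array([[[1., 0.], [1., 0.]],
              [[0., 1.], [0., 1.]]])
R = np.array([[0., 1.],          # from state 0, moving to state 1 pays 1
              [1., 0.]])         # from state 1, moving to state 0 pays 1
policy, V = backward_induction(P, R, horizon=3)
print(policy)                    # hop between the two states to collect 1 per stage
```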
For a humanoid robot to make eye contact and initiate communication with a human, it is necessary to estimate the human's head position. However, eye contact becomes difficult because of the robot's mechanical delay while the person it is interacting with is moving. Owing to these issues, it is important to predict the head position to mitigate the effect of the delay in the robot's motion. Based on the fact that humans turn their heads before changing direction while walking, we hypothesized that the accuracy of three-dimensional (3D) head-position prediction from the first-person view can be improved by taking the head pose into account. We compared our method with a conventional Kalman-filter-based method and found ours to be more accurate. The experimental results show that considering the head pose helps improve the accuracy of 3D head-position prediction.
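
As context for that comparison, a minimal constant-velocity Kalman filter for 3D position, the style of baseline the abstract mentions, might look as follows (the frame rate and noise covariances are illustrative assumptions; the paper's contribution is to additionally exploit head pose, which this baseline ignores):

```python
import numpy as np

dt = 1.0 / 30.0                                  # assumed 30 Hz observations
F = np.eye(6); F[:3, 3:] = dt * np.eye(3)        # constant-velocity dynamics
H = np.hstack([np.eye(3), np.zeros((3, 3))])     # we measure position only
Q = 1e-4 * np.eye(6)                             # process noise (placeholder)
Rm = 1e-2 * np.eye(3)                            # measurement noise (placeholder)

def kf_step(x, P, z):
    """One predict/update cycle; x = [x, y, z, vx, vy, vz], z = measured position."""
    x = F @ x                                    # predict state
    P = F @ P @ F.T + Q                          # predict covariance
    S = H @ P @ H.T + Rm                         # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)               # Kalman gain
    x = x + K @ (z - H @ x)                      # correct with the measurement
    P = (np.eye(6) - K @ H) @ P
    return x, P

def predict_ahead(x, steps):
    """Extrapolate the head position `steps` frames into the future to
    compensate for the robot's mechanical delay."""
    return (np.linalg.matrix_power(F, steps) @ x)[:3]
```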
Yaohui Guo, X. Jessie Yang (2020)
Trust in automation, or more recently trust in autonomy, has received extensive research attention in the past two decades. The majority of prior literature adopted a snapshot view of trust and typically evaluated trust through questionnaires administered at the end of an experiment. This snapshot view, however, does not acknowledge that trust is a time-variant variable that can strengthen or decay over time. To fill this research gap, the present study aims to model trust dynamics as a human interacts with a robotic agent over time. The underlying premise of the study is that by interacting with a robotic agent and observing its performance over time, a rational human agent will update his/her trust in the robotic agent accordingly. Based on this premise, we develop a personalized trust prediction model based on the Beta distribution and learn its parameters using Bayesian inference. Our proposed model adheres to three major properties of trust dynamics reported in prior empirical studies. We tested the proposed method on an existing dataset involving 39 human participants interacting with four drones in a simulated surveillance mission. The proposed method obtained a root mean square error (RMSE) of 0.072, significantly outperforming existing prediction methods. Moreover, we identified three distinctive types of trust dynamics: the Bayesian decision maker, the oscillator, and the disbeliever. This prediction model can be used for the design of individualized and adaptive technologies.
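
A minimal Beta-Bernoulli reading of such a trust model is sketched below (the prior and the asymmetric success/failure gains are invented placeholders standing in for the personalized parameters the paper learns via Bayesian inference):

```python
import numpy as np

def trust_trajectory(outcomes, alpha0=2.0, beta0=2.0, s=1.0, f=2.0):
    """Trust at each step is the mean of a Beta(alpha, beta) belief updated
    from observed robot successes (1) and failures (0); f > s makes trust
    decay faster after failures than it grows after successes."""
    alpha, beta = alpha0, beta0
    trust = []
    for ok in outcomes:
        if ok:
            alpha += s
        else:
            beta += f
        trust.append(alpha / (alpha + beta))
    return trust

print(np.round(trust_trajectory([1, 1, 0, 1, 1, 0, 1]), 3))
```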
