
Design not Lost in Translation: A Case Study of an Intimate-Space Socially Assistive Robot for Emotion Regulation

Published by Katherine Isbister
Publication date: 2021
Research field: Informatics Engineering
Paper language: English





We present a Research-through-Design case study of the design and development of an intimate-space tangible device, perhaps best understood as a socially assistive robot, aimed at scaffolding children's efforts at emotional regulation. This case study covers the initial research device development, as well as knowledge transfer to a product development company towards translating the research into a workable commercial product that could also serve as a robust research product for field trials. Key contributions to the literature include: 1. sharing of lessons learned from the knowledge transfer process that can be useful to others interested in developing robust products, whether commercial or research, that preserve design values while allowing for large-scale deployment and research; 2. articulation of a design space in HCI/HRI (Human-Robot Interaction) of intimate-space socially assistive robots, with the current artifact as a central exemplar, contextualized alongside other related HRI artifacts.




Read also

This work describes a new human-in-the-loop (HitL) assistive grasping system for individuals with varying levels of physical capability. We investigated the feasibility of using four potential input devices with our assistive grasping system interface, using able-bodied individuals to define a set of quantitative metrics that could be used to assess an assistive grasping system. We then took these measurements and created a generalized benchmark for evaluating the effectiveness of any arbitrary input device to a HitL grasping system. The four input devices were a mouse, a speech recognition device, an assistive switch, and a novel sEMG device developed by our group that was connected either to the forearm or behind the ear of the subject. These preliminary results provide insight into how different interface devices perform for generalized assistive grasping tasks and also highlight the potential of sEMG-based control for severely disabled individuals.
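The abstract above describes collapsing several quantitative measurements into a generalized benchmark score per input device. As a rough illustration only, the Python sketch below normalizes a few hypothetical metrics (completion time, success rate, error count; these names and values are assumptions, not the paper's actual measures) and averages them into an equal-weight composite score per device.

```python
# Hypothetical sketch: aggregating per-device metrics into a single benchmark
# score. Metric names and values are illustrative assumptions, not the
# paper's actual measurements.
from statistics import mean

# Raw results per input device (illustrative values).
results = {
    "mouse":  {"completion_time_s": 12.4, "success_rate": 0.95, "error_count": 1},
    "speech": {"completion_time_s": 18.1, "success_rate": 0.88, "error_count": 3},
    "switch": {"completion_time_s": 22.7, "success_rate": 0.82, "error_count": 4},
    "sEMG":   {"completion_time_s": 15.9, "success_rate": 0.90, "error_count": 2},
}

def normalize(values, lower_is_better=False):
    """Min-max normalize metric values to [0, 1]; flip if lower is better."""
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0
    scores = [(v - lo) / span for v in values]
    return [1.0 - s for s in scores] if lower_is_better else scores

devices = list(results)
time_scores = normalize([results[d]["completion_time_s"] for d in devices], lower_is_better=True)
succ_scores = normalize([results[d]["success_rate"] for d in devices])
err_scores  = normalize([results[d]["error_count"] for d in devices], lower_is_better=True)

# Equal-weight composite benchmark score per device.
for d, scores in zip(devices, zip(time_scores, succ_scores, err_scores)):
    print(f"{d}: benchmark score = {mean(scores):.2f}")
```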
Research on socially assistive robots has the potential to augment and assist physical therapy sessions for patients with neurological and musculoskeletal problems (e.g. stroke). During a physical therapy session, generating personalized feedback is critical to improving patient engagement. However, prior work on socially assistive robotics for physical therapy has mainly utilized pre-defined corrective feedback, even though patients have varying physical and functional abilities. This paper presents an interactive approach for a socially assistive robot that can dynamically select kinematic features for assessing an individual patient's exercises, predict the quality of motion, and provide patient-specific corrective feedback for personalized interaction with a robot exercise coach.
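The paragraph above describes dynamically selecting kinematic features per patient and predicting motion quality. The following is a minimal, hypothetical sketch of that general idea using scikit-learn; the synthetic features, the SelectKBest scoring, and the logistic-regression classifier are assumptions for illustration, not the paper's actual method.

```python
# Hypothetical sketch: per-patient kinematic feature selection followed by
# motion-quality prediction. Data is synthetic; the pipeline is an assumption.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)

# Illustrative per-repetition kinematic features for one patient
# (e.g., joint range of motion, smoothness, trunk compensation).
X = rng.normal(size=(40, 6))                    # 40 exercise repetitions, 6 features
y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)   # 1 = good-quality motion (synthetic label)

# Select the most informative features for this patient, then fit a classifier.
model = make_pipeline(SelectKBest(f_classif, k=3), LogisticRegression())
model.fit(X, y)

# Predict the quality of a new repetition; a low prediction could trigger
# patient-specific corrective feedback from the robot coach.
new_rep = rng.normal(size=(1, 6))
print("predicted quality (1 = good):", model.predict(new_rep)[0])
```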
Socially Assistive Robots (SARs) offer great promise for improving outcomes in paediatric rehabilitation. However, the design of software and interactive capabilities for SARs must be carefully considered in the context of their intended clinical use. While previous work has explored specific roles and functionalities to support paediatric rehabilitation, few have considered the design of such capabilities in the context of ongoing clinical deployment. In this paper we present a two-phase in-situ design process for SARs in health care, emphasising stakeholder engagement and on-site development. We explore this in the context of developing the humanoid social robot NAO as a socially assistive rehabilitation aid for children with cerebral palsy. We present and evaluate our design process, outcomes achieved, and preliminary results from ongoing clinical testing with 9 patients and 5 therapists over 14 sessions. We argue that our in-situ design methodology has been central to the rapid and successful deployment of our system.
Yaohui Guo, X. Jessie Yang (2020)
Trust in automation, or more recently trust in autonomy, has received extensive research attention in the past two decades. The majority of prior literature adopted a snapshot view of trust and typically evaluated trust through questionnaires administered at the end of an experiment. This snapshot view, however, does not acknowledge that trust is a time-variant variable that can strengthen or decay over time. To fill the research gap, the present study aims to model trust dynamics when a human interacts with a robotic agent over time. The underlying premise of the study is that by interacting with a robotic agent and observing its performance over time, a rational human agent will update his/her trust in the robotic agent accordingly. Based on this premise, we develop a personalized trust prediction model based on the Beta distribution and learn its parameters using Bayesian inference. Our proposed model adheres to three major properties of trust dynamics reported in prior empirical studies. We tested the proposed method using an existing dataset involving 39 human participants interacting with four drones in a simulated surveillance mission. The proposed method obtained a Root Mean Square Error (RMSE) of 0.072, significantly outperforming existing prediction methods. Moreover, we identified three distinctive types of trust dynamics: the Bayesian decision maker, the oscillator, and the disbeliever. This prediction model can be used for the design of individualized and adaptive technologies.
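The abstract above builds a trust model on the Beta distribution with parameters learned by Bayesian inference. As a much-simplified sketch of that idea (not the authors' full model), the snippet below maintains a Beta belief over the robot's reliability, applies the standard Beta-Bernoulli conjugate update after each observed success or failure, and reports the posterior mean as the current trust estimate.

```python
# Simplified, assumption-laden sketch of Beta-distribution trust dynamics:
# trust is the posterior mean of a Beta(alpha, beta) belief over the robot's
# reliability, updated after each observed success or failure.
class BetaTrustModel:
    def __init__(self, alpha=1.0, beta=1.0):
        # Beta(1, 1) is a uniform prior over the robot's reliability.
        self.alpha = alpha
        self.beta = beta

    def update(self, success: bool):
        # Standard Beta-Bernoulli conjugate update.
        if success:
            self.alpha += 1.0
        else:
            self.beta += 1.0

    @property
    def trust(self) -> float:
        # Point estimate of trust: the posterior mean of the Beta belief.
        return self.alpha / (self.alpha + self.beta)

# Example: trust strengthens with successes and decays after failures.
model = BetaTrustModel()
for outcome in [True, True, False, True, False, False]:
    model.update(outcome)
    print(f"observed {'success' if outcome else 'failure'}, trust = {model.trust:.2f}")
```

In this simplified view, repeated successes pull the estimate toward 1 and failures pull it toward 0, which matches the paper's premise that trust strengthens or decays as the human observes the robot's performance over time.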
Robots may soon play a role in higher education by augmenting learning environments and managing interactions between instructors and learners. Little, however, is known about how the presence of robots in the learning environment will influence academic integrity. This study therefore investigates if and how college students cheat while engaged in a collaborative sorting task with a robot. We employed a 2x2 factorial design to examine the effects of cheating exposure (exposure to cheating or no exposure) and task clarity (clear or vague rules) on college students' cheating behaviors while interacting with a robot. Our study finds that prior exposure to cheating on the task significantly increases the likelihood of cheating. Yet, the tendency to cheat was not impacted by the clarity of the task rules. These results suggest that normative behavior by classmates may strongly influence the decision to cheat while engaged in an instructional experience with a robot.