
Teaching Turn-Taking Skills to Children with Autism using a Parrot-Like Robot

Posted by: Pegah Soleiman
Publication date: 2021
Research field: Informatics Engineering
Paper language: English
Author: Pegah Soleiman





Robot-Assisted Therapy is a new paradigm in many therapies, such as the therapy of children with autism spectrum disorder. In this paper we present the use of a parrot-like robot as an assistive tool in turn-taking therapy. The therapy is designed in the form of a card game played between a child with autism and either a therapist or the robot. The intervention was implemented in a single-subject study format, and effect sizes were calculated for the different turn-taking variables. The results show that the child-robot interaction produced a larger effect size than the child-trainer interaction for most of the turn-taking variables. Furthermore, the therapist's point of view on the proposed Robot-Assisted Therapy was evaluated using a questionnaire. The therapist believes that the robot is appealing to children, which may ease the therapy process, and suggested adding further functionalities and games so that children with autism can learn more turn-taking tasks and better generalize what they have learned.
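The abstract does not state which effect-size statistic was used, so the sketch below is only an illustration of how phase-based effect sizes are commonly computed for single-subject designs: a standardized mean difference and the percentage of non-overlapping data (PND). All session scores and variable names are hypothetical placeholders, not data from the study.

```python
# Illustrative only: two effect-size measures commonly reported for
# single-subject designs. The session scores below are made-up placeholders.
import statistics

def cohens_d(baseline, intervention):
    """Standardized mean difference between phases, using a pooled SD."""
    n1, n2 = len(baseline), len(intervention)
    s1, s2 = statistics.stdev(baseline), statistics.stdev(intervention)
    pooled_sd = (((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)) ** 0.5
    return (statistics.mean(intervention) - statistics.mean(baseline)) / pooled_sd

def pnd(baseline, intervention):
    """Percentage of intervention points exceeding the best baseline point."""
    return 100.0 * sum(x > max(baseline) for x in intervention) / len(intervention)

# Hypothetical per-session counts of successful turn-taking exchanges.
baseline_sessions      = [2, 3, 2, 4]      # before intervention
child_robot_sessions   = [6, 7, 8, 8, 9]   # sessions with the robot
child_trainer_sessions = [5, 5, 6, 7, 6]   # sessions with the therapist

for label, phase in [("child-robot", child_robot_sessions),
                     ("child-trainer", child_trainer_sessions)]:
    print(label,
          "d =", round(cohens_d(baseline_sessions, phase), 2),
          "PND =", round(pnd(baseline_sessions, phase), 1), "%")
```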




Read also

Autism spectrum disorder (ASD) is a developmental disorder that affects a person's communication and social behavior, such that people on the spectrum have difficulty perceiving other people's facial expressions as well as presenting and communicating emotions and affect through their own faces and bodies. Some efforts have been made to predict and improve the affect states of children with ASD in play therapy, a common method for improving children's social skills through play and games. However, many previous works only used models pre-trained on benchmark emotion datasets and failed to consider the differences in emotional expression between typically developing children and children with autism. In this paper, we present an open-source, two-stage multi-modal approach that leverages acoustic and visual cues to predict three main affect states of children with ASD (positive, negative, and neutral) in real-world play therapy scenarios, achieving an overall accuracy of 72.40%. This work presents a novel way to combine human expertise and machine intelligence for ASD affect recognition by proposing a two-stage schema.
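The abstract above gives no implementation detail for the two-stage multi-modal approach; the following is a minimal sketch of one common realization, weighted late fusion of per-modality class probabilities, under the assumption that a first stage has already produced those probabilities. The fusion weight and the scores are made up for illustration and are not from the paper.

```python
# Minimal late-fusion sketch (not the authors' released code): stage one is
# assumed to output per-modality class probabilities; stage two combines them.
# Classes follow the abstract: positive, negative, neutral.
import numpy as np

CLASSES = ["positive", "negative", "neutral"]

def fuse(acoustic_probs: np.ndarray, visual_probs: np.ndarray,
         acoustic_weight: float = 0.4) -> str:
    """Weighted combination of the two modality-specific predictions."""
    combined = acoustic_weight * acoustic_probs + (1 - acoustic_weight) * visual_probs
    return CLASSES[int(np.argmax(combined))]

# Hypothetical stage-one outputs for one video segment.
acoustic = np.array([0.3, 0.1, 0.6])   # e.g. from a speech-emotion model
visual   = np.array([0.7, 0.1, 0.2])   # e.g. from a facial-expression model
print(fuse(acoustic, visual))          # -> "positive" with these weights
```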
Recent work has shown results on learning navigation policies for idealized cylinder agents in simulation and transferring them to real wheeled robots. Deploying such navigation policies on legged robots can be challenging due to their complex dynamics, and the large dynamical difference between cylinder agents and legged systems. In this work, we learn hierarchical navigation policies that account for the low-level dynamics of legged robots, such as maximum speed, slipping, contacts, and learn to successfully navigate cluttered indoor environments. To enable transfer of policies learned in simulation to new legged robots and hardware, we learn dynamics-aware navigation policies across multiple robots with robot-specific embeddings. The learned embedding is optimized on new robots, while the rest of the policy is kept fixed, allowing for quick adaptation. We train our policies across three legged robots in simulation - 2 quadrupeds (A1, AlienGo) and a hexapod (Daisy). At test time, we study the performance of our learned policy on two new legged robots in simulation (Laikago, 4-legged Daisy), and one real-world quadrupedal robot (A1). Our experiments show that our learned policy can sample-efficiently generalize to previously unseen robots, and enable sim-to-real transfer of navigation policies for legged robots.
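As a rough illustration of the adaptation idea described above (keep the shared policy fixed, optimize only the new robot's embedding), here is a minimal PyTorch sketch. The network sizes, the placeholder regression loss, and all tensors are assumptions made for the example; the actual system presumably uses its own architecture and a reinforcement-learning objective.

```python
# Sketch (not the authors' code): freeze a shared embedding-conditioned policy
# and optimize only the robot embedding when adapting to a new robot.
import torch
import torch.nn as nn

class EmbeddingConditionedPolicy(nn.Module):
    def __init__(self, obs_dim=32, embed_dim=8, act_dim=2, num_robots=4):
        super().__init__()
        self.robot_embedding = nn.Embedding(num_robots, embed_dim)
        self.policy = nn.Sequential(
            nn.Linear(obs_dim + embed_dim, 64), nn.ReLU(),
            nn.Linear(64, act_dim),
        )

    def forward(self, obs, robot_id):
        z = self.robot_embedding(robot_id)          # robot-specific embedding
        return self.policy(torch.cat([obs, z], dim=-1))

model = EmbeddingConditionedPolicy()

# Adaptation: freeze the policy weights, train only the embedding table
# (only the new robot's row receives gradients).
for p in model.policy.parameters():
    p.requires_grad = False
optimizer = torch.optim.Adam(model.robot_embedding.parameters(), lr=1e-3)

obs = torch.randn(16, 32)            # placeholder observations
robot_id = torch.full((16,), 3)      # index of the new robot
target_actions = torch.randn(16, 2)  # placeholder supervision signal
loss = nn.functional.mse_loss(model(obs, robot_id), target_actions)
loss.backward()
optimizer.step()
```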
Robot task execution when situated in real-world environments is fragile. As such, robot architectures must rely on robust error recovery, adding non-trivial complexity to highly-complex robot systems. To handle this complexity in development, we introduce Recovery-Driven Development (RDD), an iterative task scripting process that facilitates rapid task and recovery development by leveraging hierarchical specification, separation of nominal task and recovery development, and situated testing. We validate our approach with our challenge-winning mobile manipulator software architecture developed using RDD for the FetchIt! Challenge at the IEEE 2019 International Conference on Robotics and Automation. We attribute the success of our system to the level of robustness achieved using RDD, and conclude with lessons learned for developing such systems.
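The abstract gives no code, but the separation it describes between nominal task scripting and recovery development can be sketched roughly as below. Step and recovery names are hypothetical and not taken from the FetchIt! system.

```python
# Illustrative sketch of the separation RDD describes: nominal steps are
# scripted independently of recovery handlers, which are looked up on failure.

class StepFailed(Exception):
    pass

def run_task(steps, recoveries, max_retries=2):
    """Run nominal steps in order; on failure, run the step's recovery and retry."""
    for step in steps:
        for attempt in range(max_retries + 1):
            try:
                step()
                break
            except StepFailed:
                recovery = recoveries.get(step.__name__)
                if recovery is None or attempt == max_retries:
                    raise
                recovery()   # e.g. re-localize, reopen gripper, back up

def grasp_part():
    print("grasping part")                 # nominal behavior (placeholder)

def place_in_caddy():
    print("placing part in caddy")         # nominal behavior (placeholder)

def regrasp_recovery():
    print("reopening gripper and retrying")  # recovery behavior, developed separately

run_task(
    steps=[grasp_part, place_in_caddy],
    recoveries={"grasp_part": regrasp_recovery},
)
```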
With the fast development of network information technology, more and more people are immersed in the virtual community environment brought by the network, ignoring social interaction in real life. The resulting urban autism problem has become more and more serious. To address this problem by promoting offline communication between people and alleviating loneliness through emotional communication between pet robots and their owners, we have developed a design called Tom. Tom is a smart pet robot with a pet-robot-based social mechanism called Tom-Talker. The main contribution of this paper is to propose the Tom-Talker social mechanism, which encourages users to socialize offline and includes a corresponding reward mechanism and a friend recommendation algorithm. We also propose the pet robot Tom with an emotional interaction algorithm that recognizes users' emotions, simulates animal emotions, and communicates emotionally with users. This paper designs experiments and analyzes the results, which show that our pet robot has a good effect on alleviating urban autism problems.
Yuan Gao (2018)
Deep reinforcement learning has recently been widely applied in robotics to study tasks such as locomotion and grasping, but its application to social human-robot interaction (HRI) remains a challenge. In this paper, we present a deep learning scheme that acquires a prior model of robot approaching behavior in simulation and applies it to real-world interaction with a physical robot approaching groups of humans. The scheme, which we refer to as Staged Social Behavior Learning (SSBL), considers different stages of learning in social scenarios. We learn robot approaching behaviors towards small groups in simulation and evaluate the performance of the model using objective and subjective measures in a perceptual study and an HRI user study with human participants. Results show that our model generates more socially appropriate behavior compared to a state-of-the-art model.