
Socially Impaired Robots: Human Social Disorders and Robots Socio-Emotional Intelligence

Posted by Jonathan Vitale
Publication date: 2016
Research field: Informatics Engineering
Paper language: English





Social robots need intelligence in order to safely coexist and interact with humans. Robots that lack the functional ability to understand others and to empathise may pose a societal risk, and they may lead to a society of socially impaired robots. In this work we provide a survey of three relevant human social disorders, namely autism, psychopathy and schizophrenia, as a means to gain a better understanding of social robots' future capability requirements. We provide evidence supporting the idea that social robots will require a combination of emotional intelligence and social intelligence, namely socio-emotional intelligence. We argue that a robot with a simple socio-emotional process requires a simulation-driven model of intelligence. Finally, we provide some critical guidelines for designing future socio-emotional robots.



Read also

Physical embodiment is a required component for robots that are structurally coupled with their real-world environments. However, most socially interactive robots do not need to physically interact with their environments in order to perform their tasks. When and why should embodied robots be used instead of simpler and cheaper virtual agents? This paper reviews the existing work that explores the role of physical embodiment in socially interactive robots. This class consists of robots that are not only capable of engaging in social interaction with humans, but are using primarily their social capabilities to perform their desired functions. Socially interactive robots provide entertainment, information, and/or assistance; this last category is typically encompassed by socially assistive robotics. In all cases, such robots can achieve their primary functions without performing functional physical work. To comprehensively evaluate the existing body of work on embodiment, we first review work from established related fields including psychology, philosophy, and sociology. We then systematically review 65 studies evaluating aspects of embodiment published from 2003 to 2017 in major peer-reviewed robotics publication venues. We examine relevant aspects of the selected studies, focusing on the embodiments compared, tasks evaluated, social roles of robots, and measurements. We introduce three taxonomies for the types of robot embodiment, robot social roles, and human-robot tasks. These taxonomies are used to deconstruct the design and interaction spaces of socially interactive robots and facilitate analysis and discussion of the reviewed studies. We use this newly-defined methodology to critically discuss existing works, revealing topics within embodiment research for social interaction, assistive robotics, and service robotics.
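
The three taxonomies mentioned above (embodiment type, social role, and task) suggest a simple coding scheme for the reviewed studies. Below is a minimal sketch in Python; the specific category labels are hypothetical placeholders, since the abstract does not enumerate the members of each taxonomy.

```python
from dataclasses import dataclass
from enum import Enum, auto

# Hypothetical category labels: the abstract names three taxonomies
# (embodiment, social role, task) but does not list their members.
class Embodiment(Enum):
    PHYSICAL_COPRESENT = auto()   # robot shares the user's space
    TELEPRESENT = auto()          # physical robot viewed remotely
    VIRTUAL_AGENT = auto()        # on-screen agent only

class SocialRole(Enum):
    COACH = auto()
    COMPANION = auto()
    ASSISTANT = auto()

class Task(Enum):
    ENTERTAINMENT = auto()
    INFORMATION = auto()
    ASSISTANCE = auto()

@dataclass
class Study:
    """One reviewed study, coded along the three taxonomies."""
    citation: str
    embodiment: Embodiment
    role: SocialRole
    task: Task

def count_by_embodiment(studies: list[Study]) -> dict[Embodiment, int]:
    """Tally how many coded studies fall under each embodiment type."""
    counts = {e: 0 for e in Embodiment}
    for s in studies:
        counts[s.embodiment] += 1
    return counts
```
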
In this paper, we investigate the roles that social robots can take in physical exercise with human partners. In related work, robots or virtual intelligent agents take the role of a coach or instructor, whereas in other approaches they are used as motivational aids. These are two paradigms, so to speak, within the small but growing area of robots for social exercise. We designed an online questionnaire to test whether the preferred role in which people want to see robots would be the companion or the coach. The questionnaire asks people to imagine working out with a robot with the help of three established questionnaires: (1) the CART-Q, which is used for judging coach-athlete relationships, (2) the mind perception questionnaire and (3) the System Usability Scale (SUS). We present the methodology, some preliminary results as well as our intended future work on personal robots for coaching.
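
One of the instruments above, the System Usability Scale, has a standard scoring rule: each odd-numbered item contributes (response - 1), each even-numbered item contributes (5 - response), and the sum is multiplied by 2.5 to yield a 0-100 score. A minimal sketch of that calculation:

```python
def sus_score(responses):
    """Compute the System Usability Scale score from ten 1-5 Likert responses.

    Odd-numbered items (1-indexed) contribute (response - 1), even-numbered
    items contribute (5 - response); the sum is scaled by 2.5 to 0-100.
    """
    if len(responses) != 10:
        raise ValueError("SUS requires exactly 10 item responses")
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5

# Example: a uniformly neutral response pattern scores 50.
print(sus_score([3] * 10))  # 50.0
```
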
Learning from demonstration (LfD) is commonly considered to be a natural and intuitive way to allow novice users to teach motor skills to robots. However, it is important to acknowledge that the effectiveness of LfD is heavily dependent on the quality of teaching, something that may not be assured with novices. It remains an open question as to the most effective way of guiding demonstrators to produce informative demonstrations beyond ad hoc advice for specific teaching tasks. To this end, this paper investigates the use of machine teaching to derive an index for determining the quality of demonstrations and evaluates its use in guiding and training novices to become better teachers. Experiments with a simple learner robot suggest that guidance and training of teachers through the proposed approach can lead to up to a 66.5% decrease in error in the learnt skill.
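
The abstract does not specify how the quality index is constructed; one natural machine-teaching-style proxy is to compare the policy a learner recovers from the novice's demonstrations against the policy recovered from a reference (machine-generated) teaching set. The sketch below illustrates that idea for a toy least-squares learner; the index and all names are hypothetical and not the paper's actual method.

```python
import numpy as np

def learn_linear_policy(states, actions):
    """Least-squares fit of a linear mapping from states to actions."""
    W, *_ = np.linalg.lstsq(states, actions, rcond=None)
    return W

def demonstration_quality(demo_states, demo_actions, ref_states, ref_actions):
    """Hypothetical quality index: similarity between the policy learnt from
    the novice's demonstrations and the policy learnt from a reference
    (machine-teaching) demonstration set. 1.0 = identical, lower = worse.
    This is an illustrative proxy, not the index proposed in the paper."""
    W_demo = learn_linear_policy(demo_states, demo_actions)
    W_ref = learn_linear_policy(ref_states, ref_actions)
    err = np.linalg.norm(W_demo - W_ref) / (np.linalg.norm(W_ref) + 1e-9)
    return 1.0 / (1.0 + err)
```
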
Robotic materials are multi-robot systems formulated to leverage the low-order computation and actuation of the constituents to manipulate the high-order behavior of the entire material. We study the behaviors of ensembles composed of smart active particles, smarticles. Smarticles are small, low-cost robots equipped with basic actuation and sensing abilities that are individually incapable of rotating or displacing. We demonstrate that a supersmarticle, composed of many smarticles constrained within a bounding membrane, can harness the internal collisions of the robotic material among the constituents and the membrane to achieve diffusive locomotion. The emergent diffusion can be directed by modulating the robotic material properties in response to a light source, analogous to biological phototaxis. The light source introduces asymmetries within the robotic material, resulting in modified populations of interaction modes and dynamics which ultimately result in supersmarticle biased locomotion. We present experimental methods and results for the robotic material which moves with a directed displacement in response to a light source.
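
The directed diffusion described above can be pictured with a toy biased random walk: internal collisions produce isotropic diffusive steps, and the light-induced asymmetry adds a small drift. A minimal sketch under that simplification, with all parameters hypothetical:

```python
import numpy as np

def simulate_supersmarticle(steps=1000, light_dir=(1.0, 0.0), bias=0.2,
                            step_scale=0.01, seed=0):
    """Toy model of supersmarticle locomotion: isotropic diffusive steps from
    internal collisions, plus a small drift toward the light source when the
    light-induced asymmetry (bias) is non-zero. Parameters are illustrative."""
    rng = np.random.default_rng(seed)
    light = np.asarray(light_dir) / np.linalg.norm(light_dir)
    pos = np.zeros(2)
    trajectory = [pos.copy()]
    for _ in range(steps):
        diffusive = rng.normal(scale=step_scale, size=2)   # collision-driven kick
        drift = bias * step_scale * light                  # light-induced asymmetry
        pos = pos + diffusive + drift
        trajectory.append(pos.copy())
    return np.array(trajectory)

# With bias=0 the displacement is purely diffusive; with bias>0 the ensemble
# drifts toward the light, mimicking the reported phototaxis-like behaviour.
```
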
When mobile robots maneuver near people, they run the risk of rudely blocking their paths; but not all people behave the same around robots. People who have not noticed the robot are the most difficult to predict. This paper investigates how mobile robots can generate acceptable paths in dynamic environments by predicting human behavior. Here, human behavior may include both physical and mental behavior; we focus on the latter. We introduce a simple safe interaction model: when a human seems unaware of the robot, it should avoid going too close. In this study, people around robots are detected and tracked using sensor fusion and filtering techniques. To handle uncertainties in the dynamic environment, a Partially Observable Markov Decision Process (POMDP) is used to formulate a navigation planning problem in the shared environment. People's awareness of robots is inferred and included as a state and reward model in the POMDP. The proposed planner enables a robot to change its navigation plan based on its perception of each person's robot-awareness. As far as we can tell, this is a new capability. We conduct simulation and experiments using the Toyota Human Support Robot (HSR) to validate our approach. We demonstrate that the proposed framework is capable of running in real-time.
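
The key modelling idea above is to treat each person's awareness of the robot as a latent state over which the robot maintains a belief, and to penalise close approaches weighted by the probability that the person has not noticed the robot. A minimal sketch of that belief update and reward term, with all quantities hypothetical rather than taken from the paper:

```python
# Latent human state: unaware (0) or aware (1) of the robot.
# The robot never observes this directly; it maintains a belief over it.

def update_awareness_belief(belief_aware, observation_likelihoods):
    """Bayesian belief update. observation_likelihoods = (p(obs|unaware), p(obs|aware)),
    e.g. derived from gaze direction or reaction to the robot (illustrative)."""
    p_unaware, p_aware = observation_likelihoods
    post_aware = p_aware * belief_aware
    post_unaware = p_unaware * (1.0 - belief_aware)
    return post_aware / (post_aware + post_unaware + 1e-12)

def reward(distance_to_person, belief_aware, goal_progress,
           safe_distance=1.5, proximity_penalty=10.0):
    """Expected reward: progress toward the goal, minus a proximity penalty
    weighted by the probability the person has NOT noticed the robot."""
    penalty = 0.0
    if distance_to_person < safe_distance:
        penalty = proximity_penalty * (1.0 - belief_aware)
    return goal_progress - penalty

# Example: a person looking away (low p(obs|aware)) keeps belief_aware low,
# so the planner is rewarded for detouring rather than passing close by.
b = update_awareness_belief(0.5, observation_likelihoods=(0.8, 0.2))
print(round(b, 2))  # 0.2
```
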