
The Power of Color: A Study on the Effective Use of Colored Light in Human-Robot Interaction

Published by: Aljoscha Pörtner
Publication date: 2018
Research field: Informatics Engineering
Paper language: English

At a time of increasingly complex interaction techniques, we highlight the power of colored light as a simple and cheap feedback mechanism. Since it is visible over a distance and does not interfere with other modalities, it is especially interesting for mobile robots. In an online survey, we asked 56 participants to choose the most appropriate colors for scenarios presented in the form of videos. In these scenarios, a mobile robot carried out tasks: in some it succeeded, in others it failed because the task was not feasible, and in others it stopped to wait for help. We analyze how color preferences differ between these three categories. The results show a connection between colors and meanings, and that how clear the color preference is for a given category depends on the participants' technical affinity, experience with robots, and gender. Finally, we found that a participant's favorite color is not related to his or her color preferences for the scenarios.
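
To make the idea concrete, here is a minimal sketch of how a mobile robot could map its task state to an RGB feedback light. This is not the authors' implementation, and the specific color assignments below are hypothetical placeholders; which colors users actually prefer for each category is precisely what the study investigates.

from enum import Enum

class TaskState(Enum):
    SUCCESS = "success"        # task completed
    INFEASIBLE = "infeasible"  # task failed because it is not feasible
    WAITING = "waiting"        # robot stopped and is waiting for help

# Hypothetical state-to-RGB mapping (0-255 per channel); placeholder values only.
STATE_COLORS = {
    TaskState.SUCCESS: (0, 255, 0),     # green
    TaskState.INFEASIBLE: (255, 0, 0),  # red
    TaskState.WAITING: (255, 191, 0),   # amber
}

def set_feedback_light(state, set_rgb):
    # Drive an RGB light through a user-supplied set_rgb(r, g, b) callback,
    # so the sketch stays independent of any particular LED hardware API.
    r, g, b = STATE_COLORS[state]
    set_rgb(r, g, b)

# Example: print instead of driving real hardware.
set_feedback_light(TaskState.WAITING, lambda r, g, b: print(f"LED -> ({r}, {g}, {b})"))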

Read also

We design and develop a new shared Augmented Reality (AR) workspace for Human-Robot Interaction (HRI), which establishes bi-directional communication between human agents and robots. In a prototype system, the shared AR workspace enables shared perception: a physical robot not only perceives the virtual elements in its own view but also infers the utility of the human agent (the cost needed to perceive and interact in AR) by sensing the human agent's gaze and pose. This new HRI design also affords shared manipulation, wherein the physical robot can control and alter virtual objects in AR as an active agent; crucially, the robot can proactively interact with human agents instead of purely passively executing received commands. In experiments, we design a resource collection game that qualitatively demonstrates how a robot perceives, processes, and manipulates in AR, and quantitatively evaluates the efficacy of HRI using the shared AR workspace. We further discuss how the system can benefit future HRI studies that would otherwise be challenging.
This record contains the proceedings of the 2020 Workshop on Assessing, Explaining, and Conveying Robot Proficiency for Human-Robot Teaming, which was held in conjunction with the 2020 ACM/IEEE International Conference on Human-Robot Interaction (HRI). This workshop was originally scheduled to occur in Cambridge, UK on March 23, but was moved to a set of online talks due to the COVID-19 pandemic.
We present the Human And Robot Multimodal Observations of Natural Interactive Collaboration (HARMONIC) data set. This is a large multimodal data set of human interactions with a robotic arm in a shared autonomy setting designed to imitate assistive eating. The data set provides human, robot, and environmental data views of twenty-four different people engaged in an assistive eating task with a 6 degree-of-freedom (DOF) robot arm. From each participant, we recorded video of both eyes, egocentric video from a head-mounted camera, joystick commands, electromyography from the forearm used to operate the joystick, third-person stereo video, and the joint positions of the 6-DOF robot arm. Also included are several features derived directly from these recordings, such as eye gaze projected onto the egocentric video, body pose, hand pose, and facial keypoints. These data streams were collected specifically because they have been shown to be closely related to human mental states and intention. The data set should interest researchers studying intention prediction, human mental state modeling, and shared autonomy. Data streams are provided in a variety of formats, such as video and human-readable CSV and YAML files.
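
As a rough illustration of how such per-stream recordings might be consumed, the sketch below reads two hypothetical CSV streams and aligns them by timestamp. The file names and the presence of a timestamp column are assumptions made for illustration; the actual schema is whatever the data set's documentation specifies.

import csv

def load_stream(path):
    # Read a CSV stream into (timestamp, row) pairs; assumes a 'timestamp' column.
    with open(path, newline="") as f:
        return [(float(row["timestamp"]), row) for row in csv.DictReader(f)]

def nearest(stream, t):
    # Return the sample whose timestamp is closest to time t.
    return min(stream, key=lambda sample: abs(sample[0] - t))

# Hypothetical usage: pair each joystick command with the closest joint reading.
joystick = load_stream("joystick.csv")        # assumed file name
joints = load_stream("joint_positions.csv")   # assumed file name
for t, command in joystick:
    _, joint_row = nearest(joints, t)
    # ... analyze command against joint_row here ...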
Human-robot interaction can be regarded as a flow between users and robots. Designing good interaction flows takes a lot of effort and needs to be field tested. Unfortunately, the interaction flow design process is often very disjointed, with users experiencing prototypes, designers forming those prototypes, and developers implementing them as independent processes. In this paper, we present the Interaction Flow Editor (IFE), a new human-robot interaction prototyping tool that enables everyday users to create and modify their own interactions, while still providing a full suite of features powerful enough for developers and designers to create complex interactions. We also discuss the Flow Engine, a flexible and adaptable framework for executing robot interaction flows authored through the IFE. Finally, we present case study results demonstrating how older adults, aged 70 and above, can design and iterate interactions in real time on a robot using the IFE.
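
Conceptually, an interaction flow is a graph of steps that a small engine walks through. The sketch below illustrates that general idea under assumed names; it is not the actual IFE or Flow Engine API.

# A toy interaction flow: each step has an action and a successor step.
flow = {
    "greet": {"action": lambda: print("Robot: Hello!"),      "next": "ask"},
    "ask":   {"action": lambda: print("Robot: Need help?"),  "next": "done"},
    "done":  {"action": lambda: print("Robot: Goodbye."),    "next": None},
}

def run_flow(flow, start="greet"):
    # Execute steps in order until a step has no successor.
    step = start
    while step is not None:
        node = flow[step]
        node["action"]()
        step = node["next"]

run_flow(flow)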
We describe a multi-phased Wizard-of-Oz approach to collecting human-robot dialogue in a collaborative search and navigation task. The data is being used to train an initial automated robot dialogue system to support collaborative exploration tasks. In the first phase, a wizard freely typed robot utterances to human participants. For the second phase, this data was used to design a GUI that includes buttons for the most common communications and templates for communications with varying parameters. Comparison of the data gathered in these phases shows that the GUI enabled a faster pace of dialogue while still maintaining high coverage of suitable responses, enabling more efficient, targeted data collection and improvements in natural language understanding using GUI-collected data. As a promising first step towards interactive learning, this work shows that our approach enables the collection of useful training data for navigation-based HRI tasks.
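
The second-phase GUI idea, canned utterances plus parameterized templates, can be illustrated with a small sketch. The template names and wordings below are illustrative assumptions, not the contents of the study's actual GUI.

from string import Template

# Buttons for the most common, fixed communications (assumed wordings).
CANNED = ["I see a doorway ahead.", "I am moving forward."]

# Templates for communications with varying parameters (assumed wordings).
TEMPLATES = {
    "move": Template("I will move $distance meters to the $direction."),
    "photo": Template("I took a photo of the $object."),
}

def render(template_key, **slots):
    # Fill a template's slots with the wizard's chosen parameter values.
    return TEMPLATES[template_key].substitute(**slots)

print(render("move", distance=2, direction="left"))
# -> I will move 2 meters to the left.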