
Assessment of Empathy in an Affective VR Environment using EEG Signals

Published by: Maryam Alimardani
Publication date: 2020
Research field: Informatics Engineering
Paper language: English





With the advancements in social robotics and virtual avatars, it becomes increasingly important that these agents adapt their behavior to the mood, feelings and personality of their users. One such aspect of the user is empathy. Whereas many studies measure empathy through offline measures that are collected after empathic stimulation (e.g. post-hoc questionnaires), the current study aimed to measure empathy online, using brain activity collected during the experience. Participants watched an affective 360° video of a child experiencing domestic violence in a virtual reality headset while their EEG signals were recorded. Results showed a significant attenuation of alpha, theta and delta asymmetry in the frontal and central areas of the brain. Moreover, a significant relationship between participants' empathy scores and their frontal alpha asymmetry at baseline was found. These results demonstrate specific brain activity alterations when participants are exposed to an affective virtual reality environment, with the level of empathy as a personality trait being visible in brain activity during a baseline measurement. These findings suggest the potential of EEG measurements for the development of passive brain-computer interfaces that assess users' affective responses in real time and consequently adapt the behavior of socially intelligent agents for a personalized interaction.
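
The frontal alpha asymmetry referred to above is conventionally computed as the difference in log-transformed alpha-band (8-13 Hz) power between homologous right and left frontal electrodes, typically F4 and F3. The following is a minimal sketch of that computation, assuming raw single-channel arrays and a known sampling rate; the study's actual electrode montage, band limits, and preprocessing may differ.

```python
import numpy as np
from scipy.signal import welch

def band_power(signal, fs, fmin=8.0, fmax=13.0):
    """Alpha-band power estimated with Welch's method."""
    freqs, psd = welch(signal, fs=fs, nperseg=fs * 2)
    mask = (freqs >= fmin) & (freqs <= fmax)
    return np.trapz(psd[mask], freqs[mask])

def frontal_alpha_asymmetry(eeg_f3, eeg_f4, fs):
    """FAA = ln(right alpha power) - ln(left alpha power), F4 vs. F3."""
    return np.log(band_power(eeg_f4, fs)) - np.log(band_power(eeg_f3, fs))

# Example with two simulated 60 s frontal channels sampled at 256 Hz
fs = 256
rng = np.random.default_rng(0)
f3 = rng.standard_normal(fs * 60)
f4 = rng.standard_normal(fs * 60)
print(frontal_alpha_asymmetry(f3, f4, fs))
```

Welch's method is used here simply because it is a common, robust PSD estimator for short EEG segments; any consistent power estimate would serve the same illustrative purpose.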




Read also

Human affect is a complex phenomenon and an active research domain in affective computing. Affects are traditionally determined through a self-report-based psychometric questionnaire or through facial expression recognition. However, a few state-of-the-art studies have shown the possibility of recognizing human affects from psychophysiological and neurological signals. In this article, electroencephalogram (EEG) signals are used to recognize human affects. EEG signals of 100 participants were collected while they watched one-minute video stimuli designed to induce different affective states. The emotionally tagged videos cover a range of affects including happy, sad, disgust, and peaceful. The experimental stimuli are collected and analyzed intensively. The interrelationship between the EEG signal frequencies and the ratings given by the participants is taken into consideration for classifying affective states. Advanced feature extraction techniques are applied along with statistical features to prepare a fused feature vector for affective state recognition. Factor analysis methods are also applied to select discriminative features. Finally, several popular supervised machine learning classifiers are applied to recognize different affective states from the discriminative feature vector. Based on the experiment, the designed random forest classifier produces 89.06% accuracy in classifying four basic affective states.
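
As a rough illustration of the pipeline this abstract describes (spectral features fused into a vector, then a supervised classifier), here is a sketch using synthetic data and scikit-learn; the band definitions and random forest hyperparameters are placeholders, and the factor-analysis step is omitted, so this is not the authors' exact configuration.

```python
import numpy as np
from scipy.signal import welch
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def band_powers(trial, fs, bands=((4, 8), (8, 13), (13, 30))):
    """Per-channel theta/alpha/beta band powers as a flat feature vector."""
    freqs, psd = welch(trial, fs=fs, axis=-1)
    feats = [psd[:, (freqs >= lo) & (freqs < hi)].mean(axis=-1) for lo, hi in bands]
    return np.concatenate(feats)

fs = 128
rng = np.random.default_rng(1)
trials = rng.standard_normal((200, 14, fs * 60))   # 200 trials, 14 channels, 60 s
labels = rng.integers(0, 4, size=200)              # happy / sad / disgust / peaceful

X = np.array([band_powers(t, fs) for t in trials])
clf = RandomForestClassifier(n_estimators=300, random_state=0)
print(cross_val_score(clf, X, labels, cv=5).mean())  # chance-level on random data
```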
Critical-task and cognition-based environments, such as military and defense operations, evaluation of user-technology interaction in aviation UIs, or assessing the intuitiveness of a hardware model or software toolkit, require an assessment of how much mental workload a particular task generates on a user. This is necessary for understanding how those tasks, operations, and activities can be improved and made better suited for the users, so that they reduce the mental workload on the individual and operators can use them with ease. However, one user may gauge a particular task as simple while others find it difficult. The complexity of a task can therefore only be understood at the level of the individual user, and we propose to do this by estimating the mental workload (MWL) generated on an operator while performing a task that requires processing a lot of information. In this work, we propose an experimental setup that replicates the workload of modern day-to-day job tasks. We propose an approach to automatically evaluate the task complexity perceived by an individual by using electroencephalogram (EEG) data recorded from a user during operation. Crucial steps addressed in this work include the extraction and optimization of different features, the selection of relevant features for dimensionality reduction, and the use of supervised machine learning techniques. In addition, performance results of the classifiers are compared using all features and using only the selected features. From the results, it can be inferred that machine learning algorithms perform better than traditional approaches for mental workload estimation.
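
The feature-selection and classifier-comparison steps mentioned above could look like the following sketch; SelectKBest stands in for whichever dimensionality-reduction method the authors used, and the synthetic features are a placeholder for real EEG workload features.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for EEG workload features (e.g. band powers, ratios)
X, y = make_classification(n_samples=300, n_features=60, n_informative=12,
                           random_state=0)

for name, clf in [("SVM", SVC()),
                  ("RandomForest", RandomForestClassifier(random_state=0))]:
    # Scale, keep the 12 most relevant features, then classify
    pipe = make_pipeline(StandardScaler(), SelectKBest(f_classif, k=12), clf)
    scores = cross_val_score(pipe, X, y, cv=5)
    print(f"{name}: mean accuracy with selected features = {scores.mean():.3f}")
```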
Matthew Lee, 2020
Flow-like experiences at work are important for productivity and worker well-being. However, it is difficult to objectively detect when workers are experiencing flow in their work. In this paper, we investigate how to predict a worker's focus state based on physiological signals. We conducted a lab study to collect physiological data from knowledge workers who experienced different levels of flow while performing work tasks. We used the nine characteristics of flow to design tasks that would induce different focus states. A manipulation check using the Flow Short Scale verified that participants experienced three distinct focus states: one overly challenging non-flow state, and two types of flow states, balanced flow and automatic flow. We built machine learning classifiers that can distinguish between non-flow and flow states with 0.889 average AUC, and rest states from working states with 0.98 average AUC. The results show that physiological sensing can detect focused flow states of knowledge workers and can enable ways for individuals and organizations to improve both productivity and worker satisfaction.
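
Reporting a detector's performance as an average AUC over cross-validation folds, as this abstract does, can be expressed in a few lines; the classifier and synthetic features below are assumptions for illustration, not the authors' model or data.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic binary task standing in for flow vs. non-flow detection
X, y = make_classification(n_samples=400, n_features=20, random_state=0)

# Average AUC across 5 cross-validation folds
auc = cross_val_score(LogisticRegression(max_iter=1000), X, y,
                      cv=5, scoring="roc_auc")
print(f"average AUC: {auc.mean():.3f}")
```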
We describe the experimental procedures for a dataset that we have made publicly available at https://doi.org/10.5281/zenodo.2649006 in mat and csv formats. This dataset contains electroencephalographic (EEG) recordings of 25 subjects testing the Brain Invaders (Congedo, 2011), a visual P300 Brain-Computer Interface inspired by the famous vintage video game Space Invaders (Taito, Tokyo, Japan). The visual P300 is an event-related potential elicited by a visual stimulation, peaking 240-600 ms after stimulus onset. EEG data were recorded by 16 electrodes in an experiment that took place in the GIPSA-lab, Grenoble, France, in 2012 (Van Veen, 2013 and Congedo, 2013). Python code for manipulating the data is available at https://github.com/plcrodrigues/py.BI.EEG.2012-GIPSA. The ID of this dataset is BI.EEG.2012-GIPSA.
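
Since the dataset is distributed in mat and csv formats, a first inspection could be as simple as the sketch below; the file name is hypothetical, and the authors' own loader at the GitHub link above is the authoritative reference for the file layout.

```python
from scipy.io import loadmat

# Hypothetical file name; see the Zenodo record for the actual files
data = loadmat("subject_01.mat")

# List the stored variables (keys starting with "__" are mat-file metadata)
print([k for k in data if not k.startswith("__")])
```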
Virtual reality (VR) is rapidly growing, with the potential to change the way we create and consume content. In VR, users integrate the multimodal sensory information they receive to create a unified perception of the virtual world. In this survey, we review the body of work addressing multimodality in VR, its role and benefits in user experience, and the different applications that leverage multimodality across many disciplines. These works encompass several fields of research and demonstrate that multimodality plays a fundamental role in VR: enhancing the experience, improving overall performance, and yielding unprecedented abilities in skill and knowledge transfer.