
Detecting Affective Flow States of Knowledge Workers Using Physiological Sensors

Added by: Matthew Lee
Publication date: 2020
Research language: English
Authors: Matthew Lee





Flow-like experiences at work are important for productivity and worker well-being. However, it is difficult to objectively detect when workers are experiencing flow in their work. In this paper, we investigate how to predict a worker's focus state based on physiological signals. We conducted a lab study to collect physiological data from knowledge workers who experienced different levels of flow while performing work tasks. We used the nine characteristics of flow to design tasks that would induce different focus states. A manipulation check using the Flow Short Scale verified that participants experienced three distinct states: one overly challenging non-flow state and two types of flow states, balanced flow and automatic flow. We built machine learning classifiers that can distinguish non-flow states from flow states with 0.889 average AUC, and rest states from working states with 0.98 average AUC. The results show that physiological sensing can detect focused flow states of knowledge workers and can enable ways for individuals and organizations to improve both productivity and worker satisfaction.
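As a rough illustration of the classification step described in the abstract, the sketch below trains a binary flow vs. non-flow classifier on windowed physiological features and reports cross-validated AUC. The feature set, the random-forest model, and the use of scikit-learn are assumptions for illustration, not the authors' published pipeline.

# Hypothetical sketch of the flow vs. non-flow classification step.
# Feature names, window setup, and model choice are assumptions, not
# the authors' actual pipeline.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score, StratifiedKFold

rng = np.random.default_rng(0)

# Placeholder data: one row per time window of physiological features
# (e.g., mean heart rate, HRV, electrodermal activity, skin temperature).
X = rng.normal(size=(600, 4))
y = rng.integers(0, 2, size=600)          # 1 = flow, 0 = non-flow

clf = RandomForestClassifier(n_estimators=200, random_state=0)
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
aucs = cross_val_score(clf, X, y, cv=cv, scoring="roc_auc")
print(f"average AUC: {aucs.mean():.3f}")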




Related research

Social biases based on gender, race, etc. have been shown to pollute the machine learning (ML) pipeline, predominantly via biased training datasets. Crowdsourcing, a popular cost-effective way to gather labeled training datasets, is not immune to the inherent social biases of crowd workers. To ensure such social biases aren't passed on to the curated datasets, it's important to know how biased each crowd worker is. In this work, we propose a new method based on counterfactual fairness to quantify the degree of inherent social bias in each crowd worker. This extra information can be leveraged together with individual worker responses to curate a less biased dataset.
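One simple way to operationalize the counterfactual idea above (an illustrative sketch, not necessarily the paper's exact method) is to show each worker item pairs that differ only in a sensitive attribute, such as a gendered name swap, and score the worker by how often their label flips:

# Hypothetical counterfactual bias score for a crowd worker: pair each
# item with a copy that differs only in a sensitive attribute, then
# measure how often the worker's label changes. The scoring rule and
# data layout are illustrative assumptions.
def counterfactual_bias(labels_original, labels_counterfactual):
    """Fraction of item pairs where the worker's label flips."""
    assert len(labels_original) == len(labels_counterfactual)
    flips = sum(a != b for a, b in zip(labels_original, labels_counterfactual))
    return flips / len(labels_original)

# Example: a worker labels 5 items, then the same items with gender swapped.
orig = ["toxic", "ok", "ok", "toxic", "ok"]
cf   = ["ok",    "ok", "ok", "toxic", "toxic"]
print(counterfactual_bias(orig, cf))  # 0.4 -> higher means more biased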
Hand Gesture Recognition (HGR) based on inertial data has grown considerably in recent years, with state-of-the-art approaches utilizing a single handheld sensor and a vocabulary comprised of simple gestures. In this work we explore the benefits of using multiple inertial sensors. Using WaveGlove, a custom hardware prototype in the form of a glove with five inertial sensors, we acquire two datasets consisting of over 11,000 samples. To make them comparable with prior work, they are normalized along with nine other publicly available datasets, and subsequently used to evaluate a range of machine learning approaches for gesture recognition, including a newly proposed Transformer-based architecture. Our results show that even complex gestures involving different fingers can be recognized with high accuracy. An ablation study performed on the acquired datasets demonstrates the importance of multiple sensors, with an increase in performance when using up to three sensors and no significant improvements beyond that.
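A minimal sketch of a Transformer-based classifier for multi-sensor inertial data is shown below; the channel count (five IMUs times six axes), layer sizes, and the omission of positional encoding are simplifying assumptions rather than the architecture proposed in the paper.

# Hypothetical Transformer-based classifier for glove-mounted IMU data,
# in the spirit of the architecture described above.
import torch
import torch.nn as nn

class GestureTransformer(nn.Module):
    def __init__(self, n_channels=30, d_model=64, n_classes=10):
        super().__init__()
        # 5 IMUs x 6 axes (accelerometer + gyroscope) = 30 channels per step.
        # NOTE: positional encoding omitted for brevity.
        self.proj = nn.Linear(n_channels, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, x):                # x: (batch, time, channels)
        h = self.encoder(self.proj(x))
        return self.head(h.mean(dim=1))  # mean-pool over time, then classify

model = GestureTransformer()
logits = model(torch.randn(8, 100, 30))  # 8 windows of 100 time steps
print(logits.shape)                      # torch.Size([8, 10])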
With the advancements in social robotics and virtual avatars, it becomes increasingly important that these agents adapt their behavior to the mood, feelings, and personality of their users. One such aspect of the user is empathy. Whereas many studies measure empathy through offline measures collected after empathic stimulation (e.g., post-hoc questionnaires), the current study aimed to measure empathy online, using brain activity collected during the experience. Participants watched an affective 360° video of a child experiencing domestic violence in a virtual reality headset while their EEG signals were recorded. Results showed a significant attenuation of alpha, theta, and delta asymmetry in the frontal and central areas of the brain. Moreover, a significant relationship between participants' empathy scores and their frontal alpha asymmetry at baseline was found. These results demonstrate specific brain activity alterations when participants are exposed to an affective virtual reality environment, with the level of empathy as a personality trait being visible in brain activity during a baseline measurement. These findings suggest the potential of EEG measurements for the development of passive brain-computer interfaces that assess the user's affective responses in real time and consequently adapt the behavior of socially intelligent agents for a personalized interaction.
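Frontal alpha asymmetry is commonly computed as the difference in log alpha-band power between homologous right and left frontal electrodes. A minimal sketch, assuming an F4/F3 electrode pair and a 256 Hz sampling rate (both assumptions, not details from the study):

# Hypothetical frontal alpha asymmetry (FAA) computation:
# log alpha power on the right frontal electrode minus the left.
import numpy as np
from scipy.signal import welch

def alpha_bandpower(signal, fs, lo=8.0, hi=13.0):
    """Alpha-band (8-13 Hz) power estimated via Welch's method."""
    freqs, psd = welch(signal, fs=fs, nperseg=fs * 2)
    mask = (freqs >= lo) & (freqs <= hi)
    return np.sum(psd[mask]) * (freqs[1] - freqs[0])

fs = 256                                 # assumed sampling rate (Hz)
rng = np.random.default_rng(0)
f3 = rng.normal(size=fs * 10)            # placeholder for left-frontal EEG
f4 = rng.normal(size=fs * 10)            # placeholder for right-frontal EEG

faa = np.log(alpha_bandpower(f4, fs)) - np.log(alpha_bandpower(f3, fs))
print(f"frontal alpha asymmetry: {faa:.3f}")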
Rinita Roy, Linus W. Dietz (2019)
Mobile proactive tourist recommender systems can support tourists by recommending the best choice depending on different contexts related to the tourist and the environment. In this paper, we propose to utilize wearable sensors to gather health information about a tourist and use it to recommend tourist activities. We discuss a range of wearable devices and sensors to infer the physiological condition of users, and exemplify the feasibility using a popular self-quantification mobile app. Our main contribution then comprises a data model that derives relations between the parameters measured by the wearable sensors, such as heart rate, body temperature, and blood pressure, and uses them to infer the physiological condition of a user. This model can then be used to derive classes of tourist activities that determine which items should be recommended.
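The sketch below illustrates the kind of data model described, mapping wearable vitals to a coarse physiological condition that could gate activity recommendations; the thresholds and condition classes are illustrative assumptions, not the paper's model:

# Hypothetical data model: wearable sensor readings -> coarse
# physiological condition -> class of recommendable tourist activities.
from dataclasses import dataclass

@dataclass
class Vitals:
    heart_rate: float        # beats per minute
    body_temp: float         # degrees Celsius
    systolic_bp: float       # mmHg

def physiological_condition(v: Vitals) -> str:
    """Very coarse condition classes used to pick activity intensity."""
    if v.body_temp >= 38.0 or v.systolic_bp >= 160:
        return "rest"            # recommend low-effort indoor activities
    if v.heart_rate >= 120:
        return "recovering"      # recommend light walking, sightseeing
    return "fit"                 # recommend hikes, active tours

print(physiological_condition(Vitals(heart_rate=75, body_temp=36.6, systolic_bp=120)))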
Coronaviruses are a well-known family of viruses that cause illness in both humans and animals. The new type of coronavirus, COVID-19, was first discovered in Wuhan, China. The virus has since spread widely across the world, causing a pandemic according to the World Health Organization (WHO), and countries worldwide are striving to control it. There are many mechanisms to detect coronavirus, including clinical analysis of chest CT scan images and blood test results. Confirmed COVID-19 patients manifest fever, tiredness, and a dry cough. Several techniques, such as medical detection kits, can be used to detect initial signs of the virus; however, such devices incur high costs and take time to install and use. Therefore, in this paper, a new framework is proposed to detect COVID-19 using built-in smartphone sensors. The proposal provides a low-cost solution, since most radiologists already carry smartphones for daily purposes, and ordinary people can likewise use the framework on their smartphones for virus detection. Today's smartphones are powerful, with computation-rich processors, ample memory, and a large number of sensors, including cameras, a microphone, temperature, inertial, proximity, colour, and humidity sensors, and wireless chipsets. The designed Artificial Intelligence (AI) enabled framework reads the smartphone sensor measurements to predict the severity grade of the pneumonia as well as the outcome of the disease.
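As a hedged sketch of the final prediction step, the snippet below maps a placeholder vector of smartphone-sensor features to a pneumonia severity grade with a multi-class classifier; the features, labels, and model choice are assumptions, not the paper's framework:

# Hypothetical severity-prediction step: smartphone-sensor features
# -> pneumonia severity grade. All specifics are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
# Placeholder features, e.g. cough-audio statistics, temperature-sensor
# readings, inertial measures of fatigue (gait), camera-derived cues.
X = rng.normal(size=(300, 5))
y = rng.integers(0, 3, size=300)   # 0 = mild, 1 = moderate, 2 = severe

model = LogisticRegression(max_iter=1000).fit(X, y)
print(model.predict(X[:3]))        # predicted severity grades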
