
Heartbeats in the Wild: A Field Study Exploring ECG Biometrics in Everyday Life

 Added by Daniel Buschek
 Publication date 2020
Language: English





This paper reports on an in-depth study of electrocardiogram (ECG) biometrics in everyday life. We collected ECG data from 20 people over a week, using a non-medical chest tracker. We evaluated user identification accuracy in several scenarios and observed equal error rates of 9.15% to 21.91%, heavily depending on 1) the number of days used for training, and 2) the number of heartbeats used per identification decision. We conclude that ECG biometrics can work in the wild but are less robust than expected based on the literature, highlighting that previous lab studies obtained highly optimistic results with regard to real-life deployments. We explain this with noise due to changing body postures and states as well as interrupted measurements. We conclude with implications for future research and the design of ECG biometric systems for real-world deployments, including critical reflections on privacy.
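The equal error rate (EER) quoted above is the operating point at which the false-accept rate (impostors identified as the user) equals the false-reject rate (the genuine user rejected). A minimal sketch of how it is computed from match scores; the scores below are synthetic toy values, not the study's data:

```python
def equal_error_rate(genuine, impostor):
    """EER: the error rate at the score threshold where the
    false-accept rate (FAR) and false-reject rate (FRR) cross."""
    best = None
    for t in sorted(genuine + impostor):
        far = sum(s >= t for s in impostor) / len(impostor)  # impostors accepted
        frr = sum(s < t for s in genuine) / len(genuine)     # genuine rejected
        gap = abs(far - frr)
        if best is None or gap < best[0]:
            best = (gap, (far + frr) / 2)
    return best[1]

# Toy match scores: genuine comparisons tend to score higher.
genuine = [0.81, 0.74, 0.62, 0.55]
impostor = [0.58, 0.40, 0.33, 0.21]
print(equal_error_rate(genuine, impostor))
```

Averaging scores over more heartbeats per decision tightens both distributions, which is consistent with the paper's observation that the number of beats per identification decision strongly affects the EER.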



Related research

Body sounds provide rich information about the state of the human body and can be useful in many medical applications. Auscultation, the practice of listening to body sounds, has been used for centuries in respiratory and cardiac medicine to diagnose or track disease progression. To date, however, its use has been confined to clinical and highly controlled settings. Our work addresses this limitation: we devise a chest-mounted wearable for continuous monitoring of body sounds that leverages data-processing algorithms running on-device. We concentrate on the detection of heart sounds to perform heart rate monitoring. To improve robustness to ambient noise and motion artefacts, our device uses an algorithm that explicitly segments the collected audio into the phases of the cardiac cycle. Our pilot study with 9 users demonstrates that it is possible to obtain heart rate estimates that are competitive with commercial heart rate monitors, with low enough power consumption for continuous use.
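Once heart sounds have been segmented out of the audio, the heart rate follows from the intervals between successive beats. A minimal sketch, assuming the segmentation stage (the paper's on-device algorithm, not reproduced here) yields timestamps of detected S1 sounds in seconds:

```python
def heart_rate_bpm(s1_times):
    """Mean heart rate (beats per minute) from timestamps of
    successive S1 heart sounds, via the mean inter-beat interval."""
    intervals = [b - a for a, b in zip(s1_times, s1_times[1:])]
    mean_interval = sum(intervals) / len(intervals)
    return 60.0 / mean_interval

# One S1 every 0.8 s corresponds to roughly 75 bpm.
beats = [0.0, 0.8, 1.6, 2.4, 3.2]
print(heart_rate_bpm(beats))
```

In practice, outlier intervals caused by missed or spurious detections (ambient noise, motion artefacts) would be filtered before averaging; the segmentation into cardiac-cycle phases is what makes that filtering tractable.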
Designing future IoT ecosystems requires new approaches and perspectives to understand everyday practices. While researchers recognize the importance of understanding social aspects of everyday objects, limited studies have explored the possibilities of combining data-driven patterns with human interpretations to investigate emergent relationships among objects. This work presents Thing Constellation Visualizer (thingCV), a novel interactive tool for visualizing the social network of objects based on their co-occurrence as computed from a large collection of photos. ThingCV enables perspective-changing design explorations over the network of objects with scalable links. Two exploratory workshops were conducted to investigate how designers navigate and make sense of a network of objects through thingCV. The results of eight participants showed that designers were actively engaged in identifying interesting objects and their associated clusters of related objects. The designers projected social qualities onto the identified objects and their communities. Furthermore, the designers changed their perspectives to revisit familiar contexts and to generate new insights through the exploration process. This work contributes a novel approach to combining data-driven models with designerly interpretations of thing constellations towards More-Than-Human-Centred Design of IoT ecosystems.
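The network thingCV visualizes is driven by object co-occurrence in photos. A minimal sketch of that computation, assuming each photo is reduced to a list of detected object tags (the tag names and input format here are illustrative, not thingCV's actual pipeline):

```python
from collections import Counter
from itertools import combinations

def cooccurrence_edges(photos):
    """Weighted edges of an object network: two objects are linked if
    they appear in the same photo; the weight counts such photos."""
    counts = Counter()
    for tags in photos:
        # sorted() gives a canonical key so (a, b) and (b, a) merge
        for a, b in combinations(sorted(set(tags)), 2):
            counts[(a, b)] += 1
    return counts

photos = [
    ["mug", "laptop", "lamp"],
    ["mug", "laptop"],
    ["lamp", "plant"],
]
edges = cooccurrence_edges(photos)
print(edges[("laptop", "mug")])  # mug and laptop co-occur in 2 photos
```

Edge weights like these can then be thresholded or scaled for display, which is one way to realize the "scalable links" the abstract mentions.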
The complex nature of intelligent systems motivates work on supporting users during interaction, for example through explanations. However, as of yet, there is little empirical evidence in regard to specific problems users face when applying such systems in everyday situations. This paper contributes a novel method and analysis to investigate such problems as reported by users: We analysed 45,448 reviews of four apps on the Google Play Store (Facebook, Netflix, Google Maps and Google Assistant) with sentiment analysis and topic modelling to reveal problems during interaction that can be attributed to the apps' algorithmic decision-making. We enriched this data with users' coping and support strategies through a follow-up online survey (N=286). In particular, we found problems and strategies related to content, algorithm, user choice, and feedback. We discuss corresponding implications for designing user support, highlighting the importance of user control and explanations of output, rather than processes.
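The shape of that review-mining step can be illustrated with a much-simplified stand-in: the actual study uses trained sentiment analysis and topic models, but the core filter (flagging negative reviews that concern algorithmic output) can be sketched with toy keyword lexicons; all terms below are illustrative, not from the paper:

```python
# Toy lexicons; a real pipeline would use a sentiment model and topic modelling.
NEGATIVE = {"annoying", "wrong", "bad", "confusing"}
ALGO_TERMS = {"recommendation", "feed", "suggested", "ranking"}

def algorithmic_complaints(reviews):
    """Keep reviews that sound negative AND mention algorithmic output."""
    hits = []
    for text in reviews:
        words = set(text.lower().split())
        if words & NEGATIVE and words & ALGO_TERMS:
            hits.append(text)
    return hits

reviews = [
    "the feed keeps showing wrong content",
    "love the new dark mode",
    "confusing recommendation order",
]
print(len(algorithmic_complaints(reviews)))  # 2 of 3 reviews flagged
```

At the study's scale (45,448 reviews), statistical models replace these hand-written sets, but the output is the same kind of artifact: a subset of reviews attributable to algorithmic decision-making, ready for qualitative coding.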
The interaction between the vestibular and ocular system has primarily been studied in controlled environments. Consequently, off-the-shelf tools for categorization of gaze events (e.g. fixations, pursuits, saccades) fail when head movements are allowed. Our approach was to collect a novel, naturalistic, and multimodal dataset of eye+head movements when subjects performed everyday tasks while wearing a mobile eye tracker equipped with an inertial measurement unit and a 3D stereo camera. This Gaze-in-the-Wild dataset (GW) includes eye+head rotational velocities (deg/s), infrared eye images and scene imagery (RGB+D). A portion was labelled by coders into gaze motion events with a mutual agreement of 0.72 sample-based Cohen's κ. This labelled data was used to train and evaluate two machine learning algorithms, a Random Forest and a Recurrent Neural Network model, for gaze event classification. Assessment involved the application of established and novel event-based performance metrics. Classifiers achieve ~90% human performance in detecting fixations and saccades but fall short (60%) on detecting pursuit movements. Moreover, pursuit classification is far worse in the absence of head movement information. A subsequent analysis of feature significance in our best-performing model revealed a reliance upon absolute eye and head velocity, indicating that classification does not require spatial alignment of the head and eye tracking coordinate systems. The GW dataset, trained classifiers and evaluation metrics will be made publicly available with the intention of facilitating growth in the emerging area of head-free gaze event classification.
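The paper's classifiers are a Random Forest and an RNN; a much simpler toy baseline still shows why head velocity matters once the head is free to move: during vestibulo-ocular compensation the eye counter-rotates the head, so eye velocity alone cannot separate fixations from pursuits. The thresholds and the additive eye+head combination below are illustrative, not from the dataset:

```python
def classify_sample(eye_vel, head_vel, sacc_thresh=100.0, fix_thresh=5.0):
    """Toy per-sample velocity-threshold classifier (deg/s).
    NOT the paper's RF/RNN; thresholds are illustrative."""
    gaze_vel = eye_vel + head_vel  # crude gaze-in-world velocity
    if abs(eye_vel) >= sacc_thresh:
        return "saccade"           # very fast eye-in-head movement
    if abs(gaze_vel) < fix_thresh:
        return "fixation"          # gaze stable, incl. eye countering head (VOR)
    return "pursuit"               # moderate, sustained gaze motion

print(classify_sample(eye_vel=300.0, head_vel=0.0))   # saccade
print(classify_sample(eye_vel=-20.0, head_vel=20.0))  # fixation via VOR
print(classify_sample(eye_vel=15.0, head_vel=0.0))    # pursuit
```

The second call is the instructive one: a fixation with a moving head looks like a 20 deg/s eye movement until head velocity is added in, which mirrors the paper's finding that pursuit/fixation classification degrades without head movement information.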
In recent years, physiological signal based authentication has shown great promise for its inherent robustness against forgery. The electrocardiogram (ECG) signal, being the most widely studied biosignal, has also received the highest level of attention in this regard. Numerous studies have shown that by analyzing ECG signals from different persons, it is possible to identify them with acceptable accuracy. In this work, we present EDITH, a deep learning-based framework for ECG biometric authentication. Moreover, we hypothesize and demonstrate that Siamese architectures can be used over typical distance metrics for improved performance. We have evaluated EDITH using 4 commonly used datasets and outperformed the prior works using fewer beats. EDITH performs competitively using just a single heartbeat (96-99.75% accuracy) and can be further enhanced by fusing multiple beats (100% accuracy from 3 to 6 beats). Furthermore, the proposed Siamese architecture manages to reduce the identity verification Equal Error Rate (EER) to 1.29%. A limited case study of EDITH with real-world experimental data also suggests its potential as a practical authentication system.
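At verification time, a Siamese setup reduces to thresholding a distance between two embeddings: the enrolled beat's and the probe beat's. A minimal sketch with a stand-in encoder (plain vector normalization instead of EDITH's learned network, and an illustrative threshold):

```python
import math

def embed(beat):
    """Stand-in for a learned Siamese encoder: here, just L2-normalize
    the feature vector so distances are scale-invariant."""
    norm = math.sqrt(sum(x * x for x in beat)) or 1.0
    return [x / norm for x in beat]

def verify(enrolled_beat, probe_beat, threshold=0.5):
    """Accept the identity claim if the embeddings are close enough.
    The threshold would be calibrated on held-out data, e.g. at the EER point."""
    ea, eb = embed(enrolled_beat), embed(probe_beat)
    dist = math.sqrt(sum((x - y) ** 2 for x, y in zip(ea, eb)))
    return dist < threshold

enrolled = [0.1, 0.9, 0.3]   # toy per-beat feature vectors
same_user = [0.12, 0.88, 0.31]
other_user = [0.9, 0.1, 0.7]
print(verify(enrolled, same_user))   # True
print(verify(enrolled, other_user))  # False
```

Fusing multiple beats, as the abstract describes, amounts to averaging embeddings or distances over several probe beats before thresholding, which suppresses per-beat noise.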
