
Eye-gaze Estimation with HEOG and Neck EMG using Deep Neural Networks

Published by Zhen Fu
Publication date: 2021
Paper language: English





Hearing-impaired listeners usually have trouble attending to the target talker in multi-talker scenes, even with hearing aids (HAs). The problem can be solved with eye-gaze-steered HAs, which require listeners to gaze at the target. When the head rotates, eye-gaze is determined by two behaviors: saccades and head rotation. However, existing methods of eye-gaze estimation do not work reliably, since listeners' eye-gaze strategies vary and measurements of the two behaviors were not properly combined. Moreover, existing methods were based on hand-crafted features, which could overlook important information. In this paper, a head-fixed and a head-free experiment were conducted. We used horizontal electrooculography (HEOG) and neck electromyography (NEMG), which separately measure saccades and head rotation, to jointly estimate eye-gaze. Besides traditional classifiers and hand-crafted features, deep neural networks (DNNs) were introduced to automatically extract features from intact waveforms. Evaluation results showed that when the input was HEOG with an inertial measurement unit, the best of our proposed DNN classifiers achieved 93.3% accuracy; when HEOG and NEMG were used together, the accuracy reached 72.6%, higher than with HEOG alone (about 71.0%) or NEMG alone (about 35.7%). These results indicate the feasibility of estimating eye-gaze with HEOG and NEMG.
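As an illustration of the waveform-based DNN approach described above, the sketch below shows a small 1D convolutional classifier that maps raw HEOG+NEMG windows to gaze-direction classes. The channel count, window length, and number of classes are illustrative assumptions; the paper's actual architecture is not reproduced here.

```python
# A minimal sketch (not the authors' exact model) of a 1D-CNN that classifies
# eye-gaze direction from raw HEOG + NEMG waveform windows. The channel count
# (1 HEOG + 4 NEMG), window length, and number of gaze classes are assumptions.
import torch
import torch.nn as nn

class GazeCNN(nn.Module):
    def __init__(self, in_channels: int = 5, n_classes: int = 7):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(in_channels, 16, kernel_size=9, padding=4),
            nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=9, padding=4),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),   # collapse the time axis
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, samples) raw waveform window
        h = self.features(x).squeeze(-1)
        return self.classifier(h)

# Example: a batch of 8 one-second windows sampled at 1 kHz (assumed rate).
model = GazeCNN()
logits = model(torch.randn(8, 5, 1000))
print(logits.shape)  # torch.Size([8, 7])
```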




Read also

The world is moving towards faster data transformation, with more efficient localization of a user being the preliminary requirement. This work investigates the use of a deep learning technique for wireless localization, considering both millimeter-wave (mmWave) and sub-6 GHz frequencies. The capability of learning a new neural network model makes the localization process easier and faster. In this study, a Deep Neural Network (DNN) was used to localize User Equipment (UE) in two static scenarios. We propose two different methods to train a neural network, one using channel parameters (features) and another using a channel response vector, and compare their performances using preliminary computer simulations. We observe that the former approach produces high localization accuracy given that all of the users have a fixed number of multipath components (MPCs); however, this method is reliant on the number of MPCs. The latter approach is independent of the MPCs, but it performs relatively poorly compared to the first approach.
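A rough sketch of the feature-based variant follows: per-MPC channel parameters are flattened into one vector and a small fully connected network maps it to UE coordinates. The MPC count, feature choice, and layer sizes are assumptions, not details from the paper.

```python
# Minimal sketch of DNN localization from channel parameters. Assumed: each UE
# has 5 MPCs described by (angle of arrival, delay, power); all sizes are
# illustrative placeholders rather than the paper's configuration.
import torch
import torch.nn as nn

N_MPC, FEATS_PER_MPC = 5, 3          # assumed: 5 MPCs x (AoA, delay, power)

localizer = nn.Sequential(
    nn.Linear(N_MPC * FEATS_PER_MPC, 64),
    nn.ReLU(),
    nn.Linear(64, 64),
    nn.ReLU(),
    nn.Linear(64, 2),                # predicted (x, y) position of the UE
)

features = torch.randn(32, N_MPC * FEATS_PER_MPC)   # batch of channel features
positions = localizer(features)                      # (32, 2) estimated locations
```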
This paper presents a novel compressed sensing (CS) approach to high-dimensional wireless channel estimation by optimizing the input to a deep generative network. Channel estimation using generative networks relies on the assumption that the reconstructed channel lies in the range of a generative model. Channel reconstruction using generative priors outperforms conventional CS techniques and requires fewer pilots. It also eliminates the need for a priori knowledge of the sparsifying basis, instead using the structure captured by the deep generative model as a prior. Using this prior, we also perform channel estimation from one-bit quantized pilot measurements, and propose a novel optimization objective function that attempts to maximize the correlation between the received signal and the generator's channel estimate while minimizing the rank of the channel estimate. Our approach significantly outperforms sparse signal recovery methods such as Orthogonal Matching Pursuit (OMP) and Approximate Message Passing (AMP) algorithms such as EM-GM-AMP for narrowband mmWave channel reconstruction, and its execution time is not noticeably affected by an increase in the number of received pilot symbols.
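A minimal sketch of the generative-prior idea, under stated assumptions: a placeholder pretrained generator G that returns a real-valued channel matrix, a pilot matrix A and measurements y, a squared-error fidelity term standing in for the correlation objective described above, and the nuclear norm as a convex surrogate for rank. None of these names, shapes, or weights come from the paper.

```python
# Sketch: estimate a channel by optimizing the latent input z of a pretrained
# generator G so that A @ vec(G(z)) matches the pilot measurements y, with a
# nuclear-norm penalty biasing the estimate toward low rank. G, A, y, and all
# hyperparameters are placeholders / assumptions.
import torch

def estimate_channel(G, A, y, latent_dim=64, steps=500, lam=0.1, lr=1e-2):
    z = torch.randn(latent_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        H = G(z)                                     # candidate channel matrix
        residual = A @ H.flatten() - y               # pilot-measurement mismatch
        loss = residual.pow(2).sum() \
            + lam * torch.linalg.matrix_norm(H, ord='nuc')  # low-rank bias
        opt.zero_grad()
        loss.backward()
        opt.step()
    return G(z).detach()
```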
Using mel-spectrograms instead of conventional MFCC features, we assess the ability of convolutional neural networks to accurately recognize and classify emotions from speech data. We introduce FSER, a speech emotion recognition model trained on four valid speech databases, achieving a high classification accuracy of 95.05% over 8 different emotion classes: anger, anxiety, calm, disgust, happiness, neutral, sadness, and surprise. On each benchmark dataset, FSER outperforms the best models introduced so far, achieving state-of-the-art performance. We show that FSER remains reliable independently of language, sex identity, and any other external factor. Additionally, we describe how FSER could potentially be used to improve mental and emotional health care, and how our analysis and findings serve as guidelines and benchmarks for further work in the same direction.
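A hedged sketch of the mel-spectrogram + CNN pipeline this abstract describes; FSER's actual architecture and preprocessing are not given here, so the sample rate, mel-band count, and layer sizes below are illustrative assumptions.

```python
# Sketch: mel-spectrogram front end followed by a small 2D CNN over the
# time-frequency image, ending in an 8-way emotion classifier. All sizes are
# placeholders, not FSER's published configuration.
import torch
import torch.nn as nn
import torchaudio

mel = torchaudio.transforms.MelSpectrogram(sample_rate=16000, n_mels=64)

classifier = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 8),                      # 8 emotion classes
)

waveform = torch.randn(1, 16000 * 3)       # placeholder 3-second utterance
spec = mel(waveform).unsqueeze(0)          # (batch=1, channel=1, n_mels, frames)
logits = classifier(torch.log1p(spec))     # log-compress, then classify
```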
We propose an image-classification method to predict the perceived relevance of text documents from eye movements. An eye-tracking study was conducted where participants read short news articles and rated them as relevant or irrelevant for answering a trigger question. We encode participants' eye-movement scanpaths as images, and then train a convolutional neural network classifier using these scanpath images. The trained classifier is used to predict participants' perceived relevance of news articles from the corresponding scanpath images. This method is content-independent, as the classifier does not require knowledge of the screen content or the user's information task. Even with little data, the image classifier can predict perceived relevance with up to 80% accuracy. When compared to similar eye-tracking studies from the literature, this scanpath image classification method outperforms previously reported metrics by appreciable margins. We also attempt to interpret how the image classifier differentiates between scanpaths on relevant and irrelevant documents.
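The scanpath-to-image encoding could look roughly like the sketch below, where fixation dwell time sets blob radius and saccades are drawn as lines before a small CNN classifies the image as relevant or irrelevant; the rendering scheme and network are assumptions, not the authors' exact encoding.

```python
# Sketch: rasterize a scanpath (fixation x, y, duration) to a grayscale image
# and classify it with a small binary CNN. Rendering rules and layer sizes are
# illustrative placeholders.
import numpy as np
import torch
import torch.nn as nn
from PIL import Image, ImageDraw

def scanpath_to_image(fixations, size=224):
    """fixations: iterable of (x, y, duration_ms), coordinates in [0, size)."""
    img = Image.new("L", (size, size), 0)
    draw = ImageDraw.Draw(img)
    draw.line([(x, y) for x, y, _ in fixations], fill=128)     # saccade path
    for x, y, dur in fixations:
        r = max(2, int(dur / 50))                               # dwell -> radius
        draw.ellipse((x - r, y - r, x + r, y + r), fill=255)    # fixation blob
    return torch.from_numpy(np.array(img)).float().div(255).unsqueeze(0)

cnn = nn.Sequential(
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(4),
    nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
    nn.Flatten(), nn.Linear(16, 2),        # relevant vs. irrelevant
)

x = scanpath_to_image([(20, 30, 180), (100, 40, 250), (160, 120, 90)])
logits = cnn(x.unsqueeze(0))               # add batch dimension -> (1, 2)
```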
Estimating human gaze from natural eye images alone is a challenging task. Gaze direction can be defined by the pupil center and the eyeball center, where the latter is unobservable in 2D images; hence, achieving highly accurate gaze estimates is an ill-posed problem. In this paper, we introduce a novel deep neural network architecture specifically designed for the task of gaze estimation from single-eye input. Instead of directly regressing two angles for the pitch and yaw of the eyeball, we regress to an intermediate pictorial representation, which in turn simplifies the task of 3D gaze direction estimation. Our quantitative and qualitative results show that our approach achieves higher accuracies than the state of the art and is robust to variation in gaze, head pose and image quality.
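A minimal sketch of the two-stage idea above: a small convolutional network first predicts an intermediate pictorial gaze map from the eye image, and a second head regresses pitch and yaw from that map. Layer sizes and map resolution are illustrative assumptions rather than the paper's architecture.

```python
# Sketch: eye image -> intermediate pictorial gaze map -> (pitch, yaw).
# All layer sizes and the 36x60 eye-patch resolution are assumptions.
import torch
import torch.nn as nn

class PictorialGazeNet(nn.Module):
    def __init__(self):
        super().__init__()
        # eye image -> pictorial gaze map (1 channel, same spatial size)
        self.to_gazemap = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 1),
        )
        # pictorial gaze map -> (pitch, yaw)
        self.to_angles = nn.Sequential(
            nn.Conv2d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(8, 2),
        )

    def forward(self, eye_image):
        gazemap = self.to_gazemap(eye_image)   # intermediate representation
        return self.to_angles(gazemap), gazemap

model = PictorialGazeNet()
angles, gazemap = model(torch.randn(4, 1, 36, 60))   # grayscale eye patches
print(angles.shape, gazemap.shape)  # (4, 2) and (4, 1, 36, 60)
```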