
A 1D-CNN Based Deep Learning Technique for Sleep Apnea Detection in IoT Sensors

 Added by Arlene John
Publication date: 2021
Language: English





Internet of Things (IoT) enabled wearable sensors for health monitoring are widely used to reduce the cost of personal healthcare and improve quality of life. The sleep apnea-hypopnea syndrome, characterized by the abnormal reduction or pause in breathing, greatly affects the quality of sleep of an individual. This paper introduces a novel method for apnea detection (pause in breathing) from electrocardiogram (ECG) signals obtained from wearable devices. The novelty stems from the high resolution of apnea detection on a second-by-second basis, which is achieved using a 1-dimensional convolutional neural network for feature extraction and detection of sleep apnea events. The proposed method exhibits an accuracy of 99.56% and a sensitivity of 96.05%. This model outperforms several lower-resolution state-of-the-art apnea detection methods. The complexity of the proposed model is analyzed. We also analyze the feasibility of model pruning and binarization to reduce the resource requirements on a wearable IoT device. The pruned model with 80% sparsity exhibited an accuracy of 97.34% and a sensitivity of 86.48%. The binarized model exhibited an accuracy of 75.59% and a sensitivity of 63.23%. The performance of low-complexity patient-specific models derived from the generic model is also studied to analyze the feasibility of retraining existing models to fit patient-specific requirements. The patient-specific models on average exhibited an accuracy of 97.79% and a sensitivity of 92.23%. The source code for this work is made publicly available.
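For concreteness, the sketch below shows what a per-second 1D-CNN apnea classifier of this kind can look like in PyTorch. The window length, channel counts, and kernel sizes are illustrative assumptions rather than the paper's exact architecture; the authors' published source code is the authoritative reference.

```python
import torch
import torch.nn as nn

class ApneaCNN1D(nn.Module):
    """Minimal 1D-CNN sketch: classify each second of ECG as apnea or
    normal from a short window centred on that second."""
    def __init__(self, window_len: int = 500):  # e.g. 5 s of ECG at 100 Hz (assumed)
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * (window_len // 4), 64), nn.ReLU(),
            nn.Linear(64, 2),  # logits: apnea vs. normal for the centre second
        )

    def forward(self, x):  # x: (batch, 1, window_len)
        return self.classifier(self.features(x))

model = ApneaCNN1D()
windows = torch.randn(8, 1, 500)  # a batch of 8 single-lead ECG windows
logits = model(windows)           # (8, 2): one prediction per window
```

Sliding such a window forward one second at a time is what yields the second-by-second resolution described above.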



Related research

The abnormal pause or rate reduction in breathing is known as the sleep apnea-hypopnea syndrome and affects the quality of sleep of an individual. A novel method for the detection of sleep apnea events (pause in breathing) from peripheral oxygen saturation (SpO2) signals obtained from wearable devices is discussed in this paper. The paper details a very high-resolution apnea detection algorithm operating on a per-second basis, for which a 1-dimensional convolutional neural network, termed SomnNET, is developed. This network exhibits an accuracy of 97.08% and outperforms several lower-resolution state-of-the-art apnea detection methods. The feasibility of model pruning and binarization to reduce the computational complexity is explored. The pruned network with 80% sparsity exhibited an accuracy of 89.75%, and the binarized network exhibited an accuracy of 68.22%. The performance of the proposed networks is compared against several state-of-the-art algorithms.
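The pruning experiments above can be approximated with off-the-shelf magnitude pruning. The sketch below zeroes 80% of the smallest-magnitude weights of a stand-in 1D-CNN using PyTorch's pruning utilities; the model and the one-shot schedule are assumptions, not SomnNET's exact configuration.

```python
import torch.nn as nn
import torch.nn.utils.prune as prune

# Stand-in 1D-CNN; SomnNET's actual architecture is not reproduced here.
model = nn.Sequential(
    nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(),
    nn.Flatten(), nn.Linear(16 * 100, 2),
)

# Global magnitude pruning: zero the 80% of weights with the smallest |w|.
to_prune = [(m, "weight") for m in model.modules()
            if isinstance(m, (nn.Conv1d, nn.Linear))]
prune.global_unstructured(to_prune,
                          pruning_method=prune.L1Unstructured,
                          amount=0.8)

# Bake the sparsity into the weight tensors (removes the pruning masks).
for module, name in to_prune:
    prune.remove(module, name)
```

In practice the pruned network is usually fine-tuned for a few epochs afterwards to recover part of the accuracy lost to sparsification.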
With recent advancements in deep learning methods, automatically learning deep features from the original data is becoming an effective and widespread approach. However, hand-crafted, expert-knowledge-based features are still insightful. These expert-curated features can increase a model's generalization and remind the model of some data characteristics, such as the time interval between two patterns. This is particularly advantageous in tasks with clinically relevant data, where the data are usually limited and complex. To retain both implicit deep features and expert-curated explicit features, an effective fusion strategy is becoming indispensable. In this work, we focus on a specific clinical application, i.e., sleep apnea detection. In this context, we propose a contrastive learning-based cross-attention framework for sleep apnea detection (named ConCAD). The cross-attention mechanism can fuse the deep and expert features by automatically assigning attention weights based on their importance. Contrastive learning can learn better representations by keeping the instances of each class closer together while pushing instances of different classes apart in the embedding space. Furthermore, a new hybrid loss is designed to conduct contrastive learning and classification simultaneously by integrating a supervised contrastive loss with a cross-entropy loss. Our proposed framework can be easily integrated into standard deep learning models to utilize expert knowledge and contrastive learning to boost performance. As demonstrated on two public ECG datasets with sleep apnea annotations, ConCAD significantly improves detection performance and outperforms state-of-the-art benchmark methods.
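The hybrid objective is the most self-contained part of ConCAD to illustrate: a cross-entropy term over the classifier logits plus a supervised contrastive term over the embeddings. The sketch below is a simplified single-view formulation; the weighting factor alpha and the temperature are assumed values, not the paper's.

```python
import torch
import torch.nn.functional as F

def supcon_loss(embeddings, labels, temperature=0.07):
    """Supervised contrastive loss over one batch of embeddings
    (simplified single-view variant of Khosla et al.)."""
    z = F.normalize(embeddings, dim=1)
    sim = (z @ z.t()) / temperature                    # pairwise similarities
    self_mask = torch.eye(len(z), dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(self_mask, -1e9)             # exclude self-pairs
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    pos = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask
    # average log-probability of the positives for each anchor
    return -(log_prob * pos).sum(1).div(pos.sum(1).clamp(min=1)).mean()

def hybrid_loss(logits, embeddings, labels, alpha=0.5):
    """ConCAD-style objective: classification plus contrastive terms
    (alpha is an assumed weighting, not the paper's value)."""
    return F.cross_entropy(logits, labels) + alpha * supcon_loss(embeddings, labels)

logits = torch.randn(16, 2)        # classifier outputs
embeds = torch.randn(16, 32)       # fused deep + expert-feature embeddings
labels = torch.randint(0, 2, (16,))
loss = hybrid_loss(logits, embeds, labels)
```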
Supervised machine learning applications in the health domain often face the problem of insufficient training data. The quantity of labelled data is small due to privacy concerns and the cost of data acquisition and labelling by a medical expert. Furthermore, collected data are frequently unbalanced, and getting enough data to personalize models for individuals is very expensive or even infeasible. This paper addresses these problems by (1) designing a recurrent Generative Adversarial Network to generate realistic synthetic data and augment the original dataset, (2) enabling the generation of balanced datasets from heavily unbalanced ones, and (3) controlling the data generation such that the generated data resemble data from specific individuals. We apply these solutions to sleep apnea detection and evaluate the performance of four well-known techniques: K-Nearest Neighbour, Random Forest, Multi-Layer Perceptron, and Support Vector Machine. In the experiments, all classifiers exhibit a consistent increase in sensitivity and an increase in the kappa statistic of between 0.007 and 0.182.
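A minimal sketch of a label-conditioned recurrent GAN in this spirit is shown below. The LSTM sizes, sequence length, and conditioning scheme are illustrative assumptions; conditioning the generator on a class label is one way to synthesize minority-class samples and rebalance the dataset.

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Maps a noise sequence plus a class label to a synthetic signal window."""
    def __init__(self, noise_dim=16, hidden=64, n_classes=2):
        super().__init__()
        self.embed = nn.Embedding(n_classes, noise_dim)
        self.rnn = nn.LSTM(noise_dim, hidden, batch_first=True)
        self.out = nn.Linear(hidden, 1)       # one signal sample per time step

    def forward(self, z, y):                  # z: (B, T, noise_dim), y: (B,)
        z = z + self.embed(y).unsqueeze(1)    # condition every step on the label
        h, _ = self.rnn(z)
        return self.out(h)                    # (B, T, 1) synthetic window

class Discriminator(nn.Module):
    """Scores a signal window as real or synthetic."""
    def __init__(self, hidden=64):
        super().__init__()
        self.rnn = nn.LSTM(1, hidden, batch_first=True)
        self.out = nn.Linear(hidden, 1)

    def forward(self, x):                     # x: (B, T, 1)
        h, _ = self.rnn(x)
        return self.out(h[:, -1])             # real/fake score from last step

G, D = Generator(), Discriminator()
z = torch.randn(4, 120, 16)                   # 4 noise sequences, 120 steps each
y = torch.tensor([1, 1, 1, 1])                # request minority-class samples
fake = G(z, y)                                # (4, 120, 1)
score = D(fake)                               # (4, 1)
```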
Life-threatening ventricular arrhythmias (VA) are the leading cause of sudden cardiac death (SCD), which is the most significant cause of natural death in the US. The implantable cardioverter defibrillator (ICD) is a small device implanted in patients at high risk of SCD as a preventive treatment. The ICD continuously monitors the intracardiac rhythm and delivers a shock when it detects life-threatening VA. Traditional methods detect VA by setting criteria on the detected rhythm. However, those methods suffer from a high inappropriate-shock rate and require regular follow-up to optimize the criteria parameters for each ICD recipient. To address these challenges, we propose a personalized computing framework for deep learning-based VA detection on medical IoT systems. The system consists of intracardiac and surface rhythm monitors and a cloud platform for data uploading, diagnosis, and CNN model personalization. We equip the system with real-time inference on both the intracardiac and surface rhythm monitors. To improve detection accuracy, we enable the monitors to detect VA collaboratively through a proposed cooperative inference scheme. We also introduce per-patient CNN personalization, built on the computing framework, to tackle the problem of unlabeled and limited rhythm data. Compared with the traditional detection algorithm, the proposed method achieves comparable accuracy in VA rhythm detection and a 6.6% reduction in the inappropriate-shock rate, while the average inference latency is kept at 71 ms.
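As a sketch of what cooperative inference between the two monitors could look like, the function below fuses the class probabilities of the intracardiac and surface classifiers, weighting each by its confidence. This fusion rule and the decision threshold are illustrative assumptions, not the paper's exact scheme.

```python
import torch
import torch.nn.functional as F

def cooperative_inference(intracardiac_logits, surface_logits, threshold=0.5):
    """Fuse two monitors' predictions, weighting each by its confidence
    (assumed fusion rule; class 1 = life-threatening VA)."""
    p_ic = F.softmax(intracardiac_logits, dim=-1)
    p_sf = F.softmax(surface_logits, dim=-1)
    w_ic = p_ic.max(dim=-1, keepdim=True).values   # confidence of each monitor
    w_sf = p_sf.max(dim=-1, keepdim=True).values
    fused = (w_ic * p_ic + w_sf * p_sf) / (w_ic + w_sf)
    return fused[..., 1] > threshold               # True where VA is detected

ic = torch.tensor([[0.2, 2.1]])                    # intracardiac model logits
sf = torch.tensor([[0.5, 0.4]])                    # surface model logits
print(cooperative_inference(ic, sf))               # boolean VA decision
```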
Activity recognition is the ability to identify and recognize the actions or goals of an agent. The agent can be any object or entity that performs actions directed toward end goals, whether a single agent acting alone or a group of agents acting together or interacting. Human activity recognition has gained popularity due to its demand in many practical applications such as entertainment, healthcare, simulations, and surveillance systems. Vision-based activity recognition is advantageous because it requires no human intervention or physical contact with the subject. Moreover, networked sets of cameras can track and recognize the activities of an agent. Traditional applications for tracking or recognizing human activities relied on wearable devices, which require physical contact with the person. To overcome this limitation, a vision-based activity recognition system can be used, comprising a camera that records the video and a processor that performs the recognition. The work is implemented in two stages. In the first stage, an approach to activity recognition is proposed using background subtraction of images followed by 3D-Convolutional Neural Networks, and the impact of applying background subtraction prior to the 3D-Convolutional Neural Network is reported. In the second stage, the work is extended and implemented on a Raspberry Pi, which records a video stream and then recognizes the activity in the video. Thus, a proof of concept for activity recognition on a small IoT-based device is provided, which can enhance the system and extend its applications through increased portability, networking, and other device capabilities.
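The first-stage pipeline can be sketched directly: OpenCV's MOG2 background subtractor produces per-frame foreground masks, which are stacked into a clip and passed to a small 3D-CNN. The clip length, resolution, layer sizes, and class count below are illustrative assumptions.

```python
import cv2
import numpy as np
import torch
import torch.nn as nn

subtractor = cv2.createBackgroundSubtractorMOG2()

def foreground_clip(frames):
    """frames: list of T grayscale uint8 arrays (H, W) -> (1, 1, T, H, W) tensor."""
    masks = [subtractor.apply(f) for f in frames]        # foreground mask per frame
    clip = torch.tensor(np.stack(masks), dtype=torch.float32) / 255.0
    return clip.unsqueeze(0).unsqueeze(0)                # add batch and channel dims

# A deliberately tiny 3D-CNN over the mask clip (five activity classes assumed).
model = nn.Sequential(
    nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool3d(1), nn.Flatten(),
    nn.Linear(8, 5),
)

frames = [np.random.randint(0, 256, (64, 64), dtype=np.uint8) for _ in range(16)]
logits = model(foreground_clip(frames))                  # (1, 5) activity scores
```

Feeding masks instead of raw frames is what lets the 3D-CNN focus on motion rather than static background, which is the effect the first stage measures.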
