
A Deep Learning Approach to Predict Blood Pressure from PPG Signals

Posted by: Ali Tazarv
Publication date: 2021
Paper language: English





Blood Pressure (BP) is one of the four primary vital signs indicating the status of the body's life-sustaining functions. BP is difficult to monitor continuously with a sphygmomanometer (i.e., a blood pressure cuff), especially in everyday settings. However, other health signals that can be acquired easily and continuously, such as photoplethysmography (PPG), show some similarity to the aortic pressure waveform. Based on these similarities, several methods have been proposed in recent years to predict BP from the PPG signal. Building on these results, we propose an advanced personalized data-driven approach that uses a three-layer deep neural network to estimate BP from PPG signals. Unlike previous work, the proposed model analyzes the PPG signal in the time domain and automatically extracts the features most critical for this specific application, then uses a variant of recurrent neural networks, Long Short-Term Memory (LSTM), to map the extracted features to the BP value associated with that time window. Experimental results on two separate standard hospital datasets yield mean absolute errors and absolute-error standard deviations for systolic and diastolic BP that outperform prior work.
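As an illustration of the kind of model the abstract describes, below is a minimal PyTorch sketch of an LSTM that maps a raw time-domain PPG window to systolic and diastolic BP. It is not the authors' published configuration: the class name PPGToBP, the window length, the hidden size and the number of layers are illustrative assumptions. In a personalized setup, such a model would be trained per subject on PPG windows paired with reference BP readings.

```python
import torch
import torch.nn as nn

class PPGToBP(nn.Module):
    """Minimal sketch: LSTM over a raw time-domain PPG window -> (SBP, DBP).
    Layer sizes and window length are illustrative assumptions, not the
    authors' published configuration."""
    def __init__(self, hidden_size=64, num_layers=3):
        super().__init__()
        # The LSTM reads the PPG window one sample at a time (1 feature per step)
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden_size,
                            num_layers=num_layers, batch_first=True)
        self.head = nn.Linear(hidden_size, 2)  # systolic and diastolic BP

    def forward(self, ppg):            # ppg: (batch, window_len)
        x = ppg.unsqueeze(-1)          # -> (batch, window_len, 1)
        _, (h, _) = self.lstm(x)       # h: (num_layers, batch, hidden)
        return self.head(h[-1])        # BP estimate from the last layer's state

model = PPGToBP()
windows = torch.randn(8, 1000)         # e.g., 8 windows of 8 s of PPG at 125 Hz
sbp_dbp = model(windows)               # (8, 2): predicted [SBP, DBP] in mmHg
loss = nn.L1Loss()(sbp_dbp, torch.randn(8, 2))  # MAE-style training objective
```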




Read also

Cardiovascular diseases are among the most severe causes of mortality, taking a heavy toll of lives annually throughout the world. Continuous monitoring of blood pressure seems to be the most viable option, but this demands an invasive process, bringing about several layers of complexity. This motivates us to develop a method to predict the continuous arterial blood pressure (ABP) waveform through a non-invasive approach using photoplethysmogram (PPG) signals. In addition, we explore the advantage of deep learning, which frees us from relying only on ideally shaped PPG signals by making handcrafted feature computation unnecessary, a shortcoming of existing approaches. Thus, we present PPG2ABP, a deep-learning-based method that predicts the continuous ABP waveform from the input PPG signal with a mean absolute error of 4.604 mmHg, preserving the shape, magnitude and phase in unison. More remarkably, the DBP, MAP and SBP values computed from the predicted ABP waveform outperform existing works under several metrics, even though PPG2ABP is not explicitly trained to do so.
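For orientation only, the sketch below shows a generic 1D signal-to-signal network that maps a PPG window to an ABP waveform of the same length and is trained with an L1 (MAE) objective. PPG2ABP's actual architecture is more elaborate; the layer sizes and the class name PPGToABP here are assumptions.

```python
import torch
import torch.nn as nn

class PPGToABP(nn.Module):
    """Minimal 1D encoder-decoder sketch mapping a PPG window to an ABP
    waveform of the same length. Purely illustrative; PPG2ABP's actual
    architecture and training details are described in the paper."""
    def __init__(self, ch=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(1, ch, kernel_size=9, padding=4), nn.ReLU(),
            nn.Conv1d(ch, ch, kernel_size=9, padding=4), nn.ReLU(),
        )
        self.decoder = nn.Conv1d(ch, 1, kernel_size=9, padding=4)

    def forward(self, ppg):            # ppg: (batch, samples)
        x = ppg.unsqueeze(1)           # -> (batch, 1, samples)
        return self.decoder(self.encoder(x)).squeeze(1)  # predicted ABP

ppg = torch.randn(4, 1024)              # 4 hypothetical PPG windows
abp_hat = PPGToABP()(ppg)               # same-length ABP waveform estimate
mae = nn.L1Loss()(abp_hat, torch.randn(4, 1024))  # reported in mmHg in the paper
```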
Respiratory rate (RR) is a clinical sign representing ventilation. An abnormal change in RR is often the first sign of health deterioration as the body attempts to maintain oxygen delivery to its tissues. There has been growing interest in remote monitoring of RR in everyday settings, which has made wearable photoplethysmography (PPG) monitoring devices an attractive choice. PPG signals are useful sources for RR extraction because of the respiration-induced modulations present in them. Existing PPG-based RR estimation methods rely mainly on hand-crafted rules and manual parameter tuning. An end-to-end deep learning approach was recently proposed; however, despite its automatic nature, its performance is not ideal on real-world data. In this paper, we present an end-to-end and accurate pipeline for RR estimation that uses Cycle Generative Adversarial Networks (CycleGAN) to reconstruct respiratory signals from raw PPG signals. Our results demonstrate up to 2× higher RR estimation accuracy (mean absolute error of 1.9±0.3 using five-fold cross-validation) compared to the state-of-the-art on an identical publicly available dataset. These results suggest that CycleGAN can be a valuable method for RR estimation from raw PPG signals.
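The following is a compact, hypothetical sketch of the CycleGAN idea applied to PPG-to-respiration translation: two generators, two discriminators, and an adversarial plus cycle-consistency loss (only the PPG→respiration direction's generator losses are shown). The network depths, the loss weighting and the helper conv1d_net are assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

def conv1d_net(in_ch, out_ch):
    # Small 1D conv stack; real CycleGAN generators/discriminators are deeper.
    return nn.Sequential(
        nn.Conv1d(in_ch, 32, 9, padding=4), nn.ReLU(),
        nn.Conv1d(32, out_ch, 9, padding=4),
    )

# Generators: PPG -> respiration and respiration -> PPG
G_ppg2resp, G_resp2ppg = conv1d_net(1, 1), conv1d_net(1, 1)
# Discriminators score whether a signal looks like a real respiration/PPG trace
D_resp = nn.Sequential(conv1d_net(1, 1), nn.AdaptiveAvgPool1d(1))
D_ppg = nn.Sequential(conv1d_net(1, 1), nn.AdaptiveAvgPool1d(1))

ppg = torch.randn(4, 1, 1024)           # unpaired raw PPG windows (illustrative)
resp = torch.randn(4, 1, 1024)          # unpaired reference respiration windows

fake_resp = G_ppg2resp(ppg)
cycle_ppg = G_resp2ppg(fake_resp)       # reconstruct PPG from the fake respiration
adv_loss = ((D_resp(fake_resp) - 1) ** 2).mean()   # least-squares GAN term
cycle_loss = (cycle_ppg - ppg).abs().mean()        # cycle-consistency (L1)
loss_G = adv_loss + 10.0 * cycle_loss              # typical CycleGAN weighting
```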
Predicting clinical outcome is remarkably important but challenging. Research efforts have been devoted to seeking significant biomarkers associated with therapy response and/or patient survival. However, these biomarkers are generally costly and invasive, and possibly unsatisfactory for novel therapies. On the other hand, multi-modal, heterogeneous, unaligned temporal data are continuously generated in clinical practice. This paper aims at a unified deep learning approach to predict patient prognosis and therapy response with easily accessible data, e.g., radiographic, laboratory and clinical information. Prior works focus on modeling a single data modality or ignore temporal changes. Importantly, clinical time series are asynchronous in practice, i.e., recorded at irregular intervals. In this study, we formalize prognosis modeling as a multi-modal asynchronous time series classification task and propose a MIA-Prognosis framework with Measurement, Intervention and Assessment (MIA) information to predict therapy response, in which a Simple Temporal Attention (SimTA) module is developed to process the asynchronous time series. Experiments on a synthetic dataset validate the superiority of SimTA over standard RNN-based approaches. Furthermore, we evaluate the proposed method on an in-house, retrospective dataset of real-world non-small cell lung cancer patients under anti-PD-1 immunotherapy. The proposed method achieves promising performance in predicting immunotherapy response. Notably, our predictive model can further stratify low-risk and high-risk patients in terms of long-term survival.
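Since the abstract does not spell out SimTA, the snippet below is only a generic illustration of attention over asynchronous measurements, where a learned score is discounted by the elapsed time to the prediction point. It should not be read as the published SimTA module; the class TimeDecayAttention and its decay parameterization are assumptions.

```python
import torch
import torch.nn as nn

class TimeDecayAttention(nn.Module):
    """Illustrative attention over an asynchronous series: each past
    observation is weighted by a learned score discounted by the time gap
    to the prediction point. Generic sketch, not the published SimTA."""
    def __init__(self, dim):
        super().__init__()
        self.score = nn.Linear(dim, 1)
        self.decay = nn.Parameter(torch.tensor(0.1))   # learnable decay rate

    def forward(self, feats, gaps):
        # feats: (batch, steps, dim); gaps: (batch, steps) time since each visit
        logits = self.score(feats).squeeze(-1) - self.decay.abs() * gaps
        weights = torch.softmax(logits, dim=-1)        # older visits count less
        return (weights.unsqueeze(-1) * feats).sum(dim=1)   # pooled representation

feats = torch.randn(2, 5, 16)           # 5 irregularly timed measurements, 16-dim each
gaps = torch.tensor([[30., 21., 14., 7., 0.], [60., 40., 10., 3., 0.]])
summary = TimeDecayAttention(16)(feats, gaps)   # would feed a response classifier
```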
Kristian Snyder (2020)
Occupationally induced back pain is a leading cause of reduced productivity in industry. Detecting when a worker is lifting incorrectly and at increased risk of back injury offers significant potential benefits, including improved quality of life for the worker due to lower rates of back injury, and fewer workers' compensation claims and less missed time for the employer. However, recognizing lifting risk is challenging due to typically small datasets and subtle underlying features in accelerometer and gyroscope data. This paper proposes a novel method to classify a lifting dataset using a 2D convolutional neural network (CNN) with no manual feature extraction; the dataset consisted of 10 subjects lifting at various relative distances from the body, for 720 total trials. The proposed deep CNN achieved greater accuracy (90.6%) than an alternative CNN and a multilayer perceptron (MLP). A deep CNN could be adapted to classify many other activities that traditionally pose greater challenges in industrial environments due to their size and complexity.
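A minimal sketch of a 2D CNN over an inertial window arranged as a sensor-axes-by-time grid (3 accelerometer + 3 gyroscope axes) is shown below; the filter counts, window length and class name LiftRiskCNN are assumptions rather than the architecture evaluated in the paper.

```python
import torch
import torch.nn as nn

class LiftRiskCNN(nn.Module):
    """Minimal 2D-CNN sketch over an inertial window treated as a
    (sensor axes x time) image: 6 axes (3 accel + 3 gyro) by T samples.
    Filter counts and window length are assumptions, not the paper's."""
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=(3, 5), padding=(1, 2)), nn.ReLU(),
            nn.MaxPool2d((1, 2)),
            nn.Conv2d(16, 32, kernel_size=(3, 5), padding=(1, 2)), nn.ReLU(),
            nn.AdaptiveAvgPool2d((1, 1)),
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x):              # x: (batch, 1, 6 axes, time)
        return self.classifier(self.features(x).flatten(1))

batch = torch.randn(8, 1, 6, 128)      # 8 lifting trials, 6 axes, 128 samples each
logits = LiftRiskCNN()(batch)          # risk-class scores per trial
```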
Depression and post-traumatic stress disorder (PTSD) are psychiatric conditions commonly associated with experiencing a traumatic event. Estimating mental health status through non-invasive techniques such as activity-based algorithms can help identify successful early interventions. In this work, we used locomotor activity captured from 1113 individuals who wore a research-grade smartwatch post-trauma. A convolutional variational autoencoder (VAE) architecture was used for unsupervised feature extraction from four weeks of actigraphy data. Using the VAE latent variables and the participants' pre-trauma physical health status as features, a logistic regression classifier achieved an area under the receiver operating characteristic curve (AUC) of 0.64 for estimating mental health outcomes. The results indicate that the VAE model is a promising approach to actigraphy data analysis for mental health outcomes in long-term studies.
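As a rough illustration of the two-stage pipeline described (unsupervised VAE feature extraction followed by logistic regression), the sketch below uses the encoder half of a convolutional VAE to embed a four-week actigraphy trace and then fits a scikit-learn classifier on the latent features plus a pre-trauma health score. All sizes, the placeholder data and the class ActigraphyEncoder are assumptions; in the study the encoder would be trained with the usual VAE reconstruction-plus-KL objective.

```python
import numpy as np
import torch
import torch.nn as nn
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

class ActigraphyEncoder(nn.Module):
    """Encoder half of a convolutional VAE over a minute-level actigraphy
    trace; only the latent mean is used as the downstream feature."""
    def __init__(self, latent_dim=8):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=15, stride=4), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=15, stride=4), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.mu = nn.Linear(32, latent_dim)

    def forward(self, x):                         # x: (batch, 1, minutes)
        return self.mu(self.conv(x).flatten(1))

encoder = ActigraphyEncoder()
actigraphy = torch.randn(100, 1, 4 * 7 * 24 * 60)   # 100 participants, 4 weeks
with torch.no_grad():
    latents = encoder(actigraphy).numpy()

# Combine latent features with a (hypothetical) pre-trauma health score and
# fit a logistic regression classifier for the mental-health outcome label.
pre_trauma = np.random.rand(100, 1)
X = np.hstack([latents, pre_trauma])
y = np.repeat([0, 1], 50)                         # placeholder outcome labels
clf = LogisticRegression(max_iter=1000).fit(X, y)
print(roc_auc_score(y, clf.predict_proba(X)[:, 1]))
```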
