
PPG2ABP: Translating Photoplethysmogram (PPG) Signals to Arterial Blood Pressure (ABP) Waveforms using Fully Convolutional Neural Networks

 Added by Nabil Ibtehaz
 Publication date: 2020
 Language: English





Cardiovascular diseases are among the most severe causes of mortality, taking a heavy toll of lives annually throughout the world. Continuous monitoring of blood pressure seems to be the most viable option, but this demands an invasive process, bringing about several layers of complexity. This motivates us to develop a method to predict the continuous arterial blood pressure (ABP) waveform through a non-invasive approach using photoplethysmogram (PPG) signals. In addition, we explore the advantage of deep learning: by making handcrafted feature computation irrelevant, it frees us from relying only on ideally shaped PPG signals, a shortcoming of the existing approaches. Thus, we present PPG2ABP, a deep learning based method that predicts the continuous ABP waveform from the input PPG signal with a mean absolute error of 4.604 mmHg, preserving shape, magnitude and phase in unison. The more striking success of PPG2ABP, however, is that the values of DBP, MAP and SBP computed from the predicted ABP waveform outperform existing works under several metrics, even though PPG2ABP is not explicitly trained to do so.
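
The sketch below (PyTorch, not the authors' released code) illustrates the core idea: a 1-D fully convolutional encoder-decoder maps a fixed-length PPG window to an ABP waveform of the same length, from which SBP, DBP and MAP can then be read off. The window length, layer sizes and the BP-index heuristics are illustrative assumptions.

```python
# A minimal sketch of a PPG-to-ABP translator, assuming fixed-length
# windows (e.g. 1024 samples) and mmHg-scaled targets.
import torch
import torch.nn as nn

class PPG2ABPSketch(nn.Module):
    def __init__(self, channels=32):
        super().__init__()
        # Encoder: strided convolutions downsample the PPG signal.
        self.encoder = nn.Sequential(
            nn.Conv1d(1, channels, kernel_size=9, stride=2, padding=4),
            nn.ReLU(),
            nn.Conv1d(channels, channels * 2, kernel_size=9, stride=2, padding=4),
            nn.ReLU(),
        )
        # Decoder: transposed convolutions restore the original length,
        # emitting one ABP sample per input PPG sample.
        self.decoder = nn.Sequential(
            nn.ConvTranspose1d(channels * 2, channels, kernel_size=8, stride=2, padding=3),
            nn.ReLU(),
            nn.ConvTranspose1d(channels, 1, kernel_size=8, stride=2, padding=3),
        )

    def forward(self, ppg):                      # ppg: (batch, 1, length)
        return self.decoder(self.encoder(ppg))   # predicted ABP, same shape

def bp_indices(abp_wave):
    """Illustrative derivation of SBP, DBP and MAP from a predicted ABP window."""
    sbp = abp_wave.max(dim=-1).values            # systolic = waveform maximum
    dbp = abp_wave.min(dim=-1).values            # diastolic = waveform minimum
    map_ = abp_wave.mean(dim=-1)                 # MAP approximated by the mean pressure
    return sbp, dbp, map_

if __name__ == "__main__":
    model = PPG2ABPSketch()
    ppg = torch.randn(4, 1, 1024)                # dummy PPG windows
    abp = model(ppg)                             # predicted ABP waveforms
    sbp, dbp, map_ = bp_indices(abp.squeeze(1))
    print(abp.shape, sbp.shape)                  # torch.Size([4, 1, 1024]) torch.Size([4])
```
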



Related research

Blood Pressure (BP) is one of the four primary vital signs indicating the status of the body's vital (life-sustaining) functions. BP is difficult to monitor continuously with a sphygmomanometer (i.e. a blood pressure cuff), especially in everyday settings. However, other health signals that can be acquired easily and continuously, such as photoplethysmography (PPG), show some similarities with the aortic pressure waveform. Based on these similarities, several methods have been proposed in recent years to predict BP from the PPG signal. Building on these results, we propose an advanced personalized data-driven approach that uses a three-layer deep neural network to estimate BP from PPG signals. Different from previous work, the proposed model analyzes the PPG signal in the time domain and automatically extracts the most critical features for this specific application, then uses a variant of recurrent neural networks called Long Short-Term Memory (LSTM) to map the extracted features to the BP value associated with that time window. Experimental results on two separate standard hospital datasets yielded mean absolute errors and absolute-error standard deviations for systolic and diastolic BP values that outperform prior works.
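
A minimal PyTorch sketch of the idea described above: learned features from a raw PPG time window are fed to an LSTM whose final state is regressed to systolic and diastolic BP. The convolutional front end and the layer sizes are illustrative assumptions, not the authors' exact three-layer network.

```python
# Sketch of window-wise BP regression from PPG with an LSTM head.
import torch
import torch.nn as nn

class PPGtoBP(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        # 1-D convolution extracts local features from the raw PPG window.
        self.features = nn.Conv1d(1, 16, kernel_size=7, padding=3)
        # LSTM summarizes the feature sequence for the whole window.
        self.lstm = nn.LSTM(input_size=16, hidden_size=hidden, batch_first=True)
        # Regression head outputs systolic and diastolic BP for the window.
        self.head = nn.Linear(hidden, 2)

    def forward(self, ppg):                        # ppg: (batch, 1, length)
        feats = torch.relu(self.features(ppg))     # (batch, 16, length)
        seq = feats.transpose(1, 2)                # (batch, length, 16)
        _, (h_n, _) = self.lstm(seq)               # final hidden state
        return self.head(h_n[-1])                  # (batch, 2): [SBP, DBP]

if __name__ == "__main__":
    model = PPGtoBP()
    print(model(torch.randn(8, 1, 256)).shape)     # -> torch.Size([8, 2])
```
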
The accurate measurement of blood pressure (BP) is an important prerequisite for the reliable diagnosis and efficient management of hypertension and other medical conditions. Office Blood Pressure Measurement (OBP) is a technique performed in-office with the sphygmomanometer, while Ambulatory Blood Pressure Monitoring (ABPM) measures blood pressure over 24 hours. BP fluctuations also depend on other factors such as physical activity, temperature, mood, age, sex, pathologies and hormonal activity, which may intrinsically influence the differences between OBP and ABPM. The aim of this study is to examine the possible influence of sex on the discrepancies between OBP and ABPM in 872 subjects with known or suspected hypertension. A significant correlation was observed between OBP and mean ABPM values calculated during the day, during the night and over 24h (ABPMday, ABPMnight, ABPM24h) in both groups (p<0.0001). The main finding of this study is that no difference between sexes was observed in the relation between OBP and mean ABPM values, except between systolic OBP and systolic ABPM during the night. In addition, this study showed a moderate correlation between BPs obtained with the two approaches, with a great dispersion around the regression line, which suggests that the two approaches cannot be used interchangeably.
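
For readers unfamiliar with the analysis, the snippet below shows how an OBP-vs-ABPM correlation and regression line of the kind reported here can be computed with SciPy; the arrays are synthetic placeholders, not study data.

```python
# Illustrative only: Pearson correlation and least-squares regression
# between office and 24h ambulatory systolic BP (synthetic values).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
obp_sys = rng.normal(135, 15, size=200)                     # office systolic BP (mmHg)
abpm24_sys = obp_sys * 0.6 + rng.normal(50, 10, size=200)   # 24h ambulatory mean (mmHg)

r, p = stats.pearsonr(obp_sys, abpm24_sys)                  # strength of association
slope, intercept, *_ = stats.linregress(obp_sys, abpm24_sys)
print(f"r = {r:.2f}, p = {p:.4f}, regression: ABPM = {slope:.2f}*OBP + {intercept:.1f}")
```
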
Malaria is a life-threatening disease inflicted by the bite of the female Anopheles mosquito and is considered endemic in many parts of the world. This article focuses on improving malaria detection from patches segmented from microscopic images of red blood cell smears by introducing a deep convolutional neural network. Compared to traditional methods that rely on tedious hand-engineered feature extraction, the proposed method uses deep learning in an end-to-end arrangement that performs both feature extraction and classification directly from the raw segmented patches of the red blood smears. The dataset used in this study is the NIH Malaria Dataset from the National Institutes of Health. Accuracy and loss, along with 5-fold cross validation, were used to compare and select the best performing architecture. To maximize performance, existing standard pre-processing techniques from the literature were also evaluated. In addition, several other, more complex architectures were implemented and tested to pick the best performing model. A holdout test was also conducted to verify how well the proposed model generalizes to unseen data. Our best model achieves an accuracy of approximately 97.77%.
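
A minimal PyTorch sketch of such an end-to-end patch classifier, assuming 64x64 RGB cell patches and two classes (parasitized vs. uninfected); the architecture and hyperparameters are illustrative, not the paper's best model.

```python
# Sketch: feature extraction and classification are learned jointly
# from raw cell patches, replacing hand-engineered features.
import torch
import torch.nn as nn

class MalariaPatchCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 64 -> 32
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 32 -> 16
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 2),      # logits: parasitized / uninfected
        )

    def forward(self, x):                    # x: (batch, 3, 64, 64)
        return self.net(x)

if __name__ == "__main__":
    model = MalariaPatchCNN()
    print(model(torch.randn(4, 3, 64, 64)).shape)   # -> torch.Size([4, 2])
```
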
Recently, the end-to-end approach that learns hierarchical representations from raw data using deep convolutional neural networks has been successfully explored in the image, text and speech domains. This approach has also been applied to musical signals, but has not been fully explored yet. To this end, we propose sample-level deep convolutional neural networks that learn representations from very small grains of waveforms (e.g. 2 or 3 samples), going beyond typical frame-level input representations. Our experiments show how deep architectures with sample-level filters improve accuracy in music auto-tagging and provide results comparable to previous state-of-the-art performance on the MagnaTagATune dataset and the Million Song Dataset. In addition, we visualize the filters learned in each layer of a sample-level DCNN to identify hierarchically learned features and show that they are sensitive to log-scaled frequency across layers, much like the mel-frequency spectrogram that is widely used in music classification systems.
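
The following PyTorch sketch shows the sample-level idea: every convolution uses a very small filter (3 samples) with a matching stride, so the network builds hierarchical features directly from the raw waveform. Depth, channel counts and the tag vocabulary size are illustrative assumptions, not the paper's exact configuration.

```python
# Sketch of a sample-level 1-D CNN for music auto-tagging.
import torch
import torch.nn as nn

def sample_block(in_ch, out_ch):
    # Filter length 3 with stride 3: each layer aggregates 3 adjacent frames.
    return nn.Sequential(nn.Conv1d(in_ch, out_ch, kernel_size=3, stride=3), nn.ReLU())

class SampleLevelCNN(nn.Module):
    def __init__(self, n_tags=50):
        super().__init__()
        self.blocks = nn.Sequential(
            sample_block(1, 32),
            sample_block(32, 64),
            sample_block(64, 128),
            sample_block(128, 128),
        )
        self.pool = nn.AdaptiveAvgPool1d(1)      # collapse the time axis
        self.head = nn.Linear(128, n_tags)       # multi-label tag logits

    def forward(self, wave):                     # wave: (batch, 1, samples)
        h = self.pool(self.blocks(wave)).squeeze(-1)
        return self.head(h)

if __name__ == "__main__":
    model = SampleLevelCNN()
    print(model(torch.randn(2, 1, 3 ** 8)).shape)   # -> torch.Size([2, 50])
```
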
Graph convolutional neural networks (GCNNs) are a powerful extension of deep learning techniques to graph-structured data problems. We empirically evaluate several pooling methods for GCNNs, and combinations of those graph pooling methods with three different architectures: GCN, TAGCN, and GraphSAGE. We confirm that graph pooling, especially DiffPool, improves classification accuracy on popular graph classification datasets and find that, on average, TAGCN achieves comparable or better accuracy than GCN and GraphSAGE, particularly for datasets with larger and sparser graph structures.
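
As a point of reference, the sketch below shows the simplest form of graph pooling for graph classification in plain PyTorch: one message-passing step followed by a global mean over nodes. DiffPool and the GCN, TAGCN and GraphSAGE variants compared in the paper are not reproduced here; sizes are illustrative.

```python
# Sketch of graph classification with global mean pooling over node features.
import torch
import torch.nn as nn

class SimpleGraphClassifier(nn.Module):
    def __init__(self, in_feats=8, hidden=32, n_classes=2):
        super().__init__()
        self.lin1 = nn.Linear(in_feats, hidden)
        self.lin2 = nn.Linear(hidden, n_classes)

    def forward(self, adj, x):
        # One graph-convolution-like step: average neighbour features using
        # the self-loop-augmented adjacency matrix, then transform.
        adj_hat = adj + torch.eye(adj.size(0))
        deg = adj_hat.sum(dim=1, keepdim=True)
        h = torch.relu(self.lin1(adj_hat @ x / deg))
        # Graph pooling: mean over nodes yields one vector for the whole graph.
        return self.lin2(h.mean(dim=0))

if __name__ == "__main__":
    adj = (torch.rand(10, 10) > 0.7).float()
    adj = ((adj + adj.t()) > 0).float()            # symmetric random graph
    x = torch.randn(10, 8)                         # node features
    print(SimpleGraphClassifier()(adj, x).shape)   # -> torch.Size([2])
```
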
