
Heart Sound Segmentation using Bidirectional LSTMs with Attention

Added by Tharindu Fernando
Publication date: 2020
Language: English





This paper proposes a novel framework for the segmentation of phonocardiogram (PCG) signals into heart states, exploiting the temporal evolution of the PCG as well as the salient information it provides for the detection of the heart state. We propose the use of recurrent neural networks and exploit recent advancements in attention-based learning to segment the PCG signal. This allows the network to identify the most salient aspects of the signal and disregard uninformative regions. The proposed method attains state-of-the-art performance on multiple benchmarks, including both human and animal heart recordings. Furthermore, we empirically analyse different feature combinations, including envelope features, wavelets and Mel Frequency Cepstral Coefficients (MFCCs), and provide quantitative measurements that explore the importance of different features in the proposed approach. We demonstrate that a recurrent neural network coupled with attention mechanisms can effectively learn from irregular and noisy PCG recordings. Our analysis of different feature combinations shows that MFCC features and their derivatives offer the best performance compared to classical wavelet and envelope features. Heart sound segmentation is a crucial pre-processing step for many diagnostic applications. The proposed method provides a cost-effective alternative to labour-intensive manual segmentation, and yields a more accurate segmentation than existing methods. As such, it can improve the performance of further analysis, including the detection of murmurs and ejection clicks. The proposed method is also applicable to the detection and segmentation of other one-dimensional biomedical signals.
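The abstract describes attention over recurrent hidden states: each frame of the encoded PCG receives a salience weight, and uninformative frames are down-weighted. A minimal numpy sketch of additive attention pooling over a sequence of encoder states — the parameters `w`, `b`, `u` and their shapes are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def softmax(s):
    e = np.exp(s - s.max())
    return e / e.sum()

def additive_attention(H, w, b, u):
    """H: (T, d) hidden states from a (bi)directional recurrent encoder.
    Returns a context vector and the per-frame attention weights."""
    scores = np.tanh(H @ w + b) @ u      # (T,) salience score per frame
    alpha = softmax(scores)              # non-negative weights summing to 1
    context = alpha @ H                  # (d,) attention-weighted summary
    return context, alpha

rng = np.random.default_rng(0)
T, d = 12, 4                             # 12 PCG frames, 4-dim states
H = rng.normal(size=(T, d))
context, alpha = additive_attention(H, rng.normal(size=(d, d)),
                                    np.zeros(d), rng.normal(size=d))
```

In the full model the weights would be trained end-to-end with the bidirectional LSTM; here they are random purely to show the shapes involved.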



Related research

Cardiac auscultation is the most practiced non-invasive and cost-effective procedure for the early diagnosis of heart diseases. While machine learning based systems can aid in automatically screening patients, the robustness of these systems is affected by numerous factors including the stethoscope/sensor, environment, and data collection protocol. This paper studies the adverse effect of domain variability on heart sound abnormality detection and develops strategies to address this problem. Methods: We propose a novel Convolutional Neural Network (CNN) layer, consisting of time-convolutional (tConv) units, that emulate Finite Impulse Response (FIR) filters. The filter coefficients can be updated via backpropagation, and the units can be stacked in the front-end of the network as a learnable filterbank. Results: On publicly available multi-domain datasets, the proposed method surpasses the top-scoring systems found in the literature for heart sound abnormality detection (a binary classification task). We utilised sensitivity, specificity, F1 score and MAcc (the average of sensitivity and specificity) as performance metrics. Our systems achieved relative improvements of up to 11.84% in terms of MAcc, compared to state-of-the-art methods. Conclusion: The results demonstrate the effectiveness of the proposed learnable filterbank CNN architecture in achieving robustness towards sensor/domain variability in PCG signals. Significance: The proposed methods pave the way for deploying automated cardiac screening systems in diversified and underserved communities.
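A tConv unit's forward pass is an ordinary FIR filter, i.e. a 1-D convolution whose taps are the learnable coefficients. A toy numpy illustration of the forward pass only, with hand-picked kernels standing in for what backpropagation would learn:

```python
import numpy as np

def tconv_forward(x, kernels):
    """Apply a bank of FIR filters (one per tConv unit) to a 1-D signal.
    Returns an array of shape (n_filters, len(x))."""
    return np.stack([np.convolve(x, h, mode="same") for h in kernels])

x = np.ones(100)                        # stand-in for a PCG frame
bank = [np.ones(5) / 5,                 # moving-average (low-pass) taps
        np.array([1.0, -1.0])]          # first-difference (high-pass) taps
y = tconv_forward(x, bank)
```

On a constant input the low-pass unit passes the signal through while the high-pass unit suppresses it (away from the edges) — the same frequency-selective behaviour a learned front-end filterbank would acquire from data.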
Cardiovascular (CV) diseases are the leading cause of death in the world, and auscultation is typically an essential part of a cardiovascular examination. The ability to diagnose a patient based on their heart sounds is a rather difficult skill to master. Thus, many approaches for automated heart auscultation have been explored. However, most of the previously proposed methods involve a segmentation step, the performance of which drops significantly for high pulse rates or noisy signals. In this work, we propose a novel segmentation-free heart sound classification method. Specifically, we apply the discrete wavelet transform to denoise the signal, followed by feature extraction and feature reduction. Then, Support Vector Machines and Deep Neural Networks are utilised for classification. On the PASCAL heart sound dataset, our approach showed superior performance compared to others, achieving 81% and 96% precision on the normal and murmur classes, respectively. In addition, for the first time, the data were further explored under a user-independent setting, where the proposed method achieved 92% and 86% precision on normal and murmur, demonstrating the potential of enabling automatic murmur detection for practical use.
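The pipeline above begins with wavelet denoising. The abstract does not state which wavelet or threshold rule is used, so as an assumed minimal stand-in, here is a one-level Haar transform with soft thresholding of the detail coefficients:

```python
import numpy as np

def haar_denoise(x, thresh):
    """One-level Haar DWT, soft-threshold the detail coefficients,
    then invert the transform. Expects an even-length float signal."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2)                 # approximation band
    d = (x[0::2] - x[1::2]) / np.sqrt(2)                 # detail band
    d = np.sign(d) * np.maximum(np.abs(d) - thresh, 0.0) # soft threshold
    y = np.empty_like(x)
    y[0::2] = (a + d) / np.sqrt(2)                       # inverse transform
    y[1::2] = (a - d) / np.sqrt(2)
    return y

smooth = haar_denoise(np.ones(8), 0.1)                     # no detail: unchanged
flattened = haar_denoise(np.array([1.0, -1.0] * 4), 10.0)  # all detail removed
```

A practical system would use a library transform (e.g. PyWavelets) with several decomposition levels, but the thresholding idea is the same: small detail coefficients are treated as noise and shrunk to zero.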
Cardiovascular diseases are the leading cause of death and severely threaten human health. On the one hand, there have been dramatically increasing demands from both clinical practice and smart home applications for monitoring the heart status of subjects suffering from chronic cardiovascular diseases. On the other hand, experienced physicians who can perform an efficient auscultation remain in short supply. Automatic heart sound classification leveraging the power of advanced signal processing and machine learning technologies has shown encouraging results. Nevertheless, human hand-crafted features are expensive and time-consuming. To this end, we propose a novel deep representation learning method with an attention mechanism for heart sound classification. In this paradigm, high-level representations are learnt automatically from the recorded heart sound data. In particular, a global attention pooling layer improves the performance of the learnt representations by estimating the contribution of each unit in the feature maps. The Heart Sounds Shenzhen (HSS) corpus (170 subjects involved) is used to validate the proposed method. Experimental results show that our approach can achieve an unweighted average recall of 51.2% for classifying three categories of heart sounds, i.e., normal, mild, and moderate/severe, annotated by cardiologists with the help of echocardiography.
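Unweighted average recall (UAR), the metric reported above, averages per-class recall so that a rare class (e.g. moderate/severe) counts as much as the majority normal class. A quick sketch with toy labels:

```python
import numpy as np

def unweighted_average_recall(y_true, y_pred):
    """Mean of per-class recalls; every class contributes equally,
    regardless of how many samples it has."""
    classes = np.unique(y_true)
    recalls = [(y_pred[y_true == c] == c).mean() for c in classes]
    return float(np.mean(recalls))

y_true = np.array([0, 0, 1, 1, 2, 2])   # toy: normal / mild / moderate-severe
y_pred = np.array([0, 0, 1, 0, 2, 0])
uar = unweighted_average_recall(y_true, y_pred)   # (1.0 + 0.5 + 0.5) / 3
```

For three classes, chance level is roughly 33.3%, which is why a UAR of 51.2% on this difficult corpus is reported as an encouraging result.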
The diagnosis of heart diseases is a difficult task generally addressed by an appropriate examination of patients' clinical data. Recently, the use of heart rate variability (HRV) analysis, as well as some machine learning algorithms, has proved to be a valuable support in the diagnosis process. However, until now, ischemic heart disease (IHD) has been diagnosed on the basis of Artificial Neural Networks (ANNs) applied only to signs, symptoms, sequential ECG, and coronary angiography (an invasive tool), whereas it could probably be identified non-invasively using parameters extracted from HRV, a signal easily obtained from the ECG. In this study, 18 non-invasive features (age, gender, left ventricular ejection fraction and 15 obtained from HRV) of 243 subjects (156 normal subjects and 87 IHD patients) were used to train and validate a series of ANNs differing in the number of input and hidden nodes. The best result was obtained using 7 input parameters and 7 hidden nodes, with an accuracy of 98.9% and 82% for the training and validation datasets, respectively.
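The best-performing network above has 7 inputs and 7 hidden nodes. Its forward pass is only a few lines of numpy; the weights here are random and the activation functions are an assumption (the study does not specify them), so this shows the structure only, not the trained classifier:

```python
import numpy as np

rng = np.random.default_rng(1)
W1, b1 = rng.normal(size=(7, 7)), np.zeros(7)   # 7 inputs -> 7 hidden nodes
W2, b2 = rng.normal(size=7), 0.0                # hidden -> IHD score

def predict(x):
    """Forward pass of a 7-7-1 MLP; output is a probability-like score."""
    h = np.tanh(x @ W1 + b1)                     # hidden layer
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))  # sigmoid output in (0, 1)

p = predict(rng.normal(size=7))                  # 7 HRV-derived features
```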
Left ventricular assist devices (LVADs) are surgically implanted mechanical pumps that improve survival rates for individuals with advanced heart failure. While life-saving, LVAD therapy is also associated with high morbidity, which can be partially attributed to the difficulties in identifying an LVAD complication before an adverse event occurs. Methods that are currently used to monitor for complications in LVAD-supported individuals require frequent clinical assessments at specialized LVAD centers. Remote analysis of digitally recorded precordial sounds has the potential to provide an inexpensive point-of-care diagnostic tool to assess both device function and the degree of cardiac support in LVAD recipients, facilitating real-time, remote monitoring for early detection of complications. To our knowledge, prior studies of precordial sounds in LVAD-supported individuals have analyzed LVAD noise rather than intrinsic heart sounds, due to a focus on detecting pump complications, and perhaps the obscuring of heart sounds by LVAD noise. In this letter, we describe an adaptive filtering method to remove sounds generated by the LVAD, making it possible to automatically isolate and analyze underlying heart sounds. We present preliminary results describing acoustic signatures of heart sounds extracted from in vivo data obtained from LVAD-supported individuals. These findings are significant as they provide proof-of-concept evidence for further exploration of heart sound analysis in LVAD-supported individuals to identify cardiac abnormalities and changes in LVAD support.
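The letter does not spell out its adaptive filter, but adaptive noise cancellation is classically done with a least-mean-squares (LMS) filter driven by a reference signal correlated with the interference. As an illustrative sketch (the pump noise is simulated as a pure tone, and the tap count and step size are assumptions):

```python
import numpy as np

def lms_cancel(primary, reference, n_taps=8, mu=0.01):
    """Subtract an adaptively FIR-filtered reference (LVAD noise proxy)
    from the primary recording; the error signal is the cleaned output."""
    w = np.zeros(n_taps)
    out = np.zeros_like(primary)
    for n in range(n_taps - 1, len(primary)):
        x = reference[n - n_taps + 1:n + 1][::-1]  # newest sample first
        e = primary[n] - w @ x                     # cancellation error
        w += 2 * mu * e * x                        # LMS weight update
        out[n] = e
    return out

t = np.arange(4000)
noise = np.sin(0.3 * t)                 # stand-in for periodic pump noise
cleaned = lms_cancel(noise.copy(), noise)
```

With a perfectly correlated reference the filter converges and the residual tends to zero; in practice the error signal would retain the heart sounds (uncorrelated with the reference) while the LVAD component is cancelled.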
