Echocardiography (echo) is the earliest and primary tool for identifying regional wall motion abnormalities (RWMA) in order to diagnose myocardial infarction (MI), commonly known as a heart attack. This paper proposes a novel approach, Active Polynomials, which can accurately and robustly estimate the global motion of the left ventricular (LV) wall from any echo recording. The proposed algorithm quantifies the true wall motion occurring in the LV wall segments so as to assist cardiologists in diagnosing early signs of an acute MI. It further provides medical experts with an enhanced visualization of echo images through color-coded segments along with their maximum motion displacement plots, helping them better assess wall motion and LV ejection fraction (LVEF). The outputs of the method can also help echo technicians assess and improve the quality of echocardiogram recordings. A major contribution of this study is the first public echo database collection, compiled by physicians at the Hamad Medical Corporation Hospital in Qatar. The so-called HMC-QU database will serve as a benchmark for forthcoming studies in this area. The results over the HMC-QU dataset show that the proposed approach achieves high accuracy, sensitivity, and precision in MI detection even when the echo quality is quite poor and the temporal resolution is low.
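For illustration only, here is a minimal sketch (not the authors' Active Polynomials implementation) of how per-segment maximum displacement could be summarized, assuming LV boundary points have already been tracked over one cardiac cycle and assigned to wall segments; `max_segment_displacement` is a hypothetical helper.

```python
import numpy as np

def max_segment_displacement(boundary_pts, segment_ids):
    """Summarize per-segment wall motion from tracked LV boundary points.

    boundary_pts : (frames, points, 2) array of tracked (x, y) positions.
    segment_ids  : (points,) array giving the wall segment of each point.
    Returns {segment: mean of the per-point maximum displacement from frame 0}.
    """
    ref = boundary_pts[0]                              # end-diastolic reference frame
    disp = np.linalg.norm(boundary_pts - ref, axis=2)  # (frames, points) displacements
    max_per_point = disp.max(axis=0)                   # peak motion of each boundary point
    return {int(seg): float(max_per_point[segment_ids == seg].mean())
            for seg in np.unique(segment_ids)}

# Example with synthetic tracking data: 30 frames, 60 boundary points, 6 segments.
rng = np.random.default_rng(0)
pts = np.cumsum(rng.normal(0, 0.5, size=(30, 60, 2)), axis=0)
segs = np.repeat(np.arange(6), 10)
print(max_segment_displacement(pts, segs))
```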
Myocardial infarction (MI), commonly known as a heart attack, is a life-threatening health problem worldwide from which 32.4 million people suffer each year. Early diagnosis and treatment of MI are crucial to prevent further heart tissue damage or death. The earliest and most reliable sign of ischemia is regional wall motion abnormality (RWMA) of the affected part of the ventricular muscle, which echocardiography can reveal easily, inexpensively, and non-invasively. In this article, we introduce a three-phase approach for early MI detection in low-quality echocardiography: 1) segmentation of the entire left ventricle (LV) wall using a state-of-the-art deep learning model, 2) analysis of the segmented LV wall by feature engineering, and 3) early MI detection. The main contributions of this study are highly accurate segmentation of the LV wall from low-quality echocardiography, a pseudo-labeling approach for forming ground truth for the unannotated LV wall, and the first public echocardiography dataset (HMC-QU)* for MI detection. Furthermore, the outputs of the proposed approach can significantly help cardiologists better assess the LV wall characteristics. The proposed approach achieved 95.72% sensitivity and 99.58% specificity for LV wall segmentation, and 85.97% sensitivity, 74.03% specificity, and 86.85% precision for MI detection on the HMC-QU dataset. *The benchmark HMC-QU dataset is publicly shared at https://www.kaggle.com/aysendegerli/hmcqu-dataset
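A hedged sketch of the three-phase idea under simplifying assumptions: phase 1 (LV wall segmentation) is taken as already done, phase 2 is replaced by toy hand-crafted motion features, and phase 3 uses a generic SVM rather than the paper's classifier; `motion_features` is a hypothetical helper, not the paper's feature engineering.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def motion_features(mask_sequence):
    """mask_sequence: (frames, H, W) binary LV-wall masks for one echo video."""
    areas = mask_sequence.reshape(len(mask_sequence), -1).sum(axis=1)
    centroids = np.array([np.argwhere(m).mean(axis=0) for m in mask_sequence])
    # Simple per-video summaries: area variation and mean centroid motion.
    return np.array([areas.std(), areas.max() - areas.min(),
                     np.linalg.norm(np.diff(centroids, axis=0), axis=1).mean()])

# X: one feature vector per video, y: MI (1) vs. non-MI (0) labels (toy data here).
rng = np.random.default_rng(1)
masks = rng.integers(0, 2, size=(40, 12, 64, 64))     # 40 toy mask sequences
X = np.stack([motion_features(m) for m in masks])
y = rng.integers(0, 2, size=40)
clf = make_pipeline(StandardScaler(), SVC()).fit(X, y)
print(clf.predict(X[:5]))
```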
Echocardiography is a non-invasive cardiac imaging tool that produces images and videos which cardiologists use to diagnose cardiac abnormalities in general and myocardial infarction (MI) in particular. Echocardiography machines can deliver large amounts of data that must be analyzed quickly by cardiologists to reach a diagnosis and treat cardiac conditions. However, the quality of the acquired data varies with the acquisition conditions and the patients' responsiveness to the setup instructions. These constraints pose challenges for doctors, especially when patients are facing MI and their lives are at stake. In this paper, we propose an innovative, real-time, end-to-end, fully automated model based on convolutional neural networks (CNNs) to detect MI from regional wall motion abnormalities (RWMA) of the left ventricle (LV) in echocardiography videos. Our model is implemented as a pipeline consisting of a 2D CNN that performs data preprocessing by segmenting the LV chamber from the apical four-chamber (A4C) view, followed by a 3D CNN that performs a binary classification to detect whether the segmented echocardiography shows signs of MI. We trained both CNNs on a dataset of 165 echocardiography videos, each acquired from a distinct patient. The 2D CNN achieved 97.18% accuracy on data segmentation, while the 3D CNN achieved 90.9% accuracy, 100% precision, and 95% recall on MI detection. Our results demonstrate that building a fully automated system for MI detection is feasible and promising.
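A minimal PyTorch sketch of the two-stage pipeline shape described above, with toy stand-in networks (`TinySegmenter2D`, `TinyClassifier3D`) rather than the paper's 2D and 3D CNN architectures.

```python
import torch
import torch.nn as nn

class TinySegmenter2D(nn.Module):
    """Toy per-frame LV segmenter (stand-in for the paper's 2D CNN)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(8, 1, 1), nn.Sigmoid())
    def forward(self, x):            # x: (batch*frames, 1, H, W)
        return self.net(x)

class TinyClassifier3D(nn.Module):
    """Toy clip-level MI classifier (stand-in for the paper's 3D CNN)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Conv3d(1, 8, 3, padding=1), nn.ReLU(),
                                 nn.AdaptiveAvgPool3d(1), nn.Flatten(),
                                 nn.Linear(8, 2))
    def forward(self, x):            # x: (batch, 1, frames, H, W)
        return self.net(x)

video = torch.rand(2, 1, 16, 64, 64)                  # batch of 2 toy A4C clips
b, c, t, h, w = video.shape
frames = video.permute(0, 2, 1, 3, 4).reshape(b * t, c, h, w)
masks = TinySegmenter2D()(frames).reshape(b, t, c, h, w).permute(0, 2, 1, 3, 4)
logits = TinyClassifier3D()(video * masks)            # suppress non-LV pixels
print(logits.shape)                                   # torch.Size([2, 2])
```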
Myocardial Velocity Mapping Cardiac MR (MVM-CMR) can be used to measure global and regional myocardial velocities with proven reproducibility. Accurate left ventricle delineation is a prerequisite for robust and reproducible myocardial velocity estimation. Conventional manual segmentation of these data is time-consuming and subjective, so an effective fully automated delineation method is in high demand. Leveraging recently proposed deep learning-based semantic segmentation approaches, in this study we propose a novel fully automated framework for MVM-CMR datasets that incorporates a 3D-UNet backbone architecture with an Embedded multichannel Attention mechanism and LSTM-based Recurrent neural networks (RNN), dubbed the 3D-EAR segmentor. The proposed method also fuses magnitude and phase images as input to exploit the information in this multichannel dataset and explores the correlations of temporal frames via the embedded RNN. Through comparisons with the 3D-UNet baseline and ablation studies with and without the embedded attentive LSTM modules and with various loss functions, we demonstrate that the proposed model significantly outperforms the state-of-the-art baseline models.
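A hedged sketch of the multichannel input-fusion step only: magnitude and velocity-phase volumes are stacked as channels and re-weighted by a squeeze-and-excitation style block, used here as a stand-in for the embedded attention; the full 3D-UNet backbone and attentive LSTM modules are not reproduced.

```python
import torch
import torch.nn as nn

class ChannelAttention3D(nn.Module):
    """SE-style channel re-weighting over a multichannel 3D volume."""
    def __init__(self, channels, reduction=2):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool3d(1)
        self.fc = nn.Sequential(nn.Linear(channels, channels // reduction),
                                nn.ReLU(),
                                nn.Linear(channels // reduction, channels),
                                nn.Sigmoid())
    def forward(self, x):                       # x: (B, C, D, H, W)
        w = self.fc(self.pool(x).flatten(1))    # per-channel weights in [0, 1]
        return x * w.view(*w.shape, 1, 1, 1)

magnitude = torch.rand(1, 1, 25, 96, 96)        # toy cine magnitude volume
phase = torch.rand(1, 3, 25, 96, 96)            # toy 3-direction velocity phase
fused = torch.cat([magnitude, phase], dim=1)    # 4-channel multichannel input
attended = ChannelAttention3D(channels=4)(fused)
print(attended.shape)                           # torch.Size([1, 4, 25, 96, 96])
```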
Acute Lymphoblastic Leukemia (ALL) is a blood cell cancer characterized by numerous immature lymphocytes. Even though automation in ALL prognosis is an essential aspect of cancer diagnosis, it is challenging due to the morphological similarity between malignant and normal cells. The traditional ALL classification strategy requires experienced pathologists to carefully read the cell images, which is arduous, time-consuming, and prone to inter-observer variation. This article automates the ALL detection task from microscopic cell images using deep Convolutional Neural Networks (CNNs). We explore a weighted ensemble of different deep CNNs to obtain a better ALL cell classifier. The weights for the ensemble candidate models are estimated from their corresponding metrics, such as accuracy, F1-score, AUC, and kappa values. Various data augmentation and pre-processing steps are incorporated to achieve better generalization of the network. We utilize the publicly available C-NMC-2019 ALL dataset to conduct all the comprehensive experiments. Our proposed weighted ensemble model, using the kappa values of the ensemble candidates as their weights, yields a weighted F1-score of 88.6%, a balanced accuracy of 86.2%, and an AUC of 0.941 on the preliminary test set. The qualitative results, shown as gradient class activation maps, confirm that the introduced model learns from a concentrated region, whereas the individual ensemble candidates, such as Xception, VGG-16, DenseNet-121, MobileNet, and InceptionResNet-V2, produce coarse and scattered learned areas for most example cases. Since the proposed kappa-value-based weighted ensemble yields better results for the task addressed in this article, it can also be applied to other domains of medical diagnostic applications.
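A minimal sketch of the kappa-weighted ensembling idea: each candidate's validation Cohen's kappa is normalized into a weight for averaging class probabilities; the toy probability arrays below stand in for real Xception, VGG-16, etc. outputs.

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score

def kappa_weighted_ensemble(prob_list, kappas):
    """prob_list: list of (samples, classes) probability arrays, one per model."""
    w = np.clip(np.asarray(kappas, dtype=float), 1e-6, None)  # guard against non-positive kappa
    w = w / w.sum()                                           # normalize into ensemble weights
    fused = sum(wi * p for wi, p in zip(w, prob_list))        # weighted probability average
    return fused.argmax(axis=1)

rng = np.random.default_rng(2)
y_val = rng.integers(0, 2, size=100)                          # toy validation labels
probs = [rng.dirichlet([1, 1], size=100) for _ in range(3)]   # 3 toy candidate models
kappas = [cohen_kappa_score(y_val, p.argmax(axis=1)) for p in probs]
print(kappa_weighted_ensemble(probs, kappas))
```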
Accurate detection of the myocardial infarction (MI) area is crucial for early diagnosis planning and follow-up management. In this study, we propose an end-to-end deep learning framework (OF-RNN) to accurately detect the MI area at the pixel level. Our OF-RNN consists of three function layers: the heart localization layers, which automatically and accurately crop region-of-interest (ROI) sequences, including the left ventricle, from whole cardiac magnetic resonance image sequences; the motion statistical layers, which build a time-series architecture to capture two types of pixel-level motion features, integrating local motion features generated by long short-term memory recurrent neural networks with global motion features generated by deep optical flow over the whole ROI sequence, thereby effectively characterizing myocardial physiologic function; and the fully connected discriminate layers, which use stacked auto-encoders to further learn these features and a softmax classifier to map the motion features to a tissue identity (infarcted or not) for each pixel. Through the seamless connection of these layers, our OF-RNN can obtain the area, position, and shape of the MI for each patient. The proposed framework yielded an overall pixel-level classification accuracy of 94.35% on 114 clinical subjects. These results indicate the potential of our proposed method for aiding standardized MI assessment.
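A hedged sketch of the global-motion-feature stage only, using OpenCV's Farneback dense optical flow to summarize per-pixel motion over an ROI sequence; the localization layers, LSTM features, stacked auto-encoders, and softmax classifier of OF-RNN are omitted, and `pixel_motion_statistics` is a hypothetical helper.

```python
import numpy as np
import cv2

def pixel_motion_statistics(roi_frames):
    """roi_frames: (frames, H, W) uint8 ROI sequence around the left ventricle."""
    # Dense optical flow between consecutive frames: each flow is (H, W, 2).
    flows = [cv2.calcOpticalFlowFarneback(roi_frames[i], roi_frames[i + 1], None,
                                          0.5, 3, 15, 3, 5, 1.2, 0)
             for i in range(len(roi_frames) - 1)]
    mag = np.stack([np.linalg.norm(f, axis=2) for f in flows])  # (frames-1, H, W)
    # Per-pixel motion summaries that a downstream classifier could consume.
    return np.stack([mag.mean(axis=0), mag.std(axis=0), mag.max(axis=0)], axis=-1)

rng = np.random.default_rng(3)
roi = rng.integers(0, 256, size=(10, 64, 64), dtype=np.uint8)   # toy ROI clip
features = pixel_motion_statistics(roi)
print(features.shape)                                           # (64, 64, 3)
```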