
Automated Inline Analysis of Myocardial Perfusion MRI with Deep Learning

Published by: Hui Xue, PhD
Publication date: 2019
Language: English





Recent development of quantitative myocardial blood flow (MBF) mapping allows direct evaluation of absolute myocardial perfusion by computing pixel-wise flow maps. Clinical studies suggest that quantitative evaluation is more desirable for its objectivity and efficiency. Objective assessment can be further facilitated by segmenting the myocardium and automatically generating reports following the AHA model, which removes the need for user interaction during analysis and leads to a one-click solution that improves workflow. This paper proposes a deep neural network based computational workflow for inline myocardial perfusion analysis. Adenosine stress and rest perfusion scans were acquired from three hospitals. The training set included N=1,825 perfusion series from 1,034 patients; an independent test set included 200 scans from 105 patients. Data were consecutively acquired at each site. A convolutional neural network (CNN) model was trained to segment the LV cavity, myocardium, and right ventricle by processing incoming 2D+T perfusion Gd series. Model outputs were compared to manual ground truth for accuracy of segmentation and of flow measures derived on a global and per-sector basis. The trained models were integrated onto MR scanners for efficient inference. The mean Dice ratio of the CNN-derived myocardium was 0.93 ± 0.04. Both global and per-sector flow values showed no significant difference compared to manual results. The AHA 16-segment model was automatically generated and reported on the MR scanner. As a result, fully automated analysis of perfusion flow mapping was achieved; the solution was integrated on the MR scanner, enabling one-click analysis and reporting of myocardial blood flow.
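As a rough illustration of how the reported segmentation accuracy is scored, the sketch below computes the Dice overlap between a predicted and a manual myocardium mask with NumPy. This is a minimal sketch of the standard metric, not the paper's code; the mask-loading step in the usage comment is a hypothetical placeholder.

```python
import numpy as np

def dice_ratio(pred_mask: np.ndarray, gt_mask: np.ndarray) -> float:
    """Dice overlap between a predicted and a manual binary mask."""
    pred = pred_mask.astype(bool)
    gt = gt_mask.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    if denom == 0:
        return 1.0  # both masks empty: define as perfect agreement
    return 2.0 * intersection / denom

# Hypothetical usage: compare a CNN myocardium segmentation to ground truth
# cnn_myo, manual_myo = load_masks(...)   # binary 2D masks, same shape
# print(f"Dice = {dice_ratio(cnn_myo, manual_myo):.2f}")
```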




Read also

Quantification of myocardial perfusion has the potential to improve detection of regional and global flow reduction. Significant effort has been made to automate the workflow, where one essential step is extraction of the arterial input function (AIF). Since failure here invalidates quantification, high accuracy is required. To this end, this study presents a robust AIF detection method using a convolutional neural network (CNN) model. CNN models were trained by assembling 25,027 scans (N=12,984 patients) from three hospitals and seven scanners; a test set of 5,721 scans (N=2,805 patients) evaluated model performance. The 2D+T AIF time series was input into the CNN. Two variations were investigated: a) two classes (2CS) for background and foreground (LV mask); b) three classes (3CS) for background, foreground LV, and RV. The final model was deployed on MR scanners via the Gadgetron InlineAI; loading the model on the scanner took ~340 ms and applying it took ~180 ms. The 3CS model successfully detected the LV in 99.98% of all test cases (1 failure out of 5,721 cases). The mean Dice ratio for 3CS was 0.87 ± 0.08, with 92.0% of all test cases having a Dice ratio >0.75, while the 2CS model gave a lower Dice of 0.82 ± 0.22 (P<1e-5). Extracted AIF signals using the CNN were further compared to manual ground truth for foot time, peak time, first-pass duration, peak value, and area under the curve; no significant differences were found for any feature (P>0.2). This study proposed, validated, and deployed a robust CNN solution to detect the LV for extraction of the AIF signal used in fully automated perfusion flow mapping. A very large data cohort was assembled, and the resulting models were deployed to MR scanners for fully inline AI in clinical hospitals.
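To make the AIF feature comparison concrete, here is a minimal NumPy sketch of turning an LV mask and a 2D+T series into an AIF curve and deriving the peak value, peak time, and area under the curve. The array layout, function name, and frame spacing are assumptions; foot-time and first-pass-duration detection are omitted.

```python
import numpy as np

def extract_aif(series: np.ndarray, lv_mask: np.ndarray, dt: float = 1.0):
    """Average LV-pool intensity per frame of a 2D+T series (H, W, T).

    Returns the AIF curve plus simple features (peak value, peak time, AUC).
    """
    lv = lv_mask.astype(bool)
    # Mean signal inside the LV mask for every time frame -> shape (T,)
    aif = series[lv, :].mean(axis=0)
    peak_idx = int(np.argmax(aif))
    features = {
        "peak_value": float(aif[peak_idx]),
        "peak_time": peak_idx * dt,          # seconds, given frame spacing dt
        "auc": float(np.trapz(aif, dx=dt)),  # area under the curve
    }
    return aif, features
```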
Physicians use biopsies to distinguish between different but histologically similar enteropathies. The range of syndromes and pathologies that can cause different gastrointestinal conditions makes this a difficult problem. Recently, deep learning has been used successfully to help diagnose cancerous tissues in histopathological images. These successes motivated the research presented in this paper, which describes a deep learning approach that distinguishes among Celiac Disease (CD), Environmental Enteropathy (EE), and normal tissue in digitized duodenal biopsies. Experimental results show accuracies of over 90% for this approach. We also interpret the neural network model using Gradient-weighted Class Activation Mappings (Grad-CAM) and filter activations on input images to understand the visual explanations for the decisions made by the model.
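Grad-CAM is a published, general recipe, so the PyTorch sketch below shows the standard computation on a stock ResNet-18 as a stand-in; the paper's biopsy model, weights, and chosen layer are not given here, and the random input tensor is a placeholder for a real biopsy image.

```python
import torch
import torch.nn.functional as F
from torchvision import models

# Minimal Grad-CAM sketch: capture activations and gradients of the last
# convolutional block, then weight activations by pooled gradients.
model = models.resnet18(weights=None).eval()  # random weights; a stand-in
activations, gradients = {}, {}

def fwd_hook(module, inputs, output):
    activations["value"] = output.detach()

def bwd_hook(module, grad_input, grad_output):
    gradients["value"] = grad_output[0].detach()

layer = model.layer4                       # last convolutional block
layer.register_forward_hook(fwd_hook)
layer.register_full_backward_hook(bwd_hook)

x = torch.randn(1, 3, 224, 224)            # placeholder for a biopsy image
logits = model(x)
logits[0, logits.argmax()].backward()      # gradient of the predicted class

# Channel weights = global-average-pooled gradients; CAM = weighted activations
weights = gradients["value"].mean(dim=(2, 3), keepdim=True)
cam = F.relu((weights * activations["value"]).sum(dim=1))
cam = F.interpolate(cam.unsqueeze(1), size=x.shape[2:],
                    mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # normalize to [0, 1]
```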
Current neuroimaging techniques provide paths to investigate the structure and function of the brain in vivo and have made great advances in understanding Alzheimer's disease (AD). However, the group-level analyses prevalently used for investigation and understanding of the disease are not applicable for diagnosis of individuals. More recently, deep learning, which can efficiently analyze large-scale complex patterns in 3D brain images, has helped pave the way for computer-aided individual diagnosis by providing accurate and automated disease classification. Great progress has been made in classifying AD with deep learning models developed upon increasingly available structural MRI data. The lack of scale-matched functional neuroimaging data prevents such models from being further improved by observing functional changes in pathophysiology. Here we propose a potential solution: first learning a structural-to-functional transformation in brain MRI, and then synthesizing spatially matched functional images from large-scale structural scans. We evaluated our approach by building computational models to discriminate patients with AD from healthy normal subjects and demonstrated a performance boost after combining the structural and synthesized functional brain images into the same model. Furthermore, our regional analyses identified the temporal lobe as the most predictive structural region and the parieto-occipital lobe as the most predictive functional region of our model, both in concordance with previous group-level neuroimaging findings. Together, we demonstrate the potential of deep learning with large-scale structural and synthesized functional MRI to impact AD classification and to identify AD's neuroimaging signatures.
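One plausible way to combine the structural and synthesized functional volumes "into the same model" is channel-level fusion. The toy 3D CNN below illustrates that assumption in PyTorch; the architecture, layer sizes, and volume shapes are all stand-ins, not the paper's actual classifier.

```python
import torch
import torch.nn as nn

class TwoChannel3DClassifier(nn.Module):
    """Toy 3D CNN that stacks a structural MRI volume and a synthesized
    functional volume as two input channels (an assumed fusion scheme)."""

    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(2, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),   # global pooling to one value per channel
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, structural, functional):
        x = torch.stack([structural, functional], dim=1)  # (B, 2, D, H, W)
        return self.classifier(self.features(x).flatten(1))

# Hypothetical usage with random volumes in place of real scans
model = TwoChannel3DClassifier()
s = torch.randn(1, 64, 64, 64)    # structural volume
f = torch.randn(1, 64, 64, 64)    # synthesized functional volume
print(model(s, f).shape)          # torch.Size([1, 2]): AD vs. healthy logits
```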
Dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) plays an important role in the diagnosis and grading of brain tumors. Although manual DCE biomarker extraction algorithms boost the diagnostic yield of DCE-MRI by providing quantitative information on tumor prognosis and prediction, they are time-consuming and prone to human error. In this paper, we propose a fully automated, end-to-end system for DCE-MRI analysis of brain tumors. Our deep learning-powered technique does not require any user interaction, yields reproducible results, and is rigorously validated against benchmark data (BraTS17 for tumor segmentation, and a test dataset released by the Quantitative Imaging Biomarkers Alliance for contrast-concentration fitting) as well as clinical data (44 low-grade glioma patients). We also introduce a cubic model of the vascular input function used for pharmacokinetic modeling, which significantly decreases the fitting error compared with the state of the art, alongside a real-time algorithm for determination of the vascular input region. An extensive experimental study, backed up with statistical tests, showed that our system delivers state-of-the-art results (in terms of segmentation accuracy and contrast-concentration fitting) while requiring less than 3 minutes to process an entire input DCE-MRI study using a single GPU.
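The abstract does not spell out the exact cubic parameterization of the vascular input function, so the sketch below just fits a generic cubic polynomial to a synthetic VIF curve with NumPy as a hedged illustration of the idea; the synthetic bolus shape and noise level are arbitrary.

```python
import numpy as np

# Synthetic stand-in for a measured vascular input function (VIF):
# a gamma-variate-like bolus sampled at 1 s intervals, with mild noise.
t = np.arange(0.0, 30.0, 1.0)
rng = np.random.default_rng(0)
vif = (t**2) * np.exp(-t / 4.0) + 0.05 * rng.normal(size=t.size)

# Fit a cubic model c(t) = a3*t^3 + a2*t^2 + a1*t + a0 to the sampled curve.
# The paper's exact cubic formulation is not public; np.polyfit stands in.
coeffs = np.polyfit(t, vif, deg=3)
vif_fit = np.polyval(coeffs, t)

rmse = np.sqrt(np.mean((vif - vif_fit) ** 2))
print(f"cubic fit coefficients: {coeffs}, RMSE = {rmse:.3f}")
```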
Prediction of Overall Survival (OS) of brain cancer patients from multi-modal MRI is a challenging field of research. Most of the existing literature on survival prediction is based on radiomic features, which consider neither non-biological factors nor the functional neurological status of the patients. Besides, the selection of an appropriate cut-off for survival and the presence of censored data create further problems. Application of deep learning models for OS prediction is also limited due to the lack of large annotated publicly available datasets. In this scenario we analyse the potential of two novel neuroimaging feature families, extracted from brain parcellation atlases and spatial habitats, along with classical radiomic and geometric features, to study their combined predictive power for overall survival. A cross-validation strategy with grid search is proposed to simultaneously select and evaluate the most predictive feature subset based on its predictive power. A Cox Proportional Hazards (CoxPH) model is employed for univariate feature selection, followed by prediction of patient-specific survival functions by three multivariate parsimonious models, viz. Coxnet, Random Survival Forests (RSF), and Survival SVM (SSVM). The brain cancer MRI data used for this research were taken from two open-access collections, TCGA-GBM and TCGA-LGG, available from The Cancer Imaging Archive (TCIA); corresponding survival data for each patient were downloaded from The Cancer Genome Atlas (TCGA). A high cross-validation C-index score of 0.82 ± 0.10 was achieved using RSF with the best 24 selected features. Age was found to be the most important biological predictor. There were 9, 6, 6, and 2 features selected from the parcellation, habitat, radiomic, and region-based feature groups respectively.
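A minimal sketch of the RSF-plus-C-index portion of such a pipeline, using scikit-survival on synthetic data: the feature matrix, hyperparameters, and in-sample evaluation are stand-ins, whereas the paper selects features via univariate CoxPH and evaluates with cross-validated grid search.

```python
import numpy as np
from sksurv.ensemble import RandomSurvivalForest
from sksurv.metrics import concordance_index_censored
from sksurv.util import Surv

# Hypothetical feature matrix (e.g. 24 parcellation/habitat/radiomic features)
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 24))
time = rng.exponential(scale=365.0, size=100)  # survival time in days
event = rng.random(100) < 0.7                  # True = death observed (not censored)
y = Surv.from_arrays(event=event, time=time)   # structured survival labels

# Random survival forest, as in the paper's best-performing model
rsf = RandomSurvivalForest(n_estimators=200, random_state=0).fit(X, y)
risk = rsf.predict(X)                          # higher score = higher risk

# Harrell's C-index; a proper evaluation would cross-validate as the paper does
cindex = concordance_index_censored(event, time, risk)[0]
print(f"C-index = {cindex:.2f}")
```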
