
Analysis of MRI Biomarkers for Brain Cancer Survival Prediction

Published by Subhashis Banerjee
Publication date: 2021
Language: English





Prediction of Overall Survival (OS) of brain cancer patients from multi-modal MRI is a challenging field of research. Most of the existing literature on survival prediction is based on radiomic features, which consider neither non-biological factors nor the functional neurological status of the patient. Moreover, the selection of an appropriate survival cut-off and the presence of censored data create further problems. The application of deep learning models for OS prediction is also limited by the lack of large, annotated, publicly available datasets. In this scenario, we analyse the potential of two novel neuroimaging feature families, extracted from brain parcellation atlases and spatial habitats, along with classical radiomic and geometric features, and study their combined predictive power for overall survival. A cross-validation strategy with grid search is proposed to simultaneously select and evaluate the most predictive feature subset. A Cox Proportional Hazards (CoxPH) model is employed for univariate feature selection, followed by the prediction of patient-specific survival functions by three parsimonious multivariate models, viz. Coxnet, Random Survival Forests (RSF) and Survival SVM (SSVM). The brain cancer MRI data used for this research were taken from two open-access collections, TCGA-GBM and TCGA-LGG, available from The Cancer Imaging Archive (TCIA). Corresponding survival data for each patient were downloaded from The Cancer Genome Atlas (TCGA). A high cross-validation $C$-index score of $0.82 \pm 0.10$ was achieved using RSF with the best $24$ selected features. Age was found to be the most important biological predictor. There were $9$, $6$, $6$ and $2$ features selected from the parcellation, habitat, radiomic and region-based feature groups, respectively.
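To make the pipeline concrete, the sketch below reproduces its skeleton with the open-source scikit-survival package: univariate CoxPH screening followed by Coxnet, RSF and SSVM fits scored by the concordance index. The synthetic data, the reuse of the paper's 24-feature cut-off, and all hyperparameters are illustrative assumptions; the grid-searched cross-validation loop is omitted for brevity.

```python
# A minimal sketch of the survival-modelling pipeline, assuming the
# scikit-survival (sksurv) package; data and hyperparameters are synthetic
# stand-ins, not the paper's actual configuration.
import numpy as np
from sksurv.util import Surv
from sksurv.linear_model import CoxPHSurvivalAnalysis, CoxnetSurvivalAnalysis
from sksurv.ensemble import RandomSurvivalForest
from sksurv.svm import FastSurvivalSVM
from sksurv.metrics import concordance_index_censored

rng = np.random.default_rng(0)
X = rng.normal(size=(160, 50))          # 160 patients, 50 candidate features
time = rng.exponential(365.0, size=160) # survival / censoring times (days)
event = rng.random(160) < 0.7           # True = death observed, False = censored
y = Surv.from_arrays(event=event, time=time)

# Univariate CoxPH screening: rank each feature by its own C-index.
def univariate_score(j):
    model = CoxPHSurvivalAnalysis().fit(X[:, [j]], y)
    return concordance_index_censored(event, time, model.predict(X[:, [j]]))[0]

scores = np.array([univariate_score(j) for j in range(X.shape[1])])
selected = np.argsort(scores)[-24:]     # keep the 24 highest-scoring features

# Three multivariate parsimonious models fitted on the selected subset.
# (The paper evaluates these inside a grid-searched CV loop; here they are
# fitted and scored on the same data purely for illustration.)
for model in (CoxnetSurvivalAnalysis(l1_ratio=0.9),
              RandomSurvivalForest(n_estimators=200, random_state=0),
              FastSurvivalSVM(max_iter=1000, random_state=0)):
    model.fit(X[:, selected], y)
    c = concordance_index_censored(event, time, model.predict(X[:, selected]))[0]
    print(type(model).__name__, f"C-index = {c:.2f}")
```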




Read also

Data augmentation refers to a group of techniques whose goal is to combat the limited amount of available data, improving model generalization and pushing the sample distribution toward the true distribution. While different augmentation strategies and their combinations have been investigated for various computer vision tasks in the context of deep learning, specific work in the domain of medical imaging is rare and, to the best of our knowledge, there has been no dedicated work exploring the effects of various augmentation methods on the performance of deep learning models in prostate cancer detection. In this work, we statically applied the five most frequently used augmentation techniques (random rotation, horizontal flip, vertical flip, random crop, and translation) separately to a prostate Diffusion-Weighted Magnetic Resonance Imaging training dataset of 217 patients and evaluated the effect of each method on the accuracy of prostate cancer detection. The augmentation algorithms were applied independently to each data channel, and a shallow as well as a deep Convolutional Neural Network (CNN) were trained on the five augmented sets separately. We used the Area Under the Receiver Operating Characteristic (ROC) Curve (AUC) to evaluate the performance of the trained CNNs on a separate test set of 95 patients, using a validation set of 102 patients for fine-tuning. The shallow network outperformed the deep network, with the best 2D slice-based AUC of 0.85 obtained by the rotation method.
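The five augmentation strategies compared in that study map naturally onto standard library transforms. A minimal sketch, assuming PyTorch/torchvision; the study's own implementation and parameter choices are not given, so the values below are placeholders:

```python
# One transform per augmentation strategy; applied separately, mirroring the
# paper's per-method comparison (one augmented training set per technique).
import torch
from torchvision import transforms

augmentations = {
    "rotation":        transforms.RandomRotation(degrees=15),
    "horizontal_flip": transforms.RandomHorizontalFlip(p=0.5),
    "vertical_flip":   transforms.RandomVerticalFlip(p=0.5),
    "crop":            transforms.RandomCrop(size=64, padding=4),
    "translation":     transforms.RandomAffine(degrees=0, translate=(0.1, 0.1)),
}

image = torch.rand(3, 64, 64)  # stand-in for a multi-channel DWI slice
for name, aug in augmentations.items():
    augmented = aug(image)     # applied independently to each data channel
    print(name, tuple(augmented.shape))
```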
Deep learning shows high potential for many medical image analysis tasks. Neural networks can work with full-size data without extensive preprocessing and feature generation and, thus, without information loss. Recent work has shown that morphological differences in specific brain regions can be found on MRI by means of Convolutional Neural Networks (CNNs). However, interpretation of the existing models is based on a region of interest and cannot be extended to voxel-wise interpretation of a whole image. In the current work, we consider a classification task on a large-scale open-source dataset of young healthy subjects: an exploration of brain differences between men and women. In this paper, we extend previous findings on gender differences from diffusion-tensor imaging to T1 brain MRI scans. We provide voxel-wise 3D CNN interpretation, comparing the results of three interpretation methods: Meaningful Perturbations, Grad-CAM and Guided Backpropagation, and contribute an open-source library.
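For readers who want to reproduce such voxel-wise attribution maps, here is a minimal sketch of Guided Backpropagation and Grad-CAM on a toy 3D CNN, assuming the Captum library; the tiny architecture and target class are illustrative stand-ins, not the authors' model:

```python
import torch
import torch.nn as nn
from captum.attr import GuidedBackprop, LayerGradCam

# Toy 3D CNN classifier; the paper's actual architecture is not specified here.
class Tiny3DCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(4),
        )
        self.classifier = nn.Linear(8 * 4 ** 3, 2)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = Tiny3DCNN().eval()
volume = torch.rand(1, 1, 32, 32, 32)  # stand-in for a T1 brain volume

# Voxel-wise attribution maps for one target class (here class 1).
gbp_map = GuidedBackprop(model).attribute(volume, target=1)
cam_map = LayerGradCam(model, model.features[0]).attribute(volume, target=1)
print(tuple(gbp_map.shape), tuple(cam_map.shape))
```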
Deriving interpretable prognostic features from deep-learning-based prognostic histopathology models remains a challenge. In this study, we developed a deep learning system (DLS) for predicting disease-specific survival for stage II and III colorectal cancer using 3,652 cases (27,300 slides). When evaluated on two validation datasets containing 1,239 cases (9,340 slides) and 738 cases (7,140 slides) respectively, the DLS achieved a 5-year disease-specific survival AUC of 0.70 (95% CI 0.66-0.73) and 0.69 (95% CI 0.64-0.72), and added significant predictive value to a set of 9 clinicopathologic features. To interpret the DLS, we explored the ability of different human-interpretable features to explain the variance in DLS scores. We observed that clinicopathologic features such as T-category, N-category, and grade explained only a small fraction of the variance in DLS scores (R^2 = 18% in both validation sets). Next, we generated human-interpretable histologic features by clustering embeddings from a deep-learning-based image-similarity model and showed that they explain the majority of the variance (R^2 of 73% to 80%). Furthermore, the clustering-derived feature most strongly associated with high DLS scores was also highly prognostic in isolation. With a distinct visual appearance (poorly differentiated tumor-cell clusters adjacent to adipose tissue), this feature was identified by annotators with 87.0-95.5% accuracy. Our approach can be used to explain predictions from a prognostic deep learning model and to uncover potentially novel prognostic features that can be reliably identified by people for future validation studies.
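The clustering-then-explained-variance step can be sketched in a few lines of scikit-learn; the embeddings, scores and cluster count below are synthetic stand-ins, not the study's data:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
embeddings = rng.normal(size=(500, 128))  # per-case image embeddings (synthetic)
dls_scores = rng.normal(size=500)         # per-case DLS risk scores (synthetic)

# Cluster the embeddings, treat cluster membership as human-interpretable
# features, and ask how much of the DLS-score variance they explain (R^2).
clusters = KMeans(n_clusters=10, n_init=10, random_state=0).fit_predict(embeddings)
onehot = np.eye(10)[clusters]
r2 = LinearRegression().fit(onehot, dls_scores).score(onehot, dls_scores)
print(f"variance in DLS scores explained by cluster features: R^2 = {r2:.2f}")
```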
Cox proportional hazards model (CPH) is commonly used in clinical research for survival analysis. In quantitative medical imaging (radiomics) studies, CPH plays an important role in feature reduction and modeling. However, the underlying linearity assumption of the CPH model limits its prognostic performance. In addition, the multicollinearity of radiomic features and the multiple-testing problem further impede the CPH model's performance. In this work, using transfer learning, a convolutional neural network (CNN) based survival model was built and tested on preoperative CT images of patients with resectable Pancreatic Ductal Adenocarcinoma (PDAC). The proposed CNN-based survival model outperformed the traditional CPH-based radiomics approach in terms of concordance index by 22%, providing a better fit for patients' survival patterns. This approach offers a better fit for survival patterns based on CT images and overcomes the limitations of conventional survival models.
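A CNN survival model of this kind is typically trained with the negative Cox partial log-likelihood as its loss. A minimal PyTorch sketch of that loss, using batch-level risk sets and ignoring tie handling; this is an assumption about the formulation, not the authors' exact implementation:

```python
import torch

def cox_partial_likelihood_loss(risk: torch.Tensor,
                                time: torch.Tensor,
                                event: torch.Tensor) -> torch.Tensor:
    """Negative Cox partial log-likelihood over a batch.

    risk:  (N,) log-risk scores predicted by the CNN
    time:  (N,) survival or censoring times
    event: (N,) 1.0 if the event was observed, 0.0 if censored
    """
    order = torch.argsort(time, descending=True)    # each prefix is a risk set
    risk, event = risk[order], event[order]
    log_risk_set = torch.logcumsumexp(risk, dim=0)  # log sum of exp(risk) per risk set
    return -((risk - log_risk_set) * event).sum() / event.sum().clamp(min=1.0)

# usage: loss = cox_partial_likelihood_loss(cnn(ct_batch).squeeze(-1), time, event)
```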
Recent development of quantitative myocardial blood flow (MBF) mapping allows direct evaluation of absolute myocardial perfusion by computing pixel-wise flow maps. Clinical studies suggest quantitative evaluation would be more desirable for objectivity and efficiency. Objective assessment can be further facilitated by segmenting the myocardium and automatically generating reports following the AHA model. This frees the user from manual interaction during analysis and leads to a one-click solution that improves workflow. This paper proposes a deep neural network based computational workflow for inline myocardial perfusion analysis. Adenosine stress and rest perfusion scans were acquired from three hospitals. The training set included N=1,825 perfusion series from 1,034 patients. The independent test set included 200 scans from 105 patients. Data were consecutively acquired at each site. A convolutional neural network (CNN) model was trained to segment the LV cavity, myocardium and right ventricle by processing incoming 2D+T perfusion Gd series. Model outputs were compared to manual ground truth for accuracy of segmentation and of flow measures derived on a global and per-sector basis. The trained models were integrated onto MR scanners for efficient inference. Segmentation accuracy and myocardial flow measures were compared between CNN models and manual ground truth. The mean Dice ratio of the CNN-derived myocardium was 0.93 +/- 0.04. Both global flow and per-sector values showed no significant difference compared to manual results. The AHA 16-segment model was automatically generated and reported on the MR scanner. As a result, fully automated analysis of perfusion flow mapping was achieved. This solution was integrated on the MR scanner, enabling one-click analysis and reporting of myocardial blood flow.
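The Dice ratio quoted above is the standard overlap metric between predicted and manual masks; a minimal NumPy sketch for reference:

```python
import numpy as np

def dice_ratio(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice overlap between two binary segmentation masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    return 2.0 * np.logical_and(pred, truth).sum() / denom if denom else 1.0

# e.g. CNN-derived myocardium mask vs. manual contour (identical masks -> 1.0)
print(dice_ratio(np.ones((64, 64)), np.ones((64, 64))))
```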
