Deep learning can differentiate IDH-mutant from IDH-wild type GBM

Posted by Luca Pasquini
Publication date: 2021
Research language: English





Background: Distinguishing IDH-mutant from IDH-wildtype GBM is challenging on MRI, since conventional imaging shows considerable overlap between the two. While a few studies have applied deep learning to mixed low/high-grade glioma populations, a GBM-specific model is still lacking in the literature. Our objective was to develop a deep-learning model for IDH prediction in GBM using convolutional neural networks (CNN) on multiparametric MRI. Methods: We included 100 adult patients with pathologically proven GBM and IDH testing. MRI data included morphologic sequences, rCBV, and ADC maps. The tumor area was obtained with a bounding-box function on the axial slice showing the widest tumor extension on T2 images and was projected onto every sequence. Data were split into training and test sets (80:20). A four-block 2D CNN architecture was implemented for IDH prediction on every MRI sequence. IDH mutation probability was calculated with a softmax activation function on the last dense layer. Best performance was determined from model accuracy and categorical cross-entropy loss (CCEL) in the test cohort. Results: Our model achieved the following performance: T1 (accuracy 77%, CCEL 1.4), T2 (accuracy 67%, CCEL 2.41), FLAIR (accuracy 77%, CCEL 1.98), MPRAGE (accuracy 66%, CCEL 2.55), rCBV (accuracy 83%, CCEL 0.64). ADC achieved lower performance. Conclusion: We built a GBM-tailored deep-learning model for IDH mutation prediction, achieving 83% accuracy with rCBV maps. The high predictive value of perfusion images may reflect the known correlation between IDH, hypoxia-inducible factor (HIF), and neoangiogenesis. This model may set a path for non-invasive evaluation of IDH mutation in GBM.
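As a rough illustration of the architecture described in the abstract, the sketch below builds a four-block 2D CNN with a softmax output trained under categorical cross-entropy; the filter counts, kernel sizes, input shape, and optimizer are assumptions, since the abstract does not specify them.

```python
# Hedged sketch of a four-block 2D CNN for IDH-mutant vs. IDH-wildtype
# classification; input size, filter counts, and optimizer are illustrative.
import tensorflow as tf
from tensorflow.keras import layers

def build_idh_cnn(input_shape=(128, 128, 1), n_classes=2):
    inputs = tf.keras.Input(shape=input_shape)
    x = inputs
    # Four convolution + pooling blocks with increasing filter counts.
    for filters in (32, 64, 128, 256):
        x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
        x = layers.MaxPooling2D(2)(x)
    x = layers.Flatten()(x)
    x = layers.Dense(128, activation="relu")(x)
    # IDH mutation probability from a softmax on the last dense layer.
    outputs = layers.Dense(n_classes, activation="softmax")(x)
    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",  # CCEL reported in the paper
                  metrics=["accuracy"])
    return model

model = build_idh_cnn()
model.summary()
```

A separate model of this kind would be trained per MRI sequence (T1, T2, FLAIR, MPRAGE, rCBV, ADC) on the 80:20 train/test split described above.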




Read also

74 - Xuan Guo, Shichao Feng 2020
Metaproteomics is becoming widely used in microbiome research for gaining insights into the functional state of the microbial community. Current metaproteomics studies are generally based on high-throughput tandem mass spectrometry (MS/MS) coupled with liquid chromatography. The identification of peptides and proteins from MS data involves the computational procedure of searching MS/MS spectra against a predefined protein sequence database and assigning top-scored peptides to spectra. Existing computational tools are still far from being able to extract all the information out of large MS/MS datasets acquired from metaproteome samples. In this paper, we propose a deep-learning-based algorithm, called DeepFilter, for improving the rate of confident peptide identifications from a collection of tandem mass spectra. Compared with other post-processing tools, including Percolator, Q-ranker, PeptideProphet, and iProphet, DeepFilter identified 20% and 10% more peptide-spectrum matches and proteins, respectively, on marine microbial and soil microbial metaproteome samples at a false discovery rate of 1%.
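As context for the 1% false-discovery-rate threshold cited above, the following sketch shows standard target-decoy FDR filtering of scored peptide-spectrum matches; it is not DeepFilter itself, and the data layout is hypothetical.

```python
# Hedged sketch: target-decoy FDR filtering of scored peptide-spectrum matches
# (PSMs) at a 1% threshold; higher scores are assumed to indicate better matches.
def filter_psms_at_fdr(psms, fdr_threshold=0.01):
    """psms: list of (score, is_decoy) tuples. Returns accepted target PSMs."""
    ranked = sorted(psms, key=lambda p: p[0], reverse=True)
    accepted, targets, decoys = [], 0, 0
    for i, (score, is_decoy) in enumerate(ranked):
        decoys += is_decoy
        targets += not is_decoy
        # Estimated FDR for the score cutoff at position i.
        if decoys / max(targets, 1) <= fdr_threshold:
            accepted = [p for p in ranked[:i + 1] if not p[1]]
    return accepted

# Toy example: the three high-scoring target PSMs survive at 1% FDR.
print(filter_psms_at_fdr([(9.1, False), (8.7, False), (8.2, False), (3.0, True)]))
```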
Cell division timing is critical for cell fate specification and morphogenesis during embryogenesis. How division timings are regulated among cells during development is poorly understood. Here we focus on the comparison of asynchrony of division between sister cells (ADS) between wild-type and mutant individuals of Caenorhabditis elegans. Since the number of replicates of mutant individuals for each mutated gene, usually one, is far smaller than that of wild-type, direct comparison of the two ADS distributions between wild-type and mutant, for example with a Kolmogorov-Smirnov test, is not feasible. On the other hand, we find that ADS is sometimes correlated with the life span of the corresponding mother cell in wild-type. Hence, we apply a semiparametric Bayesian quantile regression method to estimate the 95% confidence interval curve of ADS with respect to the life span of the mother cell in wild-type individuals. Mutant-type ADS values outside the corresponding confidence interval are then flagged as abnormal at a significance level of 0.05. A simulation study demonstrates the accuracy of our method, and gene enrichment analysis validates the results on real data sets.
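A simplified, frequentist stand-in for the interval-based outlier test described above could look like the sketch below; it uses statsmodels' QuantReg rather than the paper's semiparametric Bayesian method, and the data are synthetic.

```python
# Hedged sketch: fit the 2.5% and 97.5% conditional quantiles of ADS versus
# mother-cell life span on wild-type data, then flag mutant ADS values that
# fall outside this 95% band. Data below are synthetic placeholders.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
life_span_wt = rng.uniform(10, 60, 200)               # wild-type mother-cell life spans
ads_wt = 0.1 * life_span_wt + rng.normal(0, 1, 200)   # wild-type ADS values

X_wt = sm.add_constant(life_span_wt)
lower = sm.QuantReg(ads_wt, X_wt).fit(q=0.025)
upper = sm.QuantReg(ads_wt, X_wt).fit(q=0.975)

def is_abnormal(life_span_mut, ads_mut):
    """True where a mutant ADS lies outside the wild-type 95% quantile band."""
    x = sm.add_constant(np.atleast_1d(life_span_mut), has_constant="add")
    return (ads_mut < lower.predict(x)) | (ads_mut > upper.predict(x))

print(is_abnormal(30.0, 9.0))  # -> [ True], far above the wild-type band
```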
93 - David Le, Minhaj Alam, Cham Yao 2019
Purpose: To test the feasibility of using deep learning for optical coherence tomography angiography (OCTA) detection of diabetic retinopathy (DR). Methods: The deep learning convolutional neural network (CNN) architecture VGG16 was employed for this study. A transfer learning process was implemented to re-train the CNN for robust OCTA classification. In order to demonstrate the feasibility of using this method for artificial intelligence (AI) screening of DR in clinical environments, the re-trained CNN was incorporated into a custom-developed GUI platform which can be readily operated by ophthalmic personnel. Results: With the last nine layers re-trained, the CNN architecture achieved the best performance for automated OCTA classification. The overall accuracy of the re-trained classifier for differentiating healthy, NoDR, and NPDR was 87.27%, with 83.76% sensitivity and 90.82% specificity. The AUC metrics for binary classification of healthy, NoDR, and DR were 0.97, 0.98, and 0.97, respectively. The GUI platform enabled easy validation of the method for AI screening of DR in a clinical environment. Conclusion: With a transfer learning process that adopts the early layers for simple feature analysis and re-trains the upper layers for fine feature analysis, the CNN architecture VGG16 can be used for robust OCTA classification of healthy, NoDR, and NPDR eyes. Translational Relevance: OCTA can capture microvascular changes in early DR. A transfer learning process enables robust implementation of a convolutional neural network (CNN) for automated OCTA classification of DR.
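A minimal transfer-learning sketch along the lines described above is shown below, assuming 224x224 three-channel OCTA inputs and three output classes (healthy, NoDR, NPDR); the number of unfrozen layers and all hyperparameters are illustrative, not the study's exact configuration.

```python
# Hedged sketch: re-use ImageNet-pretrained VGG16, freeze the early layers,
# and fine-tune the later convolutional layers plus a new classification head.
import tensorflow as tf
from tensorflow.keras import layers, models

base = tf.keras.applications.VGG16(weights="imagenet", include_top=False,
                                   input_shape=(224, 224, 3))
for layer in base.layers[:-4]:   # keep early layers as generic feature extractors
    layer.trainable = False

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(256, activation="relu"),
    layers.Dense(3, activation="softmax"),  # healthy / NoDR / NPDR
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="categorical_crossentropy",
              metrics=["accuracy"])
```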
We present a learning-based method for extracting whistles of toothed whales (Odontoceti) in hydrophone recordings. Our method represents audio signals as time-frequency spectrograms and decomposes each spectrogram into a set of time-frequency patches. A deep neural network learns archetypical patterns (e.g., crossings, frequency-modulated sweeps) from the spectrogram patches and predicts time-frequency peaks that are associated with whistles. We also developed a comprehensive method to synthesize training samples from background environments and train the network with minimal human annotation effort. We applied the proposed learn-from-synthesis method to a subset of the public Detection, Classification, Localization, and Density Estimation (DCLDE) 2011 workshop data to extract whistle confidence maps, which we then processed with an existing contour extractor to produce whistle annotations. The F1-score of our best synthesis method was 0.158 greater than our baseline whistle extraction algorithm (~25% improvement) when applied to common dolphin (Delphinus spp.) and bottlenose dolphin (Tursiops truncatus) whistles.
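The representation step described above, a spectrogram decomposed into time-frequency patches, could be sketched as follows; the sampling rate, FFT size, and patch dimensions are assumed values.

```python
# Hedged sketch: convert a hydrophone recording into a log-magnitude spectrogram
# and split it into fixed-size time-frequency patches for a patch-based network.
import numpy as np
from scipy.signal import spectrogram

def spectrogram_patches(audio, fs=192_000, nperseg=1024, patch=(64, 64)):
    f, t, sxx = spectrogram(audio, fs=fs, nperseg=nperseg, noverlap=nperseg // 2)
    log_sxx = np.log1p(sxx)              # compress dynamic range
    ph, pw = patch
    patches = [log_sxx[i:i + ph, j:j + pw]
               for i in range(0, log_sxx.shape[0] - ph + 1, ph)
               for j in range(0, log_sxx.shape[1] - pw + 1, pw)]
    return f, t, np.stack(patches) if patches else np.empty((0, ph, pw))

# Example: one second of synthetic noise in place of a real recording.
_, _, p = spectrogram_patches(np.random.randn(192_000))
print(p.shape)  # (number_of_patches, 64, 64)
```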
Artificial intelligence (AI) classification holds promise as a novel and affordable screening tool for clinical management of ocular diseases. Rural and underserved areas, which suffer from lack of access to experienced ophthalmologists, may particularly benefit from this technology. Quantitative optical coherence tomography angiography (OCTA) imaging provides excellent capability to identify subtle vascular distortions, which are useful for classifying retinovascular diseases. However, application of AI for differentiation and classification of multiple eye diseases is not yet established. In this study, we demonstrate supervised machine-learning-based multi-task OCTA classification. We sought 1) to differentiate normal from diseased ocular conditions, 2) to differentiate different ocular disease conditions from each other, and 3) to stage the severity of each ocular condition. Quantitative OCTA features, including blood vessel tortuosity (BVT), blood vascular caliber (BVC), vessel perimeter index (VPI), blood vessel density (BVD), foveal avascular zone (FAZ) area (FAZ-A), and FAZ contour irregularity (FAZ-CI), were fully automatically extracted from the OCTA images. A stepwise backward elimination approach was employed to identify sensitive OCTA features and optimal feature combinations for the multi-task classification. For a proof-of-concept demonstration, diabetic retinopathy (DR) and sickle cell retinopathy (SCR) were used to validate the supervised machine learning classifier. The presented AI classification methodology is applicable and can be readily extended to other ocular diseases, holding promise to enable a mass-screening platform for clinical deployment and telemedicine.
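For the stepwise backward elimination mentioned above, a hedged scikit-learn sketch is given below; the linear-SVM estimator and the synthetic feature matrix are assumptions, while the six feature names follow the abstract.

```python
# Hedged sketch: stepwise backward elimination over the six quantitative OCTA
# features, using a linear SVM as an illustrative classifier on placeholder data.
import numpy as np
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.svm import SVC

features = ["BVT", "BVC", "VPI", "BVD", "FAZ-A", "FAZ-CI"]
rng = np.random.default_rng(0)
X = rng.normal(size=(120, len(features)))   # placeholder feature matrix
y = rng.integers(0, 3, size=120)            # placeholder labels: control / DR / SCR

selector = SequentialFeatureSelector(SVC(kernel="linear"),
                                     n_features_to_select=3,
                                     direction="backward", cv=5)
selector.fit(X, y)
selected = [f for f, keep in zip(features, selector.get_support()) if keep]
print("Retained features:", selected)
```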