Objectives: Glioblastomas are the most aggressive brain and central nervous system (CNS) tumors and carry a poor prognosis in adults. The purpose of this study is to develop a machine learning-based classification method using radiomic features of multi-parametric MRI to classify high-grade gliomas (HGG) and low-grade gliomas (LGG). Methods: Multi-parametric MRI of 80 patients with gliomas, 40 HGG and 40 LGG, from the MICCAI BraTS 2015 training database were used in this study. Each patient's T1, contrast-enhanced T1, T2, and Fluid Attenuated Inversion Recovery (FLAIR) MRIs, as well as the tumor contours, were provided in the database. Using the given contours, radiomic features were extracted from all four multi-parametric MRIs. A feature selection process using a two-sample t-test, the least absolute shrinkage and selection operator (LASSO), and a feature correlation threshold was then applied to various combinations of the T1, contrast-enhanced T1, T2, and FLAIR MRIs separately. The selected features were used to train, test, and cross-validate a random forest classifier to differentiate HGG and LGG. Finally, classification accuracy and the area under the curve (AUC) were used to evaluate the classification method. Results: With optimized parameters, the overall accuracy of our classification method was 0.913 on average, or 73 of 80 correct classifications (36/40 for HGG and 37/40 for LGG), with an AUC of 0.956 for the combination of FLAIR, T1, T1c, and T2 MRIs. Conclusion: This study shows that radiomic features derived from multi-parametric MRI can be used to accurately classify high- and low-grade gliomas. Radiomic features from multi-parametric MRI, in combination with even more advanced machine learning methods, may further elucidate the underlying tumor biology and response to therapy.
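As a rough illustration of the described pipeline, the sketch below chains a two-sample t-test filter, a correlation threshold, LASSO selection, and a cross-validated random forest in Python with SciPy and scikit-learn. It assumes the radiomic features have already been extracted into a matrix X (patients x features) with binary labels y (1 = HGG, 0 = LGG); the thresholds, hyperparameters, and synthetic data are illustrative, not the study's actual settings.

```python
# Sketch: t-test filter -> correlation threshold -> LASSO selection -> random forest.
import numpy as np
from scipy.stats import ttest_ind
from sklearn.linear_model import LassoCV
from sklearn.preprocessing import StandardScaler
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def select_features(X, y, p_thresh=0.05, corr_thresh=0.9):
    # 1) Keep features that differ significantly between HGG and LGG (two-sample t-test).
    _, p = ttest_ind(X[y == 1], X[y == 0], axis=0)
    keep = list(np.where(p < p_thresh)[0])
    # 2) Drop one feature from each highly correlated pair.
    if len(keep) > 1:
        corr = np.corrcoef(X[:, keep], rowvar=False)
        drop = {j for i in range(len(keep)) for j in range(i + 1, len(keep))
                if abs(corr[i, j]) > corr_thresh}
        keep = [k for idx, k in enumerate(keep) if idx not in drop]
    # 3) LASSO: retain features with non-zero coefficients.
    Xs = StandardScaler().fit_transform(X[:, keep])
    lasso = LassoCV(cv=5).fit(Xs, y)
    return [k for k, c in zip(keep, lasso.coef_) if c != 0]

# Example on synthetic data standing in for the 80-patient radiomic matrix.
rng = np.random.default_rng(0)
X, y = rng.normal(size=(80, 200)), np.repeat([1, 0], 40)
selected = select_features(X, y)
if selected:  # with pure noise the selection may come back empty
    rf = RandomForestClassifier(n_estimators=200, random_state=0)
    auc = cross_val_score(rf, X[:, selected], y, cv=5, scoring="roc_auc")
    print(f"{len(selected)} features selected, cross-validated AUC = {auc.mean():.3f}")
```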
Diffuse low-grade gliomas are slowly growing tumors that always recur after treatment. In this paper, we revisit the modeling of the tumor radius evolution before and after radiotherapy and propose a novel model that is simple, yet biologically motivated, and that remedies some shortcomings of previously proposed ones. We confront it with clinical data consisting of time series of tumor radius for 43 patient records, using a stochastic optimization technique, and obtain very good fits in all cases. Since our model describes the evolution of the tumor from the very first glioma cell, it gives access to the possible age of the tumor. Using the profile-likelihood technique to extract all the information from the data, we build confidence intervals for the tumor birth age and confirm that low-grade gliomas seem to appear in the late teenage years. Moreover, an approximate analytical expression for the temporal evolution of the tumor radius allows us to explain the correlations observed in the data.
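The abstract does not spell out the model itself, so the sketch below uses a hypothetical linear radial-expansion law r(t) = v (t - t0) purely as a stand-in, fits it to an invented radius time series with a stochastic optimizer (differential evolution), and builds a profile-likelihood confidence interval for the tumor birth time t0; all numbers are illustrative.

```python
# Sketch: fit a stand-in radial growth model r(t) = v*(t - t0) to one patient's
# radius time series with a stochastic optimizer, then profile the birth time t0.
import numpy as np
from scipy.optimize import differential_evolution, minimize_scalar
from scipy.stats import chi2

t = np.array([35.0, 36.5, 38.0, 39.5])      # patient age at each MRI (years), illustrative
r = np.array([24.5, 27.3, 28.8, 31.2])      # mean tumor radius (mm), illustrative

def sse(params):
    t0, v = params
    return np.sum((r - v * np.clip(t - t0, 0.0, None)) ** 2)

# Stochastic global fit (differential evolution) over plausible bounds.
fit = differential_evolution(sse, bounds=[(0.0, t.min()), (0.1, 10.0)], seed=0)
t0_hat, v_hat = fit.x
sigma2 = fit.fun / len(r)                   # residual variance at the optimum

def profile_nll(t0):
    # Profile out the velocity v for a fixed birth time t0 (1-D inner optimization).
    inner = minimize_scalar(lambda v: sse((t0, v)), bounds=(0.1, 10.0), method="bounded")
    return inner.fun / (2.0 * sigma2)

# 95% profile-likelihood confidence interval for t0 (chi-square with 1 dof).
thresh = profile_nll(t0_hat) + chi2.ppf(0.95, df=1) / 2.0
grid = np.sort(np.append(np.linspace(0.0, t.min() - 0.1, 400), t0_hat))
inside = grid[np.array([profile_nll(g) for g in grid]) <= thresh]
print(f"t0 = {t0_hat:.1f} y, v = {v_hat:.2f} mm/y, "
      f"95% CI for t0: [{inside.min():.1f}, {inside.max():.1f}]")
```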
In higher education institutions, many students struggle to complete their courses because no dedicated support is offered to those who need special attention in the courses they register for. Machine learning techniques can be utilized to predict students' grades in different courses. Such techniques would help students improve their performance based on predicted grades and would enable instructors to identify individuals who might need assistance in a course. In this paper, we use Collaborative Filtering (CF), Matrix Factorization (MF), and Restricted Boltzmann Machine (RBM) techniques to systematically analyze real-world data collected from Information Technology University (ITU), Lahore, Pakistan. We evaluate the academic performance of ITU students admitted to the bachelor's degree program in ITU's Electrical Engineering department. The RBM technique is found to outperform the other techniques in predicting students' performance in a particular course.
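As one concrete example of the three techniques, the sketch below applies plain matrix factorization to a small student-by-course grade matrix with missing entries, trained by stochastic gradient descent on the observed grades only; the grades, matrix size, and hyperparameters are invented for illustration and are not the ITU data.

```python
# Sketch: matrix factorization for grade prediction on a student x course matrix
# with missing entries, trained by SGD on the observed grades only.
import numpy as np

rng = np.random.default_rng(0)
R = np.array([                 # toy GPA-style grade matrix (0 = course not yet taken)
    [3.7, 3.0, 0.0, 2.3],
    [2.7, 0.0, 3.3, 2.0],
    [0.0, 3.3, 3.7, 0.0],
    [3.0, 2.3, 0.0, 3.3],
])
observed = np.argwhere(R > 0)

k, lr, reg = 2, 0.01, 0.05      # latent factors, learning rate, L2 regularization
P = rng.normal(scale=0.1, size=(R.shape[0], k))   # student factors
Q = rng.normal(scale=0.1, size=(R.shape[1], k))   # course factors

for epoch in range(2000):
    for i, j in observed:
        err = R[i, j] - P[i] @ Q[j]
        P[i] += lr * (err * Q[j] - reg * P[i])
        Q[j] += lr * (err * P[i] - reg * Q[j])

# Predicted grade for student 0 in the course they have not taken (column 2).
print(f"predicted grade: {P[0] @ Q[2]:.2f}")
```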
Identifying prostate cancer patients who harbor aggressive forms of the disease remains a significant clinical challenge. To shed light on this problem, we develop an approach based on multispectral deep-ultraviolet (UV) microscopy that provides novel quantitative insight into the aggressiveness and grade of this disease. First, we find that UV spectral signatures from endogenous molecules give rise to a phenotypical continuum that differentiates critical structures of thin tissue sections with subcellular spatial resolution, including nuclei, cytoplasm, stroma, basal cells, nerves, and inflammation. Further, we show that this phenotypical continuum can be applied as a surrogate biomarker of prostate cancer malignancy, where patients with the most aggressive tumors show a ubiquitous glandular phenotypical shift. Lastly, we adapt a two-part Cycle-consistent Generative Adversarial Network to translate the label-free deep-UV images into virtual hematoxylin and eosin (H&E) stained images. Agreement between the virtual H&E images and the gold-standard H&E-stained tissue sections is evaluated by a panel of pathologists, who find that the two modalities are in excellent agreement. This work has significant implications for improving our ability to objectively quantify prostate cancer grade and aggressiveness, thus improving the management and clinical outcomes of prostate cancer patients. The same approach can also be applied broadly to other tumor types to achieve low-cost, stain-free, quantitative histopathological analysis.
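A minimal sketch of the cycle-consistency training idea behind such unpaired UV-to-H&E translation is given below, with tiny stand-in networks, an assumed single-channel UV input and 3-channel H&E output, and the discriminator updates omitted; it is not the paper's architecture or training recipe.

```python
# Sketch: generator update for unpaired image-to-image translation with a
# cycle-consistency loss (toy networks; discriminator updates omitted).
import torch
import torch.nn as nn

def tiny_cnn(in_ch, out_ch):
    # Toy fully convolutional net standing in for a real generator/discriminator.
    return nn.Sequential(
        nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
        nn.Conv2d(32, out_ch, 3, padding=1),
    )

# Generators: UV -> virtual H&E (G) and H&E -> UV (F); one discriminator per domain.
G, F = tiny_cnn(1, 3), tiny_cnn(3, 1)
D_he, D_uv = tiny_cnn(3, 1), tiny_cnn(1, 1)

adv, l1 = nn.MSELoss(), nn.L1Loss()          # least-squares GAN loss + L1 cycle loss
opt_g = torch.optim.Adam(list(G.parameters()) + list(F.parameters()), lr=2e-4)

def generator_step(uv, he, lambda_cyc=10.0):
    """One generator update on an unpaired batch of UV and H&E patches."""
    fake_he, fake_uv = G(uv), F(he)
    pred_he, pred_uv = D_he(fake_he), D_uv(fake_uv)
    # Adversarial terms: fool each domain discriminator.
    loss_adv = adv(pred_he, torch.ones_like(pred_he)) + adv(pred_uv, torch.ones_like(pred_uv))
    # Cycle-consistency: translating there and back should recover the input.
    loss_cyc = l1(F(fake_he), uv) + l1(G(fake_uv), he)
    loss = loss_adv + lambda_cyc * loss_cyc
    opt_g.zero_grad(); loss.backward(); opt_g.step()
    return loss.item()

# Example with random tensors standing in for real image patches.
print(generator_step(torch.rand(2, 1, 64, 64), torch.rand(2, 3, 64, 64)))
```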
Artificial intelligence (AI) classification holds promise as a novel and affordable screening tool for the clinical management of ocular diseases. Rural and underserved areas, which suffer from a lack of access to experienced ophthalmologists, may particularly benefit from this technology. Quantitative optical coherence tomography angiography (OCTA) imaging provides excellent capability to identify subtle vascular distortions, which are useful for classifying retinovascular diseases. However, the application of AI for differentiation and classification of multiple eye diseases is not yet established. In this study, we demonstrate supervised machine learning-based multi-task OCTA classification. We sought 1) to differentiate normal from diseased ocular conditions, 2) to differentiate different ocular disease conditions from each other, and 3) to stage the severity of each ocular condition. Quantitative OCTA features, including blood vessel tortuosity (BVT), blood vascular caliber (BVC), vessel perimeter index (VPI), blood vessel density (BVD), foveal avascular zone (FAZ) area (FAZ-A), and FAZ contour irregularity (FAZ-CI), were extracted fully automatically from the OCTA images. A stepwise backward elimination approach was employed to identify sensitive OCTA features and optimal feature combinations for the multi-task classification. For a proof-of-concept demonstration, diabetic retinopathy (DR) and sickle cell retinopathy (SCR) were used to validate the supervised machine learning classifier. The presented AI classification methodology is applicable and can be readily extended to other ocular diseases, holding promise to enable a mass-screening platform for clinical deployment and telemedicine.
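The sketch below illustrates a stepwise backward elimination loop over the six named OCTA features, at each step dropping the feature whose removal best preserves cross-validated accuracy; the feature values are synthetic and the linear SVM classifier is an assumption, since the abstract does not name the classifier used.

```python
# Sketch: stepwise backward elimination over the six OCTA features, dropping the
# feature whose removal best preserves cross-validated accuracy at each step.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

features = ["BVT", "BVC", "VPI", "BVD", "FAZ-A", "FAZ-CI"]
rng = np.random.default_rng(0)
X = rng.normal(size=(120, len(features)))        # synthetic stand-in for OCTA metrics
y = rng.integers(0, 3, size=120)                 # e.g., 0 = control, 1 = DR, 2 = SCR

def cv_acc(cols):
    clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
    return cross_val_score(clf, X[:, cols], y, cv=5).mean()

selected = list(range(len(features)))
best = cv_acc(selected)
while len(selected) > 1:
    # Try removing each remaining feature; keep the removal that scores best.
    scores = [(cv_acc([c for c in selected if c != f]), f) for f in selected]
    score, worst = max(scores)
    if score < best:          # stop when every removal hurts accuracy
        break
    best, selected = score, [c for c in selected if c != worst]

print("optimal feature combination:", [features[c] for c in selected], f"(CV acc {best:.2f})")
```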
As bone and air produce weak signals with conventional MR sequences, segmentation of these tissues is particularly difficult in MRI. We propose to integrate patch-based anatomical signatures and an auto-context model into a machine learning framework to iteratively segment MRI into air, bone, and soft tissue. The proposed semantic classification random forest (SCRF) method consists of a training stage and a segmentation stage. During the training stage, patch-based anatomical features were extracted from registered MRI-CT training images, and the most informative features were identified to train a series of classification forests with an auto-context model. During the segmentation stage, the selected features were extracted from the MRI and fed into the trained forests for MRI segmentation. The Dice similarity coefficients (DSC) for air, bone, and soft tissue obtained with the proposed SCRF were 0.976, 0.819, and 0.932, compared to 0.916, 0.673, and 0.830 with a standard random forest (RF), and 0.942, 0.791, and 0.917 with U-Net. SCRF also demonstrated superior sensitivity and specificity over RF and U-Net for all three tissue types. The proposed segmentation technique could be a useful tool to segment bone, air, and soft tissue, and has the potential to be applied to attenuation correction in PET/MRI systems, MRI-only radiation treatment planning, and MR-guided focused ultrasound surgery.
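A stripped-down sketch of the auto-context idea is shown below: a cascade of random forests in which each stage appends the previous stage's class-probability estimates to the (precomputed, here synthetic) patch features. The real SCRF additionally uses MRI-CT registration, informative-feature selection, and spatial context from neighboring voxels, none of which are reproduced here.

```python
# Sketch: auto-context random-forest cascade for voxel-wise MRI classification into
# air / bone / soft tissue. Patch-based features are assumed to be precomputed.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def train_auto_context(X, y, n_stages=3):
    """Train a cascade of forests; each stage sees the previous stage's probabilities."""
    forests, context = [], np.zeros((X.shape[0], 3))   # 3 classes: air, bone, soft tissue
    for _ in range(n_stages):
        rf = RandomForestClassifier(n_estimators=100, random_state=0)
        rf.fit(np.hstack([X, context]), y)
        context = rf.predict_proba(np.hstack([X, context]))
        forests.append(rf)
    return forests

def predict_auto_context(forests, X):
    context = np.zeros((X.shape[0], 3))
    for rf in forests:
        context = rf.predict_proba(np.hstack([X, context]))
    return context.argmax(axis=1)

# Toy example: 5000 voxels with 20 patch features each (synthetic stand-ins).
rng = np.random.default_rng(0)
X, y = rng.normal(size=(5000, 20)), rng.integers(0, 3, size=5000)
forests = train_auto_context(X, y)
labels = predict_auto_context(forests, X)
print("label counts (air, bone, soft tissue):", np.bincount(labels, minlength=3))
```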