According to the World Health Organization, cancer is the second leading cause of death worldwide, responsible for over 9.5 million deaths in 2018 alone. Brain tumors account for one out of every four cancer deaths. Accurate and timely diagnosis of brain tumors can lead to more effective treatment. To date, several image classification approaches have been proposed to aid diagnosis and treatment. We propose an encoder layer that uses post-max-pooling features for residual learning. Our approach shows promising results, improving tumor classification accuracy in MR images on a limited medical image dataset. Experimental evaluations of this model on a dataset of 3064 MR images show 95-98% accuracy, better than previous studies on this database.
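The following is a minimal sketch of what "residual learning on post-max-pooling features" could look like in practice: the feature map produced after max pooling serves as the identity path of a residual block. The layer widths, the 1x1 projection, and the class name are illustrative assumptions, not the authors' exact design.

```python
import torch
import torch.nn as nn

class PostPoolResidualEncoder(nn.Module):
    """Encoder block whose residual identity path is the post-max-pooling feature map."""
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.pool = nn.MaxPool2d(kernel_size=2)          # downsample first
        self.branch = nn.Sequential(                     # residual (learned) branch
            nn.Conv2d(in_ch, out_ch, 3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1),
            nn.BatchNorm2d(out_ch),
        )
        # 1x1 projection so the pooled identity matches the branch channels
        self.proj = nn.Conv2d(in_ch, out_ch, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        pooled = self.pool(x)                            # post-max-pooling features
        return torch.relu(self.branch(pooled) + self.proj(pooled))

# toy usage on a single-channel 256x256 MR slice
block = PostPoolResidualEncoder(1, 32)
out = block(torch.randn(1, 1, 256, 256))                 # -> (1, 32, 128, 128)
```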
Cancer is a complex disease that yields different types of information depending on the scale of observation. While most tumor diagnostics are performed by examining histopathological slides, radiology images can contribute additional knowledge toward more effective cancer diagnostics. This work investigates a deep learning method that combines whole slide images and magnetic resonance images to classify tumors. In particular, our solution comprises a powerful, generic, and modular architecture for whole slide image classification. Experiments are prospectively conducted on the 2020 Computational Precision Medicine challenge, a three-class unbalanced classification task. We report cross-validation (resp. validation) balanced accuracy, kappa, and F1 of 0.913, 0.897, and 0.951 (resp. 0.91, 0.90, and 0.94). For research purposes, including reproducibility and direct performance comparisons, our final submitted models are usable off-the-shelf in a Docker image available at https://hub.docker.com/repository/docker/marvinler/cpm_2020_marvinler.
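The abstract does not detail the submitted architecture, but the general shape of a two-modality design it describes can be sketched as below: a tile-level encoder for the whole slide image, a slice-level encoder for the MR image, and late fusion into a three-class head. Every module choice here is an assumption for illustration, not the challenge submission itself.

```python
import torch
import torch.nn as nn

class TwoBranchTumorClassifier(nn.Module):
    """Hypothetical late-fusion classifier over WSI tiles and MRI slices."""
    def __init__(self, n_classes: int = 3, feat: int = 128):
        super().__init__()
        self.tile_encoder = nn.Sequential(                # per-tile WSI features
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, feat))
        self.mri_encoder = nn.Sequential(                 # per-slice MRI features
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, feat))
        self.head = nn.Linear(2 * feat, n_classes)

    def forward(self, tiles: torch.Tensor, mri: torch.Tensor) -> torch.Tensor:
        # tiles: (n_tiles, 3, H, W); mri: (n_slices, 1, H, W)
        wsi_feat = self.tile_encoder(tiles).mean(dim=0)   # aggregate tiles by mean pooling
        mri_feat = self.mri_encoder(mri).mean(dim=0)      # aggregate slices the same way
        return self.head(torch.cat([wsi_feat, mri_feat])) # 3-class logits

logits = TwoBranchTumorClassifier()(torch.randn(8, 3, 224, 224), torch.randn(4, 1, 224, 224))
```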
The performance of image classification methods relies heavily on high-quality annotations, which are not easily affordable, particularly for medical data. To alleviate this limitation, we propose in this study a weakly supervised image classification method based on a combination of hand-crafted features. We hypothesize that integrating these hand-crafted features with a long short-term memory (LSTM) classifier can reduce the adverse effects of weak labels on classification accuracy. Our proposed algorithm selects appropriate representations of the data in the wavelet and discrete cosine transform (DCT) domains. This information is then fed into an LSTM network to account for the sequential nature of the data. The proposed efficient, low-dimensional features exploit the power of shallow deep learning models to achieve higher performance at lower computational cost. To show the efficacy of the proposed strategy, we experimented with brain tumor grade classification and achieved state-of-the-art performance at a resolution of 256 x 256. We also conducted a comprehensive set of experiments to analyze the effect of each component on performance.
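A minimal sketch of the described pipeline follows: compute low-dimensional hand-crafted features (a low-frequency 2D-DCT block and wavelet subband energies) per slice, then feed the resulting sequence into an LSTM classifier. The wavelet family, the number of retained DCT coefficients, and the hidden size are illustrative assumptions.

```python
import numpy as np
import pywt                                   # PyWavelets
import torch
import torch.nn as nn
from scipy.fftpack import dct

def slice_features(img: np.ndarray, k: int = 16) -> np.ndarray:
    """Concatenate the k x k low-frequency 2D-DCT block with wavelet subband energies."""
    d = dct(dct(img, axis=0, norm="ortho"), axis=1, norm="ortho")[:k, :k].ravel()
    cA, (cH, cV, cD) = pywt.dwt2(img, "haar")
    energies = np.array([np.mean(c ** 2) for c in (cA, cH, cV, cD)])
    return np.concatenate([d, energies]).astype(np.float32)

class LSTMGradeClassifier(nn.Module):
    def __init__(self, in_dim: int, n_classes: int = 2):
        super().__init__()
        self.lstm = nn.LSTM(in_dim, 64, batch_first=True)
        self.fc = nn.Linear(64, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:   # x: (batch, seq_len, in_dim)
        _, (h, _) = self.lstm(x)
        return self.fc(h[-1])                              # classify from the last hidden state

# toy usage: a "sequence" of 10 slices of a 256 x 256 scan
seq = np.stack([slice_features(np.random.rand(256, 256)) for _ in range(10)])
logits = LSTMGradeClassifier(seq.shape[1])(torch.from_numpy(seq).unsqueeze(0))
```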
Brain tumor segmentation plays an essential role in medical image analysis. In recent studies, deep convolutional neural networks (DCNNs) have proven extremely powerful for tumor segmentation tasks. We propose in this paper a novel training method that enhances segmentation results by adding an additional classification branch to the network. The whole network was trained end-to-end on the Multimodal Brain Tumor Segmentation Challenge (BraTS) 2020 training dataset. On the BraTS 2020 validation set, it achieved average Dice scores of 78.43%, 89.99%, and 84.22% for the enhancing tumor, the whole tumor, and the tumor core, respectively.
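A minimal sketch of the core idea, an auxiliary classification branch attached to a segmentation network and trained jointly end-to-end, is given below. The backbone, branch sizes, and loss weighting are illustrative assumptions, not the BraTS submission itself.

```python
import torch
import torch.nn as nn

class SegWithClsBranch(nn.Module):
    """Segmentation network with an auxiliary image-level classification head."""
    def __init__(self, in_ch: int = 4, n_seg: int = 4, n_cls: int = 2):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv3d(in_ch, 16, 3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 32, 3, padding=1), nn.ReLU())
        self.seg_head = nn.Conv3d(32, n_seg, 1)            # voxel-wise segmentation logits
        self.cls_head = nn.Sequential(                     # auxiliary classification branch
            nn.AdaptiveAvgPool3d(1), nn.Flatten(), nn.Linear(32, n_cls))

    def forward(self, x):
        feat = self.encoder(x)
        return self.seg_head(feat), self.cls_head(feat)

model = SegWithClsBranch()
x = torch.randn(1, 4, 32, 64, 64)                          # (batch, modalities, D, H, W)
seg_logits, cls_logits = model(x)
# joint loss: segmentation term plus a (hypothetically weighted) classification term
loss = nn.CrossEntropyLoss()(seg_logits, torch.zeros(1, 32, 64, 64, dtype=torch.long)) \
     + 0.5 * nn.CrossEntropyLoss()(cls_logits, torch.zeros(1, dtype=torch.long))
loss.backward()                                            # one end-to-end training step
```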
Brain tumors are one of the leading causes of cancer-related death globally among children and adults. Precise classification of brain tumor grade (low-grade and high-grade glioma) at an early stage plays a key role in successful prognosis and treatment planning. With recent advances in deep learning, artificial intelligence-enabled brain tumor grading systems can assist radiologists in the interpretation of medical images within seconds. The performance of deep learning techniques is, however, highly dependent on the size of the annotated dataset, and it is extremely challenging to label a large quantity of medical images given the complexity and volume of medical data. In this work, we propose a novel transfer learning based active learning framework to reduce the annotation cost while maintaining the stability and robustness of model performance for brain tumor classification. We employed a 2D slice-based approach to train and fine-tune our model on a Magnetic Resonance Imaging (MRI) training dataset of 203 patients and a validation dataset of 66 patients, which served as the baseline. With our proposed method, the model achieved an Area Under the Receiver Operating Characteristic (ROC) Curve (AUC) of 82.89% on a separate test dataset of 66 patients, 2.92% higher than the baseline AUC while saving at least 40% of the labeling cost. To further examine the robustness of our method, we created a balanced dataset and applied the same procedure. The model achieved an AUC of 82% compared with 78.48% for the baseline, which confirms the robustness and stability of our proposed transfer learning framework augmented with active learning while significantly reducing the size of the training data.
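The loop the abstract describes can be sketched as follows: start from a pretrained backbone (transfer learning), score the unlabeled pool by predictive uncertainty, and move the most uncertain slices into the labeled set each round (active learning). The backbone choice, acquisition size, and uncertainty measure below are illustrative assumptions.

```python
import torch
import torch.nn as nn
from torchvision import models

def build_model(n_classes: int = 2) -> nn.Module:
    # transfer learning: ImageNet-pretrained backbone with a new task head
    net = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    net.fc = nn.Linear(net.fc.in_features, n_classes)
    return net

@torch.no_grad()
def select_for_labeling(model: nn.Module, pool: torch.Tensor, k: int = 8) -> torch.Tensor:
    """Return indices of the k most uncertain pool samples (least-confidence sampling)."""
    model.eval()
    probs = torch.softmax(model(pool), dim=1)
    uncertainty = 1.0 - probs.max(dim=1).values
    return uncertainty.topk(k).indices

model = build_model()
unlabeled_pool = torch.randn(32, 3, 224, 224)              # stand-in for 2D MRI slices
to_label = select_for_labeling(model, unlabeled_pool)
# ...an annotator labels the selected slices, the model is fine-tuned, and the round repeats
```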
Brain tumors are among the most common and deadliest diseases and occur across all age groups. Radiologists generally rely on the MRI modality to identify and diagnose tumors. Correct identification of tumor regions and tumor type can aid diagnosis and follow-up treatment planning. However, analysing such scans is a complex and time-consuming task for any radiologist. Motivated by deep learning based computer-aided diagnosis systems, this paper proposes a multi-task attention guided encoder-decoder network (MAG-Net) to classify and segment brain tumor regions in MRI images. MAG-Net is trained and evaluated on the Figshare dataset, which includes coronal, axial, and sagittal views with three tumor types: meningioma, glioma, and pituitary tumor. In exhaustive experimental trials, the model achieved promising results compared to existing state-of-the-art models while having the fewest training parameters among them.
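Below is a minimal sketch of a multi-task encoder-decoder with an attention-gated skip connection that produces both a tumor mask and a tumor-type prediction, the general pattern the abstract describes. The exact MAG-Net blocks differ; the attention gate, layer widths, and class name here are assumptions for illustration.

```python
import torch
import torch.nn as nn

class MultiTaskAttnEncDec(nn.Module):
    """Hypothetical multi-task encoder-decoder: segmentation mask + tumor-type logits."""
    def __init__(self, n_types: int = 3):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU())
        self.down = nn.Sequential(nn.MaxPool2d(2), nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
        self.attn = nn.Sequential(nn.Conv2d(16, 1, 1), nn.Sigmoid())   # gate the skip features
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.seg_head = nn.Conv2d(32, 1, 1)                            # binary tumor mask
        self.cls_head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                      nn.Linear(32, n_types))          # tumor type

    def forward(self, x):
        skip = self.enc(x)                      # (B, 16, H, W)
        bottom = self.down(skip)                # (B, 32, H/2, W/2)
        gated = skip * self.attn(skip)          # attention-weighted skip connection
        dec = torch.cat([self.up(bottom), gated], dim=1)               # (B, 32, H, W)
        return self.seg_head(dec), self.cls_head(bottom)

mask_logits, type_logits = MultiTaskAttnEncDec()(torch.randn(1, 1, 128, 128))
```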