
Graph Embedding Using Infomax for ASD Classification and Brain Functional Difference Detection

Posted by Xiaoxiao Li
Publication date: 2019
Research language: English





Significant progress has been made using fMRI to characterize the brain changes that occur in ASD, a complex neuro-developmental disorder. However, due to the high dimensionality and low signal-to-noise ratio of fMRI, it is difficult to embed informative and robust brain-regional fMRI representations for both graph-level classification and region-level functional difference detection between ASD and healthy control (HC) groups. Here, we model whole-brain fMRI as a graph, which preserves geometrical and temporal information, and use a Graph Neural Network (GNN) to learn from the graph-structured fMRI data. We investigate the potential of adding a mutual information (MI) loss (Infomax), an unsupervised term that encourages high MI between each nodal representation and the corresponding graph-level summary representation, to learn a better graph embedding. Specifically, we develop a pipeline comprising a GNN encoder, a classifier, and a discriminator, which forces the encoded nodal representations both to benefit classification and to reveal the common nodal patterns in a graph. We optimize the graph-level classification loss and the Infomax loss simultaneously. We demonstrate that the Infomax term, acting as a regularizer, improves classification performance. Furthermore, we find separable nodal representations of the ASD and HC groups in the prefrontal cortex, cingulate cortex, visual regions, and other brain regions related to social, emotional, and executive function. Compared with a GNN trained with the classification loss alone, the proposed pipeline yields more robust ASD classification models. Moreover, the separable nodal representations can detect functional differences between the two groups and contribute to revealing new ASD biomarkers.
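To make the joint objective concrete, the sketch below shows one way to combine a graph-level classification loss with a Deep-Graph-Infomax-style MI term in PyTorch. It is a minimal illustration under stated assumptions, not the authors' implementation: the GCNLayer, InfomaxGNN, and joint_loss names, the bilinear discriminator, and the feature-shuffling corruption are choices modeled on the Infomax literature rather than details taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GCNLayer(nn.Module):
    """Dense graph convolution: H' = ReLU((A_hat @ H) W)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, a_hat, h):
        return F.relu(self.lin(a_hat @ h))

class InfomaxGNN(nn.Module):
    """GNN encoder + graph-level classifier + Infomax discriminator."""
    def __init__(self, in_dim, hid_dim, n_classes):
        super().__init__()
        self.enc1 = GCNLayer(in_dim, hid_dim)
        self.enc2 = GCNLayer(hid_dim, hid_dim)
        self.classifier = nn.Linear(hid_dim, n_classes)
        self.disc = nn.Bilinear(hid_dim, hid_dim, 1)  # scores (node, summary) pairs

    def forward(self, a_hat, x):
        h = self.enc2(a_hat, self.enc1(a_hat, x))   # nodal representations
        s = torch.sigmoid(h.mean(dim=0))            # graph-level summary
        logits = self.classifier(h.mean(dim=0))     # graph-level prediction
        idx = torch.randperm(x.size(0))             # corruption: shuffle node features
        h_neg = self.enc2(a_hat, self.enc1(a_hat, x[idx]))
        pos = self.disc(h, s.expand_as(h)).squeeze(-1)
        neg = self.disc(h_neg, s.expand_as(h_neg)).squeeze(-1)
        return logits, pos, neg

def joint_loss(logits, label, pos, neg, lam=0.5):
    """Classification loss plus the Infomax term (BCE on pos/neg pairs)."""
    cls = F.cross_entropy(logits.unsqueeze(0), label.unsqueeze(0))
    mi = (F.binary_cross_entropy_with_logits(pos, torch.ones_like(pos))
          + F.binary_cross_entropy_with_logits(neg, torch.zeros_like(neg)))
    return cls + lam * mi

# Toy usage on a random 10-node brain graph (all sizes illustrative).
N, F_IN = 10, 16
model = InfomaxGNN(F_IN, 32, 2)
logits, pos, neg = model(torch.eye(N), torch.randn(N, F_IN))
joint_loss(logits, torch.tensor(1), pos, neg).backward()
```

Training the discriminator to separate real (node, summary) pairs from corrupted ones is a standard lower-bound proxy for maximizing mutual information, which is why the mi term above can act as the regularizer the abstract describes.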




Read also

Brain image analysis has advanced substantially in recent years with the proliferation of neuroimaging datasets acquired at different resolutions. While research on brain image super-resolution has developed rapidly, brain graph super-resolution remains poorly investigated because of the complex nature of non-Euclidean graph data. In this paper, we propose the first-ever deep graph super-resolution (GSR) framework that attempts to automatically generate high-resolution (HR) brain graphs with N′ nodes (i.e., anatomical regions of interest (ROIs)) from low-resolution (LR) graphs with N nodes, where N < N′. First, we formalize our GSR problem as a node feature embedding learning task. Once the HR node embeddings are learned, the pairwise connectivity strength between brain ROIs can be derived through an aggregation rule based on a novel Graph U-Net architecture. While the Graph U-Net is typically a node-focused architecture in which graph embedding depends mainly on node attributes, we propose a graph-focused architecture in which the node feature embedding is based on the graph topology. Second, inspired by graph spectral theory, we break the symmetry of the U-Net architecture by super-resolving the low-resolution brain graph structure and node content with a GSR layer and two graph convolutional network layers to further learn the node embeddings in the HR graph. Third, to handle the domain shift between the ground-truth and the predicted HR brain graphs, we incorporate adversarial regularization to align their respective distributions. Our proposed AGSR-Net framework outperformed its variants in predicting high-resolution functional brain graphs from low-resolution ones. Our AGSR-Net code is available on GitHub at https://github.com/basiralab/AGSR-Net.
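As a rough illustration of the core graph super-resolution idea (learn HR node embeddings, then derive pairwise connectivity by an aggregation rule), here is a minimal PyTorch sketch. The GSRSketch name, the learnable lifting matrix, and the inner-product aggregation are assumptions made for illustration; the actual AGSR-Net (Graph U-Net, spectral GSR layer, adversarial regularization) is in the linked repository.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GSRSketch(nn.Module):
    """Toy graph super-resolution: lift N LR nodes to N_hr HR nodes,
    derive HR connectivity by an inner-product aggregation rule, then
    refine node content with two GCN-style layers."""
    def __init__(self, n_lr, n_hr, feat_dim):
        super().__init__()
        self.lift = nn.Parameter(0.1 * torch.randn(n_hr, n_lr))  # learnable lifting
        self.gcn1 = nn.Linear(feat_dim, feat_dim)
        self.gcn2 = nn.Linear(feat_dim, feat_dim)

    def forward(self, x_lr):
        x_hr = self.lift @ x_lr                  # (N_hr, F): HR node content
        a_hr = torch.sigmoid(x_hr @ x_hr.t())    # HR pairwise connectivity
        h = F.relu(self.gcn1(a_hr @ x_hr))       # refine HR embeddings
        h = self.gcn2(a_hr @ h)
        return torch.sigmoid(h @ h.t()), h       # predicted HR graph, embeddings

# Usage: 35-ROI LR connectome features lifted to a 160-ROI HR graph.
model = GSRSketch(n_lr=35, n_hr=160, feat_dim=35)
a_hr_pred, h = model(torch.randn(35, 35))
```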
Cancer is a complex disease that provides various types of information depending on the scale of observation. While most tumor diagnostics are performed by observing histopathological slides, radiology images can yield additional knowledge that improves the efficacy of cancer diagnostics. This work investigates a deep learning method combining whole slide images and magnetic resonance images to classify tumors. In particular, our solution comprises a powerful, generic, and modular architecture for whole slide image classification. Experiments were conducted prospectively on the 2020 Computational Precision Medicine challenge, a 3-class unbalanced classification task. We report cross-validation (resp. validation) balanced accuracy, kappa, and F1 of 0.913, 0.897, and 0.951 (resp. 0.91, 0.90, and 0.94). For research purposes, including reproducibility and direct performance comparisons, our final submitted models are usable off-the-shelf in a Docker image available at https://hub.docker.com/repository/docker/marvinler/cpm_2020_marvinler.
We present a joint graph convolution-image convolution neural network as our submission to the Brain Tumor Segmentation (BraTS) 2021 challenge. We model each brain as a graph composed of distinct image regions, which is initially segmented by a graph neural network (GNN). Subsequently, the tumorous volume identified by the GNN is further refined by a simple (voxel) convolutional neural network (CNN), which produces the final segmentation. This approach captures both global brain feature interactions via the graphical representation and local image details through the use of convolutional filters. We find that the GNN component by itself can effectively identify and segment the brain tumors. The addition of the CNN further improves the median performance of the model by 2 percent across all metrics evaluated. On the validation set, our joint GNN-CNN model achieves mean Dice scores of 0.89, 0.81, 0.73 and mean Hausdorff distances (95th percentile) of 6.8, 12.6, 28.2mm on the whole tumor, core tumor, and enhancing tumor, respectively.
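A minimal sketch of this two-stage idea (coarse GNN segmentation over region nodes, then voxel-level CNN refinement) might look as follows in PyTorch. All names and shapes here (TwoStageSegSketch, node_of_voxel, the single-linear-layer message step) are illustrative assumptions, not the submission's architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoStageSegSketch(nn.Module):
    """Stage 1: a GNN labels supervoxel nodes as tumor / background.
    Stage 2: a small 3D CNN refines voxels inside the flagged region."""
    def __init__(self, node_dim, hid_dim):
        super().__init__()
        self.gnn = nn.Linear(node_dim, hid_dim)   # stand-in for a message-passing step
        self.node_head = nn.Linear(hid_dim, 2)    # per-node tumor logits
        self.cnn = nn.Sequential(                 # voxel-level refinement
            nn.Conv3d(1, 8, 3, padding=1), nn.ReLU(),
            nn.Conv3d(8, 2, 3, padding=1),
        )

    def forward(self, a_hat, node_feats, volume, node_of_voxel):
        h = F.relu(self.gnn(a_hat @ node_feats))      # aggregate neighbor features
        node_logits = self.node_head(h)               # (N, 2) coarse node labels
        coarse = node_logits.argmax(-1)[node_of_voxel].float()  # paint onto voxels
        masked = (volume * coarse).unsqueeze(0).unsqueeze(0)    # crude restriction
        return node_logits, self.cnn(masked)          # refined voxel logits

# Usage: 50 region nodes over a 32^3 volume (all sizes illustrative).
vol = torch.rand(32, 32, 32)
node_of_voxel = torch.randint(0, 50, (32, 32, 32))   # voxel -> region index
model = TwoStageSegSketch(node_dim=8, hid_dim=16)
node_logits, refined = model(torch.eye(50), torch.randn(50, 8), vol, node_of_voxel)
```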
According to the World Health Organization, cancer is the second leading cause of death worldwide, responsible for over 9.5 million deaths in 2018 alone. Brain tumors account for one out of every four cancer deaths. Accurate and timely diagnosis of brain tumors will lead to more effective treatments. To date, several image classification approaches have been proposed to aid diagnosis and treatment. We propose an encoder layer that uses post-max-pooling features for residual learning. Our approach shows promising results, improving tumor classification accuracy in MR images using a limited medical image dataset. Experimental evaluations of this model on a dataset of 3064 MR images show 95-98% accuracy, which is better than previous studies on this database.
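The encoder layer is not specified in detail in this abstract, but one plausible reading, residual learning applied on top of max-pooled features, could be sketched as below; the block structure and the PostPoolResidualBlock name are assumptions, not the authors' exact design.

```python
import torch
import torch.nn as nn

class PostPoolResidualBlock(nn.Module):
    """Residual learning on post-max-pooling features: the pooled input is
    the identity path and a small conv stack learns the residual."""
    def __init__(self, channels):
        super().__init__()
        self.pool = nn.MaxPool2d(2)
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
        )
        self.relu = nn.ReLU()

    def forward(self, x):
        pooled = self.pool(x)                          # post-max-pooling features
        return self.relu(pooled + self.body(pooled))   # residual on pooled features

# Usage on a batch of single-channel 256x256 MR slices.
out = PostPoolResidualBlock(1)(torch.randn(4, 1, 256, 256))  # -> (4, 1, 128, 128)
```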
Brain tumors are one of the leading causes of cancer-related death globally among children and adults. Precise classification of brain tumor grade (low-grade versus high-grade glioma) at an early stage plays a key role in successful prognosis and treatment planning. With recent advances in deep learning, artificial-intelligence-enabled brain tumor grading systems can assist radiologists in interpreting medical images within seconds. The performance of deep learning techniques is, however, highly dependent on the size of the annotated dataset. Labeling a large quantity of medical images is extremely challenging given the complexity and volume of medical data. In this work, we propose a novel transfer learning-based active learning framework to reduce the annotation cost while maintaining the stability and robustness of the model performance for brain tumor classification. We employed a 2D slice-based approach to train and fine-tune our model on a Magnetic Resonance Imaging (MRI) training dataset of 203 patients and a validation dataset of 66 patients, which served as the baseline. With our proposed method, the model achieved an Area Under the Receiver Operating Characteristic (ROC) Curve (AUC) of 82.89% on a separate test dataset of 66 patients, 2.92% higher than the baseline AUC while saving at least 40% of the labeling cost. To further examine the robustness of our method, we created a balanced dataset that underwent the same procedure. The model achieved an AUC of 82% compared with 78.48% for the baseline, which confirms the robustness and stability of our proposed transfer learning framework augmented with active learning while significantly reducing the size of the training data.
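The acquisition strategy is not detailed in this abstract; as a generic illustration of how an active learning round can cut labeling cost, the sketch below uses least-confidence sampling. Both the strategy and the predict_proba interface are assumptions, not the authors' method.

```python
import numpy as np

def least_confidence_round(model, unlabeled_x, budget):
    """One acquisition round: rank unlabeled slices by the confidence of the
    top predicted class and return the `budget` least-confident indices,
    which are then sent to an annotator.

    Assumes model.predict_proba returns an (n_samples, n_classes) array
    (a scikit-learn-style interface used here purely for illustration)."""
    probs = model.predict_proba(unlabeled_x)
    confidence = probs.max(axis=1)          # probability of top class per slice
    return np.argsort(confidence)[:budget]  # least confident first
```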
