We propose a statistical learning model for classifying cognitive processes based on distributed patterns of neural activation in the brain, acquired via functional magnetic resonance imaging (fMRI). In the proposed learning method, local meshes are formed around each voxel. The distance between voxels in the mesh is determined by using a functional neighbourhood concept. In order to define the functional neighbourhood, the similarities between the time series recorded for voxels are measured and functional connectivity matrices are constructed. Then, the local mesh for each voxel is formed by including the functionally closest neighbouring voxels in the mesh. The relationship between the voxels within a mesh is estimated by using a linear regression model. These relationship vectors, called Functional Connectivity aware Local Relational Features (FC-LRF), are then used to train a statistical learning machine. The proposed method was tested on a recognition memory experiment, including data pertaining to encoding and retrieval of words belonging to ten different semantic categories. Two popular classifiers, namely k-nearest neighbour (k-nn) and Support Vector Machine (SVM), were trained to predict the semantic category of the item being retrieved, based on activation patterns during encoding. Over the ten semantic categories, the classification performance of the Functional Mesh Learning model, ranging from 62% to 71%, is superior to that of classical multi-voxel pattern analysis (MVPA) methods, which range from 40% to 48%.
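As an illustration of the pipeline this abstract describes, the following minimal numpy sketch builds a functional connectivity matrix from voxel time series, forms a p-nearest functional mesh around each voxel, and estimates the mesh weights by least squares. This is not the authors' implementation; the function name, the default p=5, and the flattening of weights into one feature vector are illustrative assumptions.

```python
import numpy as np

def fc_lrf_features(voxel_ts, p=5):
    """Sketch of FC-LRF extraction (illustrative, not the authors' code).

    voxel_ts : (n_voxels, n_timepoints) array of voxel time series
               for one stimulus/sample.
    Returns a flat feature vector of n_voxels * p mesh edge weights.
    """
    fc = np.corrcoef(voxel_ts)               # functional connectivity matrix
    np.fill_diagonal(fc, -np.inf)            # exclude the seed voxel itself
    n_voxels = voxel_ts.shape[0]
    weights = np.zeros((n_voxels, p))
    for v in range(n_voxels):
        neighbours = np.argsort(-fc[v])[:p]  # functionally closest voxels
        # Express the seed time series as a linear combination of its
        # neighbours' time series (one linear regression per mesh).
        A = voxel_ts[neighbours].T           # (n_timepoints, p)
        weights[v] = np.linalg.lstsq(A, voxel_ts[v], rcond=None)[0]
    return weights.ravel()                   # FC-LRF feature vector
```

The resulting vectors would then be fed to k-nn or an SVM exactly like any other feature representation.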
We represent the sequence of fMRI (functional magnetic resonance imaging) brain volumes recorded during a cognitive stimulus by a graph which consists of a set of local meshes. The corresponding cognitive process, encoded in the brain, is then represented by these meshes, each of which is estimated assuming a linear relationship among the voxel time series in a predefined locality. First, we define the concept of locality in two neighborhood systems, namely the spatial and functional neighborhoods. Then, we construct spatially and functionally local meshes around each voxel, called the seed voxel, by connecting it either to its spatial or functional p-nearest neighbors. The mesh formed around a voxel is a directed sub-graph with a star topology, where the direction of the edges is taken towards the seed voxel at the center of the mesh. We represent the time series recorded at each seed voxel as a linear combination of the time series of its p-nearest neighbors in the mesh. The relationships between a seed voxel and its neighbors are represented by the edge weights of each mesh and are estimated by solving a linear regression equation. The estimated mesh edge weights lead to a better representation of information in the brain for encoding and decoding of cognitive tasks. We test our model on visual object recognition and emotional memory retrieval experiments using Support Vector Machines trained on the mesh edge weights as features. In the experimental analysis, we observe that the edge weights of the spatial and functional meshes perform better than state-of-the-art brain decoding models.
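The two neighborhood systems differ only in how the p-nearest neighbors of a seed voxel are selected; the edge-weight estimation is the same least-squares problem in both cases. A minimal sketch under the assumption that voxel coordinates are available for the spatial case (the helper name and the default p=4 are illustrative):

```python
import numpy as np
from scipy.spatial import cKDTree

def mesh_edge_weights(voxel_ts, coords, p=4, neighborhood="spatial"):
    """Star-mesh edge weights around each seed voxel (illustrative sketch).

    voxel_ts : (n_voxels, T) time series; coords : (n_voxels, 3) positions.
    """
    n_voxels = voxel_ts.shape[0]
    if neighborhood == "spatial":
        # p-nearest neighbors in Euclidean space; the first hit returned
        # by the k-d tree is the seed itself, so query k=p+1 and drop it.
        _, idx = cKDTree(coords).query(coords, k=p + 1)
        neigh = idx[:, 1:]
    else:
        # p-nearest neighbors in the functional sense: highest correlation.
        fc = np.corrcoef(voxel_ts)
        np.fill_diagonal(fc, -np.inf)
        neigh = np.argsort(-fc, axis=1)[:, :p]
    W = np.zeros((n_voxels, p))
    for v in range(n_voxels):
        A = voxel_ts[neigh[v]].T             # neighbors' series as regressors
        W[v] = np.linalg.lstsq(A, voxel_ts[v], rcond=None)[0]
    return W                                 # one star mesh per row
```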
By focusing on melancholic features with biological homogeneity, this study aimed to identify a small number of critical functional connections (FCs) that were specific to the melancholic type of major depressive disorder (MDD). Using resting-state fMRI data, classifiers were developed to differentiate MDD patients from healthy controls (HCs). The classification accuracy improved from 50% (93 MDD and 93 HCs) to 70% (66 melancholic MDD and 66 HCs) when we focused specifically on melancholic MDD with moderate or severe depressive symptoms, and reached 65% on an independent validation cohort. The biomarker score distribution improved with escitalopram treatment and correlated significantly with depression symptom scores. The classifier was specific to melancholic MDD and did not generalize to other mental disorders, including autism spectrum disorder (ASD, 54% accuracy) and schizophrenia spectrum disorder (SSD, 45% accuracy). Among the 12 FCs identified from the 9,316 FCs between whole-brain anatomical node pairs, the left DLPFC/IFG region, which has most commonly been targeted for depression treatments, and its functional connections with the Precuneus/PCC and the right DLPFC/SMA areas had the highest contributions. Given the heterogeneity of MDD, focusing on melancholic features is key to achieving high classification accuracy. The identified FCs specifically predicted melancholic MDD and were associated with subjective depressive symptoms. These results suggest key FCs of melancholic depression and open the door to novel treatments targeting these regions in the future.
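Since 9,316 = 137 x 136 / 2, the feature space corresponds to all pairwise FCs among 137 anatomical nodes. The sketch below illustrates this style of analysis, using an L1-regularized logistic regression as a stand-in for a sparse FC classifier; the function name and the regularization strength are assumptions, not the study's exact method.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def fc_features(roi_ts):
    """Upper triangle of the ROI-by-ROI correlation matrix as features.
    With 137 ROIs this gives 137 * 136 / 2 = 9,316 FC values."""
    fc = np.corrcoef(roi_ts)                 # roi_ts: (n_rois, T)
    return fc[np.triu_indices_from(fc, k=1)]

# X: (n_subjects, 9316) FC features; y: 1 = melancholic MDD, 0 = HC.
# The L1 penalty drives most coefficients to zero, leaving a small set
# of contributing FCs, analogous to the 12 FCs identified in the study.
clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
# clf.fit(X_train, y_train)
# biomarker_score = clf.decision_function(X_test)
```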
The goal of the present study is to identify autism using machine learning techniques and resting-state brain imaging data, leveraging the temporal variability of the functional connections (FC) as the only information. We estimated and compared the FC variability across brain regions between typical, healthy subjects and the autistic population by analyzing brain imaging data from a world-wide multi-site database known as ABIDE (Autism Brain Imaging Data Exchange). Our analysis revealed that patients diagnosed with autism spectrum disorder (ASD) show increased FC variability in several brain regions that are associated with low FC variability in the typical brain. We then used the enhanced FC variability of brain regions as features for training machine learning models for ASD classification and achieved 65% accuracy in identifying ASD versus control subjects within the dataset. We also used node strength, estimated from the number of functional connections per node averaged over the whole scan, as features for ASD classification. The results reveal that the dynamic FC measures outperform or are comparable with the static FC measures in predicting ASD.
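A standard way to quantify the temporal variability of FC is a sliding-window analysis. The sketch below computes a per-region variability score of the kind that could serve as the classification feature described above; the window and step sizes are illustrative assumptions.

```python
import numpy as np

def fc_variability(roi_ts, win=30, step=5):
    """Temporal FC variability per region via sliding-window correlations.

    roi_ts : (n_rois, T) ROI time series.
    Returns an (n_rois,) array: the std of each region's FC profile
    across windows, averaged over its connections.
    """
    n_rois, T = roi_ts.shape
    windows = [np.corrcoef(roi_ts[:, s:s + win])
               for s in range(0, T - win + 1, step)]
    stack = np.stack(windows)                # (n_windows, n_rois, n_rois)
    var_per_edge = stack.std(axis=0)         # edge-wise FC variability
    return var_per_edge.mean(axis=1)         # region-level variability score
```

These per-region scores would then be stacked across subjects and passed to any off-the-shelf classifier, such as an SVM.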
Brain image analysis has advanced substantially in recent years with the proliferation of neuroimaging datasets acquired at different resolutions. While research on brain image super-resolution has developed rapidly, brain graph super-resolution is still poorly investigated because of the complex nature of non-Euclidean graph data. In this paper, we propose the first-ever deep graph super-resolution (GSR) framework that attempts to automatically generate high-resolution (HR) brain graphs with N' nodes (i.e., anatomical regions of interest (ROIs)) from low-resolution (LR) graphs with N nodes, where N < N'. First, we formalize our GSR problem as a node feature embedding learning task. Once the HR node embeddings are learned, the pairwise connectivity strength between brain ROIs can be derived through an aggregation rule based on a novel Graph U-Net architecture. While the Graph U-Net is typically a node-focused architecture where graph embedding depends mainly on node attributes, we propose a graph-focused architecture where the node feature embedding is based on the graph topology. Second, inspired by graph spectral theory, we break the symmetry of the U-Net architecture by super-resolving the low-resolution brain graph structure and node content with a GSR layer and two graph convolutional network layers to further learn the node embeddings in the HR graph. Third, to handle the domain shift between the ground-truth and the predicted HR brain graphs, we incorporate adversarial regularization to align their respective distributions. Our proposed AGSR-Net framework outperformed its variants in predicting high-resolution functional brain graphs from low-resolution ones. Our AGSR-Net code is available on GitHub at https://github.com/basiralab/AGSR-Net.
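A toy numpy sketch of the two core ideas, graph-topology-based node embeddings and an inner-product aggregation rule for HR connectivity, is given below. The real AGSR-Net learns its weights through a GSR layer, graph convolutions, and adversarial regularization; here a random projection stands in for all learned parameters, so this is only a shape-level illustration.

```python
import numpy as np

def super_resolve(adj_lr, n_hr, rng=np.random.default_rng(0)):
    """Toy sketch: derive an (n_hr, n_hr) HR connectivity matrix from an
    (n_lr, n_lr) LR adjacency matrix. W stands in for weights AGSR-Net
    would learn; no GSR layer or adversarial training is modeled here."""
    n_lr = adj_lr.shape[0]
    # Graph-focused node features: normalized adjacency rows, since no
    # node attributes are assumed (topology is the only input).
    deg = adj_lr.sum(axis=1, keepdims=True) + 1e-8
    feats = adj_lr / deg                              # (n_lr, n_lr)
    # One GCN-style propagation with a random projection to n_hr dims.
    W = rng.standard_normal((n_lr, n_hr)) * 0.1
    Z = np.tanh(adj_lr @ feats @ W)                   # (n_lr, n_hr) embeddings
    # Aggregation rule: pairwise connectivity between the n_hr HR ROIs
    # as an inner product of the learned node embeddings.
    return Z.T @ Z                                    # (n_hr, n_hr)
```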
Neural sequence models are widely used to model time-series data. Equally ubiquitous is the usage of beam search (BS) as an approximate inference algorithm to decode output sequences from these models. BS explores the search space in a greedy left-to-right fashion, retaining only the top-B candidates, resulting in sequences that differ only slightly from each other. Producing lists of nearly identical sequences is not only computationally wasteful but also typically fails to capture the inherent ambiguity of complex AI tasks. To overcome this problem, we propose Diverse Beam Search (DBS), an alternative to BS that decodes a list of diverse outputs by optimizing for a diversity-augmented objective. We observe that our method finds better top-1 solutions by controlling the exploration and exploitation of the search space, implying that DBS is a better search algorithm. Moreover, these gains are achieved with minimal computational or memory overhead compared to beam search. To demonstrate the broad applicability of our method, we present results on image captioning, machine translation and visual question generation using both standard quantitative metrics and qualitative human studies. Further, we study the role of diversity for image-grounded language generation tasks as the complexity of the image changes. We observe that our method consistently outperforms BS and previously proposed techniques for diverse decoding from neural sequence models.
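For concreteness, here is a compact sketch of DBS with a Hamming diversity term over a toy next-token scorer. `step_logprobs` is a placeholder for the sequence model; B, G, and the penalty strength lam are the hyperparameters of the diversity-augmented objective, and all names here are illustrative rather than the paper's reference implementation.

```python
import numpy as np

def diverse_beam_search(step_logprobs, V, B=6, G=3, lam=0.5, T=10):
    """Sketch of Diverse Beam Search with Hamming diversity (illustrative).

    step_logprobs(prefix) -> (V,) next-token log-probabilities for a
    prefix (a list of token ids). The beam of width B is split into G
    groups of B // G beams; group g is penalized by lam times the count
    of each token already selected by earlier groups at the same step.
    """
    Bp = B // G
    groups = [[([], 0.0)] for _ in range(G)]   # (prefix, score) per group
    for _ in range(T):
        used = np.zeros(V)                     # tokens chosen so far this step
        for g in range(G):
            cand = []
            for prefix, score in groups[g]:
                # Diversity-augmented objective: penalize tokens that
                # earlier groups have already committed to.
                logp = step_logprobs(prefix) - lam * used
                for v in np.argsort(-logp)[:Bp]:
                    cand.append((prefix + [int(v)], score + logp[v]))
            groups[g] = sorted(cand, key=lambda c: -c[1])[:Bp]
            for prefix, _ in groups[g]:        # record this group's choices
                used[prefix[-1]] += 1
    return [seq for grp in groups for seq in grp]
```

With G=1 this reduces to standard beam search of width B, and with lam=0 the groups decouple into independent beam searches of width B // G, which makes the role of the diversity term easy to see.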