Sophisticated visualization tools are essential for the presentation and exploration of human neuroimaging data. While two-dimensional orthogonal views of neuroimaging data are conventionally used to display activity and statistical analyses, three-dimensional (3D) representation is useful for showing the spatial distribution of a functional network, as well as its temporal evolution. For these purposes, there is currently no open-source, 3D neuroimaging tool that can simultaneously visualize desired combinations of MRI, CT, EEG, MEG, fMRI, PET, and intracranial EEG (i.e., ECoG, depth electrodes, and DBS). Here we present the Multi-Modal Visualization Tool (MMVT), which is designed for researchers to interact with their functional and anatomical neuroimaging data through simultaneous visualization of these imaging modalities. MMVT contains two separate modules: the first is an add-on to the open-source, 3D-rendering program Blender. It is an interactive graphical interface that enables users to simultaneously visualize multi-modality functional and statistical data on cortical and subcortical surfaces, as well as on MEG/EEG sensors and intracranial electrodes. This tool also enables highly accurate 3D visualization of neuroanatomy, including the location of invasive electrodes relative to brain structures. The second module comprises complete stand-alone pre-processing pipelines, from raw data to statistical maps. Each module, and each of its features, can be integrated independently of the tool into existing data pipelines. This gives the tool a distinct advantage in both clinical and research domains, as each has highly specialized visual and processing needs. MMVT leverages open-source software to build a comprehensive tool for data visualization and exploration.
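To make the add-on architecture concrete, the following minimal sketch shows how a Blender add-on typically registers an operator acting on the active surface mesh (it runs inside Blender's bundled Python). The class name, identifier, and behavior are hypothetical illustrations of the general pattern and are not taken from MMVT's code base.

    import bpy

    class SurfaceColoringOperator(bpy.types.Operator):
        """Hypothetical operator: apply per-vertex statistics to the active surface mesh."""
        bl_idname = "object.apply_activation_colors"  # hypothetical identifier, not MMVT's API
        bl_label = "Apply activation colors"

        def execute(self, context):
            obj = context.active_object
            if obj is None or obj.type != 'MESH':
                self.report({'WARNING'}, "Select a cortical surface mesh first")
                return {'CANCELLED'}
            # ...map per-vertex statistical values onto a vertex-color layer here...
            return {'FINISHED'}

    def register():
        bpy.utils.register_class(SurfaceColoringOperator)

    def unregister():
        bpy.utils.unregister_class(SurfaceColoringOperator)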
Starting from childhood, the human brain restructures and rewires throughout life. Characterizing such complex brain development requires effective analysis of longitudinal and multi-modal neuroimaging data. Here, we propose such an analysis approach, named Longitudinal Correlation Analysis (LCA). LCA couples the data of two modalities by first reducing the input from each modality to a latent representation based on autoencoders. A self-supervised strategy then relates the two latent spaces by jointly disentangling two directions, one in each space, such that the longitudinal changes in latent representations along those directions are maximally correlated between modalities. We applied LCA to analyze the longitudinal T1-weighted and diffusion-weighted MRIs of 679 youths from the National Consortium on Alcohol and Neurodevelopment in Adolescence. Unlike existing approaches that focus on either cross-sectional or single-modal modeling, LCA successfully unraveled coupled macrostructural and microstructural brain development from morphological and diffusivity features extracted from the data. Retesting LCA on the raw 3D image volumes of those subjects replicated the findings of the feature-based analysis. Lastly, the developmental effects revealed by LCA were in line with the current understanding of maturational patterns of the adolescent brain.
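LCA learns this coupling jointly with the two autoencoders in a self-supervised manner; as a rough, simplified analogy, the sketch below applies a plain one-component canonical correlation analysis to longitudinal changes of already-computed latent codes. All array names and the random data are illustrative assumptions, not the authors' implementation.

    import numpy as np
    from sklearn.cross_decomposition import CCA

    rng = np.random.default_rng(0)
    n, d_a, d_b = 200, 16, 16                        # subjects and latent dimensions (illustrative)
    z1_a, z2_a = rng.normal(size=(n, d_a)), rng.normal(size=(n, d_a))  # modality A codes, visits 1 and 2
    z1_b, z2_b = rng.normal(size=(n, d_b)), rng.normal(size=(n, d_b))  # modality B codes, visits 1 and 2

    delta_a = z2_a - z1_a                            # longitudinal change in latent space A
    delta_b = z2_b - z1_b                            # longitudinal change in latent space B

    cca = CCA(n_components=1).fit(delta_a, delta_b)  # one direction per space, correlation maximized
    dir_a = cca.x_weights_[:, 0]                     # developmental direction in latent space A
    dir_b = cca.y_weights_[:, 0]                     # developmental direction in latent space B
    corr = np.corrcoef(delta_a @ dir_a, delta_b @ dir_b)[0, 1]
    print(f"correlation of projected longitudinal changes: {corr:.2f}")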
Neuroimaging data analysis often involves a priori selection of data features to study the underlying neural activity. Since this could lead to sub-optimal feature selection and thereby prevent the detection of subtle patterns in neural activity, data-driven methods have recently gained popularity for optimizing neuroimaging analysis pipelines and thereby improving our understanding of neural mechanisms. In this context, we developed a deep convolutional architecture that can identify discriminating patterns in neuroimaging data and applied it to electroencephalography (EEG) recordings collected from 25 subjects performing a hand motor task before and after a rest period or a bout of exercise. The deep network was trained to classify subjects into exercise and control groups based on differences in their EEG signals. Subsequently, we developed a novel method, termed cue-combination for Class Activation Map (ccCAM), which enabled us to identify discriminating spatio-temporal features within specific frequency bands (23–33 Hz) and assess the effects of exercise on the brain. Additionally, the proposed architecture allowed visualization of the differences in the propagation of underlying neural activity across the cortex between the two groups, for the first time to our knowledge. Our results demonstrate the feasibility of using deep network architectures for neuroimaging analysis in different contexts, such as the identification of robust brain biomarkers to better characterize and potentially treat neurological disorders.
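ccCAM itself is the authors' extension, but it builds on the standard class activation map (CAM) idea of weighting the final convolutional feature maps by the classifier weights obtained after global average pooling. The sketch below shows that baseline computation on made-up arrays; the shapes and values are assumptions for illustration only.

    import numpy as np

    rng = np.random.default_rng(0)
    feats = rng.normal(size=(64, 8, 32))            # last-layer feature maps: channels x electrodes x time bins
    w = rng.normal(size=(2, 64))                    # dense-layer weights after global average pooling (2 classes)

    def class_activation_map(feats, w, cls):
        cam = np.tensordot(w[cls], feats, axes=1)   # weighted sum of feature maps for the chosen class
        cam -= cam.min()
        return cam / (cam.max() + 1e-8)             # normalize to [0, 1] for display

    cam_exercise = class_activation_map(feats, w, cls=0)
    print(cam_exercise.shape)                       # (8, 32): a spatio-temporal saliency map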
Local field potentials (LFPs) sampled with extracellular electrodes are frequently used as a measure of population neuronal activity. However, relating such measurements to underlying neuronal behaviour and connectivity is non-trivial. To help study this link, we developed the Virtual Electrode Recording Tool for EXtracellular potentials (VERTEX). We first identified a reduced neuron model that retained the spatial and frequency filtering characteristics of extracellular potentials from neocortical neurons. We then developed VERTEX as an easy-to-use Matlab tool for simulating LFPs from large populations (>100 000 neurons). A VERTEX-based simulation successfully reproduced features of the LFPs from an in vitro multi-electrode array recording of macaque neocortical tissue. Our model, with virtual electrodes placed anywhere in 3D, allows direct comparisons with the in vitro recording setup. We envisage that VERTEX will stimulate experimentalists, clinicians, and computational neuroscientists to use models to understand the mechanisms underlying measured brain dynamics in health and disease.
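VERTEX itself is implemented in Matlab and uses reduced compartmental neuron models; as a much cruder illustration of the underlying volume-conductor forward model, the sketch below sums point-source contributions phi = I / (4*pi*sigma*r) at a virtual electrode. The positions, currents, and conductivity are arbitrary assumptions, not values from the tool or the recordings.

    import numpy as np

    sigma = 0.3                                               # extracellular conductivity (S/m), typical value
    rng = np.random.default_rng(0)
    src_pos = rng.uniform(-200e-6, 200e-6, size=(1000, 3))    # source positions (m), illustrative
    src_cur = rng.normal(scale=1e-9, size=1000)               # transmembrane currents (A), illustrative
    electrode = np.array([0.0, 0.0, 300e-6])                  # virtual electrode position (m)

    r = np.linalg.norm(src_pos - electrode, axis=1)           # source-to-electrode distances
    phi = np.sum(src_cur / (4 * np.pi * sigma * r))           # point-source potentials, summed
    print(f"simulated potential: {phi * 1e6:.2f} uV")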
Operating system (OS) updates introduce numerical perturbations that impact the reproducibility of computational pipelines. In neuroimaging, this has important practical implications for the validity of computational results, particularly when they are obtained on systems such as high-performance computing clusters, where the experimenter does not control software updates. We present a framework to reproduce the variability induced by OS updates under controlled conditions. We hypothesize that OS updates impact computational pipelines mainly through numerical perturbations originating in mathematical libraries, which we simulate using Monte-Carlo arithmetic in a framework called fuzzy libmath (FL). We applied this methodology to pre-processing pipelines of the Human Connectome Project (HCP), a flagship open-data project in neuroimaging. We found that FL-perturbed pipelines accurately reproduce the variability induced by OS updates and that this similarity is only mildly dependent on simulation parameters. Importantly, we also found that between-subject differences were preserved in both cases, although the between-run variability was of comparable magnitude for both FL and OS perturbations. We found the numerical precision of the HCP pre-processed images to be relatively low, with fewer than 8 significant bits out of the 24 available, which motivates further investigation of the numerical stability of the components in the tested pipeline. Overall, our results establish that FL accurately simulates the variability of results due to OS updates and is a practical framework to quantify numerical uncertainty in neuroimaging.
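One common way to summarize such numerical precision across a set of perturbed runs is to estimate, per voxel, the number of significant bits as -log2(|std/mean|), capped at the 24 mantissa bits of single precision. The sketch below applies that estimate to synthetic data; the run count, voxel count, and noise level are assumptions for illustration, not the HCP results.

    import numpy as np

    rng = np.random.default_rng(0)
    runs = rng.normal(loc=100.0, scale=1e-4, size=(10, 1000))    # 10 perturbed runs x 1000 voxels (synthetic)

    mean = runs.mean(axis=0)
    std = runs.std(axis=0)
    with np.errstate(divide="ignore"):
        sig_bits = np.clip(-np.log2(np.abs(std / mean)), 0, 24)  # per-voxel significant bits
    print(f"median significant bits: {np.median(sig_bits):.1f}")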
Large, open-source consortium datasets have spurred the development of new and increasingly powerful machine learning approaches in brain connectomics. However, one key question remains: are we capturing biologically relevant and generalizable information about the brain, or are we simply overfitting to the data? To answer this, we organized a scientific challenge, the Connectomics in NeuroImaging Transfer Learning Challenge (CNI-TLC), held in conjunction with MICCAI 2019. CNI-TLC included two classification tasks: (1) diagnosis of Attention-Deficit/Hyperactivity Disorder (ADHD) within a pre-adolescent cohort; and (2) transference of the ADHD model to a related cohort of Autism Spectrum Disorder (ASD) patients with an ADHD comorbidity. In total, 240 resting-state fMRI time series, averaged according to three standard parcellation atlases, along with clinical diagnoses, were released for training and validation (120 neurotypical controls and 120 ADHD). We also provided demographic information (age, sex, IQ, and handedness). A second set of 100 subjects (50 neurotypical controls, 25 ADHD, and 25 ASD with ADHD comorbidity) was used for testing. Models were submitted in a standardized format as Docker images through ChRIS, an open-source image analysis platform. Taking an inclusive approach, we ranked the methods based on 16 different metrics; the final rank was calculated using the rank product across all measures for each participant. Furthermore, we assessed the calibration curves of each method. Five participants submitted models for evaluation, with one outperforming all other methods in both ADHD and ASD classification. However, further improvements are needed before functional connectomics can be translated to the clinic. We are keeping CNI-TLC open as a publicly available resource for developing and validating new classification methodologies in the field of connectomics.
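As an illustration of the rank-product aggregation described above, the sketch below ranks each method within each metric and combines the ranks through their geometric mean (lower is better). The score matrix is random and purely illustrative, and the challenge's exact metrics and tie-breaking rules are not reproduced.

    import numpy as np
    from scipy.stats import rankdata

    rng = np.random.default_rng(0)
    scores = rng.uniform(size=(5, 16))               # 5 submitted methods x 16 metrics (higher = better), synthetic

    ranks = np.apply_along_axis(lambda col: rankdata(-col), 0, scores)   # rank methods within each metric
    rank_product = ranks.prod(axis=1) ** (1.0 / ranks.shape[1])          # geometric mean of a method's ranks
    print(np.argsort(rank_product))                  # final ordering, best method first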