
Multiscale Comparative Connectomics

Published by Vivek Gopalakrishnan
Publication date: 2020
Research language: English





A connectome is a map of the structural and/or functional connections in the brain. This information-rich representation has the potential to transform our understanding of the relationship between patterns in brain connectivity and neurological processes, disorders, and diseases. However, existing computational techniques used to analyze connectomes are often insufficient for interrogating multi-subject connectomics datasets: many methods are designed solely to analyze single connectomes, or rely on heuristic graph invariants that ignore the complete topology of connections between brain regions. To enable more rigorous comparative connectomics analysis, we introduce robust and interpretable statistical methods motivated by recent theoretical advances in random graph models. These methods enable simultaneous analysis of multiple connectomes across different scales of network topology, facilitating the discovery of hierarchical brain structures that covary with phenotypic profiles. We validated these methods through extensive simulation studies, as well as synthetic and real-data experiments. Using a set of high-resolution connectomes obtained from genetically distinct mouse strains (including the BTBR mouse, a standard model of autism, and three behavioral wild-types), we show that these methods uncover valuable latent information in multi-subject connectomics data and yield novel insights into the connective correlates of neurological phenotypes.
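The joint-embedding idea behind such multi-connectome methods can be illustrated with a minimal numpy sketch. The snippet below implements a generic omnibus-style embedding, assuming each connectome is a symmetric adjacency matrix on a shared set of brain regions; the function name and construction follow the omnibus embedding from the random-graph literature, not the authors' released code.

```python
# Minimal sketch of an omnibus embedding for comparing multiple connectomes.
# Assumes symmetric adjacency matrices of equal size; names are illustrative.
import numpy as np

def omnibus_embed(adjacencies, n_components):
    """Jointly embed m graphs on n nodes into a shared latent space."""
    m = len(adjacencies)
    n = adjacencies[0].shape[0]
    # Build the mn x mn omnibus matrix whose (i, j) block is (A_i + A_j) / 2.
    omni = np.block([[(A_i + A_j) / 2 for A_j in adjacencies]
                     for A_i in adjacencies])
    # Spectral decomposition; keep the top-d dimensions by |eigenvalue|.
    vals, vecs = np.linalg.eigh(omni)
    top = np.argsort(np.abs(vals))[::-1][:n_components]
    latent = vecs[:, top] * np.sqrt(np.abs(vals[top]))
    # Reshape into one n x d latent-position matrix per graph; differences
    # between these matrices can then be tested across subjects.
    return latent.reshape(m, n, n_components)

# Example: three toy 10-node connectomes embedded into 2 dimensions.
rng = np.random.default_rng(0)
graphs = []
for _ in range(3):
    A = rng.random((10, 10))
    A = (A + A.T) / 2          # symmetrize
    np.fill_diagonal(A, 0)
    graphs.append(A)
embeddings = omnibus_embed(graphs, n_components=2)
print(embeddings.shape)  # (3, 10, 2)
```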




Read also

Large, open-source consortium datasets have spurred the development of new and increasingly powerful machine learning approaches in brain connectomics. However, one key question remains: are we capturing biologically relevant and generalizable information about the brain, or are we simply overfitting to the data? To answer this, we organized a scientific challenge, the Connectomics in NeuroImaging Transfer Learning Challenge (CNI-TLC), held in conjunction with MICCAI 2019. CNI-TLC included two classification tasks: (1) diagnosis of Attention-Deficit/Hyperactivity Disorder (ADHD) within a pre-adolescent cohort; and (2) transference of the ADHD model to a related cohort of Autism Spectrum Disorder (ASD) patients with an ADHD comorbidity. In total, 240 resting-state fMRI time series averaged according to three standard parcellation atlases, along with clinical diagnosis, were released for training and validation (120 neurotypical controls and 120 ADHD). We also provided demographic information of age, sex, IQ, and handedness. A second set of 100 subjects (50 neurotypical controls, 25 ADHD, and 25 ASD with ADHD comorbidity) was used for testing. Models were submitted in a standardized format as Docker images through ChRIS, an open-source image analysis platform. Utilizing an inclusive approach, we ranked the methods based on 16 different metrics. The final rank was calculated using the rank product for each participant across all measures. Furthermore, we assessed the calibration curves of each method. Five participants submitted their model for evaluation, with one outperforming all other methods in both ADHD and ASD classification. However, further improvements are needed to achieve clinical translation of functional connectomics. We are keeping the CNI-TLC open as a publicly available resource for developing and validating new classification methodologies in the field of connectomics.
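As an illustration of the evaluation scheme, the sketch below computes a rank-product ranking over a hypothetical score matrix. The variable names and the higher-is-better score convention are assumptions, not details of the challenge infrastructure.

```python
# Minimal sketch of rank-product ranking across several metrics.
# Assumes a scores matrix of shape (n_participants, n_metrics), higher = better.
import numpy as np
from scipy.stats import rankdata

def rank_product(scores):
    # Rank participants within each metric (rank 1 = best score).
    ranks = np.column_stack([
        rankdata(-scores[:, j]) for j in range(scores.shape[1])
    ])
    # Geometric mean of ranks across metrics; lower is better overall.
    return np.exp(np.mean(np.log(ranks), axis=1))

# Three hypothetical participants scored on three metrics.
scores = np.array([[0.9, 0.7, 0.8],
                   [0.8, 0.9, 0.6],
                   [0.7, 0.6, 0.7]])
print(rank_product(scores))  # smallest value ranks first overall
```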
Working memory (WM) allows information to be stored and manipulated over short time scales. Performance on WM tasks is thought to be supported by the frontoparietal system (FPS), the default mode system (DMS), and interactions between them. Yet little is known about how these systems and their interactions relate to individual differences in WM performance. We address this gap in knowledge using functional MRI data acquired during the performance of a 2-back WM task, as well as diffusion tensor imaging data collected in the same individuals. We show that the strength of functional interactions between the FPS and DMS during task engagement is inversely correlated with WM performance, and that this strength is modulated by the activation of FPS regions but not DMS regions. Next, we use a clustering algorithm to identify two distinct subnetworks of the FPS, and find that these subnetworks display distinguishable patterns of gene expression. Activity in one subnetwork is positively associated with the strength of FPS-DMS functional interactions, while activity in the second subnetwork is negatively associated. Further, the pattern of structural linkages of these subnetworks explains their differential capacity to influence the strength of FPS-DMS functional interactions. To determine whether these observations could provide a mechanistic account of large-scale neural underpinnings of WM, we build a computational model of the system composed of coupled oscillators. Modulating the amplitude of the subnetworks in the model causes the expected change in the strength of FPS-DMS functional interactions, thereby offering support for a mechanism in which subnetwork activity tunes functional interactions. Broadly, our study presents a holistic account of how regional activity, functional interactions, and structural linkages together support individual differences in WM in humans.
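To give a concrete sense of the modeling approach, here is a heavily simplified coupled-oscillator simulation in which per-node amplitudes modulate interaction strength. The Kuramoto-style update, coupling matrix, and all parameters are generic illustrations, not the authors' exact model.

```python
# Illustrative sketch: amplitude-weighted coupled phase oscillators, loosely
# in the spirit of the model described above. Everything here is generic.
import numpy as np

def simulate(coupling, amplitude, steps=2000, dt=0.01, rng=None):
    """Simulate n coupled phase oscillators; `amplitude` scales each node's
    outgoing influence, standing in for regional activation."""
    if rng is None:
        rng = np.random.default_rng(0)
    n = coupling.shape[0]
    theta = rng.uniform(0, 2 * np.pi, n)
    omega = rng.normal(1.0, 0.1, n)        # intrinsic frequencies
    traj = np.empty((steps, n))
    for t in range(steps):
        # Kuramoto update with node-wise amplitude weighting.
        phase_diff = np.sin(theta[None, :] - theta[:, None])
        theta += dt * (omega + (coupling * amplitude[None, :] * phase_diff).sum(axis=1))
        traj[t] = np.sin(theta)
    return traj

# Raising the amplitude of one "subnetwork" (nodes 0-4) and measuring the
# correlation between the two halves mimics the paper's in-silico experiment.
rng = np.random.default_rng(1)
C = rng.random((10, 10)) * 0.2
amp = np.ones(10)
amp[:5] = 2.0
traj = simulate(C, amp, rng=rng)
between = np.corrcoef(traj[:, :5].mean(1), traj[:, 5:].mean(1))[0, 1]
print(f"between-subnetwork correlation: {between:.2f}")
```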
Naive estimation of neural connectivity in MEEG sensor space is impossible due to volume conduction. The only viable alternative is to carry out connectivity estimation in source space. Among the neuroscience community this is claimed to be impossible or misleading due to Leakage: linear mixing of the reconstructed sources. To address this problem, we propose a novel solution method that caulks the Leakage in MEEG source activity and connectivity estimates: BC-VARETA. It is based on a joint estimation of source activity and connectivity in the frequency domain representation of MEEG time series. To achieve this, we go beyond current methods that assume a fixed Gaussian graphical model for source connectivity. In contrast, we estimate this graphical model in a Bayesian framework by placing priors on it, which allows for highly optimized computation of the connectivity via a new procedure based on the local quadratic approximation under quite general prior models. A further contribution of this paper is the rigorous definition of Leakage via the Spatial Dispersion Measure and the Earth Mover's Distance based on geodesic distances over the cortical manifold. Both measures are extended for the first time to quantify Connectivity Leakage by defining them on the Cartesian product of cortical manifolds. Using these measures, we show that BC-VARETA outperforms most state-of-the-art inverse solvers by several orders of magnitude.
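The Earth Mover's Distance at the heart of the proposed Leakage measures is an optimal-transport problem; the sketch below solves the small discrete case as a linear program with scipy. In the paper the ground cost would be geodesic distance over the cortical manifold; here it is an arbitrary cost matrix, and the function name is illustrative.

```python
# Minimal sketch of Earth Mover's Distance between discrete distributions,
# solved as a linear program. The cost matrix stands in for geodesic distances.
import numpy as np
from scipy.optimize import linprog

def emd(p, q, cost):
    """EMD between distributions p (len n) and q (len m), each summing to 1,
    given an n x m ground-cost matrix."""
    n, m = cost.shape
    A_eq, b_eq = [], []
    for i in range(n):                      # row marginals: sum_j T[i,j] = p[i]
        row = np.zeros(n * m)
        row[i * m:(i + 1) * m] = 1
        A_eq.append(row)
        b_eq.append(p[i])
    for j in range(m):                      # column marginals: sum_i T[i,j] = q[j]
        col = np.zeros(n * m)
        col[j::m] = 1
        A_eq.append(col)
        b_eq.append(q[j])
    res = linprog(cost.ravel(), A_eq=np.array(A_eq), b_eq=np.array(b_eq),
                  bounds=(0, None), method="highs")
    return res.fun

# Toy example on a 3-point line with unit spacing as the "manifold".
p = np.array([0.5, 0.5, 0.0])
q = np.array([0.0, 0.5, 0.5])
D = np.abs(np.subtract.outer(np.arange(3), np.arange(3))).astype(float)
print(emd(p, q, D))  # 1.0: total mass-weighted transport cost
```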
The field of connectomics faces unprecedented big data challenges. To reconstruct neuronal connectivity, automated pixel-level segmentation is required for petabytes of streaming electron microscopy data. Existing algorithms provide relatively good accuracy but are unacceptably slow, and would require years to extract connectivity graphs from even a single cubic millimeter of neural tissue. Here we present a viable real-time solution, a multi-pass pipeline optimized for shared-memory multicore systems, capable of processing data at near the terabyte-per-hour pace of multi-beam electron microscopes. The pipeline makes an initial fast pass over the data, and then makes a second slow pass to iteratively correct errors in the output of the fast pass. We demonstrate the accuracy of a sparse slow-pass reconstruction algorithm and suggest new methods for detecting morphological errors. Our fast-pass approach posed many algorithmic challenges, including the design and implementation of novel shallow convolutional neural nets and the parallelization of watershed and object-merging techniques. We use it to reconstruct, from image stack to skeletons, the full dataset of Kasthuri et al. (463 GB capturing 120,000 cubic microns) in a matter of hours on a single multicore machine rather than the weeks it has taken in the past on much larger distributed systems.
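A core building block of such pipelines is seeded watershed over a boundary-probability map. The sketch below fakes the map (the shallow CNN that would produce it is assumed) and runs a fast-pass segmentation with scipy and scikit-image; it illustrates the general technique, not the paper's actual implementation.

```python
# Minimal sketch of fast-pass segmentation: seeded watershed on a
# boundary-probability map. The map here is synthetic; in practice it
# would come from a shallow CNN applied to EM imagery.
import numpy as np
from scipy import ndimage
from skimage.segmentation import watershed

boundary_prob = np.ones((64, 64))
boundary_prob[8:28, 8:28] = 0.0         # two fake cell interiors
boundary_prob[36:56, 36:56] = 0.0

# Seeds: eroded interiors, so each object gets one connected seed region.
mask = boundary_prob < 0.5
interior = ndimage.binary_erosion(mask, iterations=3)
seeds, n_seeds = ndimage.label(interior)

# Fast-pass watershed; a slow second pass would revisit and merge/split
# segments where the fast pass erred.
labels = watershed(boundary_prob, seeds, mask=mask)
print(n_seeds, "segments from the fast pass")
```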
Hamiltonian Monte Carlo (HMC) has been widely adopted in the statistics community because of its ability to sample high-dimensional distributions much more efficiently than other Metropolis-based methods. Despite this, HMC often performs sub-optimally on distributions with high correlations or marginal variances on multiple scales because the resulting stiffness forces the leapfrog integrator in HMC to take an unreasonably small stepsize. We provide intuition as well as a formal analysis showing how these multiscale distributions limit the stepsize of leapfrog and we show how the implicit midpoint method can be used, together with Newton-Krylov iteration, to circumvent this limitation and achieve major efficiency gains. Furthermore, we offer practical guidelines for when to choose between implicit midpoint and leapfrog and what stepsize to use for each method, depending on the distribution being sampled. Unlike previous modifications to HMC, our method is generally applicable to highly non-Gaussian distributions exhibiting multiple scales. We illustrate how our method can provide a dramatic speedup over leapfrog in the context of the No-U-Turn sampler (NUTS) applied to several examples.
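The stepsize limitation is easy to reproduce on a toy target. The sketch below integrates Hamiltonian dynamics for a 2-D Gaussian whose marginal variances differ by four orders of magnitude. Because this target is Gaussian, the implicit midpoint equations are linear and one linear solve per step gives the exact step, standing in for the Newton-Krylov iteration the paper proposes for general distributions.

```python
# Toy comparison of leapfrog vs. implicit midpoint on a stiff 2-D Gaussian.
import numpy as np

K = np.diag([1.0, 1e4])                 # precision matrix: scales 1 and 0.01
grad_U = lambda q: K @ q                # gradient of the Gaussian potential
energy = lambda q, p: 0.5 * q @ K @ q + 0.5 * p @ p

def leapfrog(q, p, eps, steps):
    p = p - 0.5 * eps * grad_U(q)
    for _ in range(steps - 1):
        q = q + eps * p
        p = p - eps * grad_U(q)
    q = q + eps * p
    p = p - 0.5 * eps * grad_U(q)
    return q, p

def implicit_midpoint(q, p, eps, steps):
    # For a Gaussian target the implicit equations are linear, so one linear
    # solve per step (the role Newton-Krylov plays in general) is exact.
    a = eps / 2.0
    I = np.eye(len(q))
    for _ in range(steps):
        q_new = np.linalg.solve(I + a**2 * K, (I - a**2 * K) @ q + 2 * a * p)
        p = p - a * K @ (q + q_new)
        q = q_new
    return q, p

q0, p0 = np.array([1.0, 0.01]), np.array([0.3, -0.2])
# The stiff direction has frequency 100, so leapfrog is unstable whenever
# eps > 2/100 = 0.02; implicit midpoint remains stable at any stepsize.
for name, step in [("leapfrog", leapfrog), ("implicit midpoint", implicit_midpoint)]:
    q, p = step(q0, p0, eps=0.05, steps=20)
    print(f"{name:>17}: energy drift = {energy(q, p) - energy(q0, p0):.3g}")
```

Running this shows leapfrog's energy exploding while implicit midpoint's drift stays at floating-point noise, which is precisely the stiffness-induced stepsize barrier the abstract describes.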