
Tensor Analysis and Fusion of Multimodal Brain Images

 Added by Esin Karahan
 Publication date 2015
Language: English





Current high-throughput data acquisition technologies probe dynamical systems with different imaging modalities, generating massive data sets at different spatial and temporal resolutions and posing challenging problems in multimodal data fusion. A case in point is the attempt to parse out the brain structures and networks that underpin human cognitive processes by analyzing different neuroimaging modalities (functional MRI, EEG, NIRS, etc.). We emphasize that the multimodal, multi-scale nature of neuroimaging data is well reflected by a multi-way (tensor) structure in which the underlying processes can be summarized by a relatively small number of components, or atoms. We introduce Markov-Penrose diagrams, an integration of Bayesian DAG and tensor network notations, to analyze these models. These diagrams not only clarify matrix and tensor EEG and fMRI time/frequency analyses and inverse problems, but also help in understanding multimodal fusion via Multiway Partial Least Squares and Coupled Matrix-Tensor Factorization. We show here, for the first time, that Granger causal analysis of brain networks is a tensor regression problem, thus allowing the atomic decomposition of brain networks. Analysis of EEG and fMRI recordings shows the potential of the methods and suggests their use in other scientific domains.
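The atomic decomposition above amounts to writing a multi-way array as a sum of a few rank-1 components. A minimal sketch (a generic CP decomposition by alternating least squares in NumPy, not the authors' implementation; function names are our own):

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding of a 3-way tensor into a matrix."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def khatri_rao(A, B):
    """Column-wise Khatri-Rao (matching Kronecker) product."""
    R = A.shape[1]
    return np.einsum('ir,jr->ijr', A, B).reshape(-1, R)

def cp_als(T, rank, n_iter=100, seed=0):
    """CP decomposition T[i,j,k] ~ sum_r A[i,r] B[j,r] C[k,r] via ALS."""
    rng = np.random.default_rng(seed)
    A, B, C = (rng.standard_normal((s, rank)) for s in T.shape)
    for _ in range(n_iter):
        # Each step is a linear least-squares problem in one factor.
        A = unfold(T, 0) @ np.linalg.pinv(khatri_rao(B, C).T)
        B = unfold(T, 1) @ np.linalg.pinv(khatri_rao(A, C).T)
        C = unfold(T, 2) @ np.linalg.pinv(khatri_rao(A, B).T)
    return A, B, C
```

Coupled matrix-tensor factorization, as used for fusion, would additionally share one factor matrix with a jointly factorized matrix.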

Related Research

Multimodal fusion benefits disease diagnosis by providing a more comprehensive perspective. Developing fusion algorithms is challenging due to data heterogeneity and the complex within- and between-modality associations. Deep-network-based data-fusion models have been developed to capture these complex associations, and diagnostic performance has improved accordingly. Moving beyond diagnosis prediction, evaluation of disease mechanisms is critically important for biomedical research. Deep-network-based data-fusion models, however, are difficult to interpret, which hinders the study of biological mechanisms. In this work, we develop an interpretable multimodal fusion model, gCAM-CCL, which performs automated diagnosis and result interpretation simultaneously. The gCAM-CCL model generates interpretable activation maps that quantify pixel-level contributions of the input features; this is achieved by combining intermediate feature maps using gradient-based weights. Moreover, the estimated activation maps are class-specific, and the captured cross-data associations are interest/label-related, which further facilitates class-specific analysis and biological mechanism analysis. We validate the gCAM-CCL model on a brain imaging-genetic study and show that it performs well for both classification and mechanism analysis. Mechanism analysis suggests that during task-fMRI scans, several object-recognition-related regions of interest (ROIs) are activated first, and several downstream encoding ROIs then become involved. Results also suggest that the higher-cognition-performing group may have stronger neurotransmission signaling, while the lower-cognition-performing group may have problems in brain/neuron development resulting from genetic variations.
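The gradient-weighted combination of feature maps described above can be sketched generically (a Grad-CAM-style computation; the function and array names are hypothetical, and this is not the gCAM-CCL code):

```python
import numpy as np

def gradient_weighted_map(feature_maps, gradients):
    """Combine intermediate feature maps into a class-specific activation map.

    feature_maps: (K, H, W) activations of a convolutional layer.
    gradients:    (K, H, W) gradients of a class score w.r.t. those maps.
    """
    # Global-average-pool the gradients to get one weight per channel.
    weights = gradients.mean(axis=(1, 2))            # shape (K,)
    # Weighted sum over channels, then ReLU: keep positive evidence only.
    cam = np.einsum('k,khw->hw', weights, feature_maps)
    return np.maximum(cam, 0.0)
```

Pixels with large values in the returned map are those whose features contributed most to the class score, which is what enables the pixel-level, class-specific interpretation described above.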
Tianqi Liu, Ming Yuan, 2017
Spatiotemporal gene expression data of the human brain offer insights on the spatial and temporal patterns of gene regulation during brain development. Most existing methods for analyzing these data consider spatial and temporal profiles separately, with the implicit assumption that different brain regions develop along similar trajectories and that the spatial patterns of gene expression remain similar at different time points. Although these analyses may help delineate gene regulation either spatially or temporally, they are not able to characterize heterogeneity in temporal dynamics across different brain regions, or the evolution of spatial patterns of gene regulation over time. In this article, we develop a statistical method based on low-rank tensor decomposition to more effectively analyze spatiotemporal gene expression data. We generalize classical principal component analysis (PCA), which is applicable only to data matrices, to tensor PCA, which can simultaneously capture spatial and temporal effects. We also propose an efficient algorithm that combines tensor unfolding and power iteration to estimate the tensor principal components, and provide guarantees on their statistical performance. Numerical experiments are presented to further demonstrate the merits of the proposed method. An application of our method to spatiotemporal brain expression data provides insights on gene regulation patterns in the brain.
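The unfold-then-power-iterate idea can be sketched for the leading rank-1 component (a simplified illustration of tensor PCA in NumPy, without the paper's full procedure or statistical guarantees):

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding of a 3-way tensor into a matrix."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def tensor_pca_rank1(T, n_iter=50):
    """Leading rank-1 component u (x) v (x) w of a 3-way tensor:
    spectral initialization from the unfoldings, then higher-order
    power iteration."""
    # Initialize each mode with the top left singular vector of its unfolding.
    u = np.linalg.svd(unfold(T, 0))[0][:, 0]
    v = np.linalg.svd(unfold(T, 1))[0][:, 0]
    w = np.linalg.svd(unfold(T, 2))[0][:, 0]
    for _ in range(n_iter):
        # Contract out the other two modes, then renormalize.
        u = np.einsum('ijk,j,k->i', T, v, w); u /= np.linalg.norm(u)
        v = np.einsum('ijk,i,k->j', T, u, w); v /= np.linalg.norm(v)
        w = np.einsum('ijk,i,j->k', T, u, v); w /= np.linalg.norm(w)
    return u, v, w
```

Further components would be obtained by deflating (subtracting the fitted rank-1 term) and repeating.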
In this paper, we propose MGNet, a simple and effective multiplex graph convolutional network (GCN) model for multimodal brain network analysis. The proposed method integrates tensor representation into the multiplex GCN model to extract the latent structures of a set of multimodal brain networks, which allows an intuitive grasp of the common space for multimodal data. Multimodal representations are then generated with multiplex GCNs to capture specific graph structures. We conduct classification tasks on two challenging real-world datasets (HIV and bipolar disorder), and the proposed MGNet demonstrates state-of-the-art performance compared to competitive benchmark methods. Beyond objective evaluations, this study may also bear significance for applying network theory to the understanding of the human connectome across modalities. The code is available at https://github.com/ZhaomingKong/MGNets.
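One propagation step of a multiplex GCN can be sketched as a standard GCN layer applied to each modality-specific graph (a simplified forward pass, not MGNet itself; names are hypothetical):

```python
import numpy as np

def gcn_layer(A, X, W):
    """One graph-convolution step: add self-loops, symmetrically
    normalize the adjacency, propagate features, apply ReLU."""
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ X @ W, 0.0)

def multiplex_gcn(adjs, X, Ws):
    """Apply a GCN layer per modality graph (one adjacency and weight
    matrix per modality) and stack the modality-specific embeddings."""
    return np.stack([gcn_layer(A, X, W) for A, W in zip(adjs, Ws)])
```

The stacked output is itself a 3-way (modality x node x feature) array, which is where a tensor representation of the common space across modalities can come in.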
This paper introduces the functional tensor singular value decomposition (FTSVD), a novel dimension reduction framework for tensors with one functional mode and several tabular modes. The problem is motivated by high-order longitudinal data analysis. Our model assumes the observed data to be a random realization of an approximate CP low-rank functional tensor measured on a discrete time grid. Incorporating tensor algebra and the theory of Reproducing Kernel Hilbert Space (RKHS), we propose a novel RKHS-based constrained power iteration with spectral initialization. Our method can successfully estimate both singular vectors and functions of the low-rank structure in the observed data. With mild assumptions, we establish the non-asymptotic contractive error bounds for the proposed algorithm. The superiority of the proposed framework is demonstrated via extensive experiments on both simulated and real data.
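A much-simplified sketch of the constrained power-iteration idea for one component (kernel ridge smoothing stands in for the paper's RKHS-based step; the rank-1 setting, the RBF kernel, and all names are our assumptions):

```python
import numpy as np

def rbf_kernel(t, s, h=0.2):
    """Gaussian (RBF) kernel matrix between two time grids."""
    return np.exp(-((t[:, None] - s[None, :]) ** 2) / (2 * h * h))

def ftsvd_rank1(X, t, n_iter=30, lam=1e-3, seed=0):
    """Rank-1 functional-tensor SVD sketch. X has shape (I, J, T) with
    the third mode sampled on time grid t. Alternates power updates for
    the two tabular modes with a kernel-ridge smoothing step that keeps
    the functional mode smooth."""
    I, J, T = X.shape
    rng = np.random.default_rng(seed)
    a = rng.standard_normal(I); a /= np.linalg.norm(a)
    b = rng.standard_normal(J); b /= np.linalg.norm(b)
    K = rbf_kernel(t, t)
    for _ in range(n_iter):
        # Raw functional-mode scores, then smoothing by kernel ridge.
        g = np.einsum('ijt,i,j->t', X, a, b)
        xi = K @ np.linalg.solve(K + lam * np.eye(T), g)
        xi /= np.linalg.norm(xi)
        a = np.einsum('ijt,j,t->i', X, b, xi); a /= np.linalg.norm(a)
        b = np.einsum('ijt,i,t->j', X, a, xi); b /= np.linalg.norm(b)
    return a, b, xi
```

The smoothing step is what distinguishes this from plain tensor power iteration: the functional mode is forced to be a smooth curve rather than an arbitrary vector on the grid.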
To explain individual differences in development, behavior, and cognition, most previous studies have focused on projecting resting-state functional MRI (fMRI) based functional connectivity (FC) data into a low-dimensional space via linear dimensionality reduction techniques, followed by downstream analyses. However, linear dimensionality reduction techniques may fail to capture the nonlinearity of brain neuroactivity. Moreover, besides resting-state FC, FC based on task fMRI can be expected to provide complementary information. Motivated by these considerations, in this paper we nonlinearly fuse resting-state and task-based FC networks (FCNs) to seek a better representation. We propose a framework based on the alternating diffusion map (ADM), which extracts geometry-preserving low-dimensional embeddings that parameterize the intrinsic variables driving the phenomenon of interest. Specifically, we first separately build resting-state and task-based FCNs as symmetric positive definite matrices using sparse inverse covariance estimation for each subject, and then utilize the ADM to fuse them in order to extract significant low-dimensional embeddings, which are used as fingerprints to identify individuals. The proposed framework is validated on the Philadelphia Neurodevelopmental Cohort data, where we conduct an extensive experimental study on resting-state and fractal $n$-back task fMRI for the classification of intelligence quotient (IQ). The fusion of resting-state and $n$-back task fMRI by the proposed framework achieves better classification accuracy than either fMRI paradigm alone, and the proposed framework is shown to outperform several other data fusion methods. To our knowledge, this paper is the first to demonstrate a successful extension of the ADM to fuse resting-state and task-based fMRI data for accurate prediction of IQ.
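The alternating-diffusion construction can be sketched as composing two view-specific diffusion kernels and embedding with the leading non-trivial eigenvectors of the product (a minimal illustration, not the paper's pipeline; names are hypothetical):

```python
import numpy as np

def markov_kernel(X, h=1.0):
    """Row-stochastic diffusion kernel from pairwise squared distances
    between the rows of X (one row per subject/sample)."""
    D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-D2 / h)
    return W / W.sum(axis=1, keepdims=True)

def alternating_diffusion_embedding(X1, X2, dim=2, h=1.0):
    """Fuse two views of the same samples: compose their diffusion
    kernels and embed with the leading non-trivial eigenvectors of
    the product operator."""
    P = markov_kernel(X1, h) @ markov_kernel(X2, h)
    vals, vecs = np.linalg.eig(P)
    order = np.argsort(-np.abs(vals))
    # Skip the trivial constant eigenvector (eigenvalue 1 of a
    # row-stochastic matrix).
    return np.real(vecs[:, order[1:dim + 1]])
```

The composition of the two kernels is what suppresses view-specific nuisance variability and retains the variables common to both views; here each subject's two FCNs would supply the two feature sets.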