Functional Magnetic Resonance Imaging (fMRI) is a non-invasive technique for studying brain activity. During an fMRI session, the subject either executes a set of tasks (task-related fMRI) or performs no task (resting-state fMRI), and a sequence of 3-D brain images is obtained for further analysis. In fMRI data, some apparent sources of activation are in fact caused by noise and artifacts, and removing these sources is essential before analyzing brain activations. Deep Neural Network (DNN) architectures can be used for denoising and artifact removal; their main advantage is the automatic learning of abstract and meaningful features from the raw data. This work presents advanced DNN architectures for noise and artifact classification that use both spatial and temporal information in resting-state fMRI sessions. The highest performance is achieved by a voting scheme combining information from all the domains, with an average accuracy of over 98% and a good balance between sensitivity and specificity (98.5% and 97.5%, respectively).
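The abstract does not specify how the per-domain decisions are fused, so the following is only a minimal sketch of one plausible soft-voting rule over domain-specific classifier outputs; the function names, the probability inputs, and the 0.5 threshold are illustrative assumptions.

```python
# Minimal sketch of a soft-voting scheme over per-domain classifiers.
# The underlying DNN architectures and the exact fusion rule are assumptions;
# the abstract only states that spatial and temporal predictions are combined.
import numpy as np

def soft_vote(prob_spatial, prob_temporal, prob_extra=None, weights=None):
    """Average per-domain 'signal vs. noise' probabilities and threshold at 0.5.

    Each argument is an array of shape (n_components,) holding the probability
    that a component reflects neural signal rather than noise/artifact.
    """
    probs = [p for p in (prob_spatial, prob_temporal, prob_extra) if p is not None]
    probs = np.stack(probs, axis=0)                   # (n_domains, n_components)
    if weights is None:
        weights = np.ones(len(probs)) / len(probs)
    fused = np.tensordot(weights, probs, axes=1)      # weighted average per component
    return (fused >= 0.5).astype(int)                 # 1 = keep, 0 = remove as artifact

# Example: three components scored by two domain-specific classifiers.
labels = soft_vote(np.array([0.9, 0.2, 0.6]), np.array([0.8, 0.1, 0.4]))
```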
Long-range temporal coherence (LRTC) is common in dynamical systems and fundamental to their function. LRTC in the brain has been shown to be important for cognition, so assessing LRTC may provide critical information for understanding the potential underpinnings of brain organization, function, and cognition. To facilitate this overarching goal, we provide a method, named temporal coherence mapping (TCM), to explicitly quantify LRTC using resting-state fMRI. TCM is based on correlation analysis of the transit states of the phase space reconstructed by temporal embedding. Several TCM properties were collected to measure LRTC, including the averaged correlation, the averaged anti-correlation, the ratio of correlation to anti-correlation, the mean coherent and incoherent durations, and the ratio between the coherent and incoherent times. TCM was first evaluated with simulations and then with the large Human Connectome Project dataset. Evaluation results showed that the TCM metrics can successfully differentiate signals with different temporal coherence regardless of the parameters used to reconstruct the phase space. In the human brain, all TCM metrics except the coherent/incoherent time ratio showed high test-retest reproducibility, and the TCM metrics are related to age, sex, and total cognitive scores. In summary, TCM provides a first-of-its-kind tool for assessing LRTC and the imbalance between coherence and incoherence, and its properties are physiologically and cognitively meaningful.
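To make the TCM idea concrete, the sketch below delay-embeds a single time series into a phase space and correlates the embedded states. The embedding dimension, delay, and the exact metric definitions are illustrative assumptions, not the parameters used in the paper.

```python
# Rough sketch of the TCM idea: delay-embed a voxel/ROI time series into a
# phase space, correlate the embedded states, and summarize coherence.
import numpy as np

def delay_embed(x, dim=5, tau=1):
    """Return the (n_states, dim) trajectory matrix of delay vectors."""
    n = len(x) - (dim - 1) * tau
    return np.stack([x[i:i + n] for i in range(0, dim * tau, tau)], axis=1)

def tcm_metrics(x, dim=5, tau=1):
    states = delay_embed(np.asarray(x, float), dim, tau)
    c = np.corrcoef(states)                       # state-to-state correlation matrix
    off = c[~np.eye(len(c), dtype=bool)]          # off-diagonal entries only
    mean_corr = off[off > 0].mean() if (off > 0).any() else 0.0
    mean_anticorr = off[off < 0].mean() if (off < 0).any() else 0.0
    ratio = mean_corr / abs(mean_anticorr) if mean_anticorr != 0 else np.inf
    return {"mean_corr": mean_corr, "mean_anticorr": mean_anticorr, "ratio": ratio}

metrics = tcm_metrics(np.cumsum(np.random.randn(200)))  # toy long-memory-like signal
```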
Purpose: To introduce two novel learning-based motion artifact removal networks (LEARN) for the estimation of quantitative motion- and $B_0$-inhomogeneity-corrected $R_2^*$ maps from motion-corrupted multi-Gradient-Recalled Echo (mGRE) MRI data. Methods: We train two convolutional neural networks (CNNs) to correct motion artifacts for high-quality estimation of quantitative $B_0$-inhomogeneity-corrected $R_2^*$ maps from mGRE sequences. The first CNN, LEARN-IMG, performs motion correction on complex mGRE images, to enable the subsequent computation of high-quality motion-free quantitative $R_2^*$ (and any other mGRE-enabled) maps using the standard voxel-wise analysis or machine-learning-based analysis. The second CNN, LEARN-BIO, is trained to directly generate motion- and $B_0$-inhomogeneity-corrected quantitative $R_2^*$ maps from motion-corrupted magnitude-only mGRE images by taking advantage of the biophysical model describing the mGRE signal decay. We show that both CNNs trained on synthetic MR images are capable of suppressing motion artifacts while preserving details in the predicted quantitative $R_2^*$ maps. Significant reduction of motion artifacts on experimental in vivo motion-corrupted data has also been achieved by using our trained models. Conclusion: Both LEARN-IMG and LEARN-BIO can enable the computation of high-quality motion- and $B_0$-inhomogeneity-corrected $R_2^*$ maps. LEARN-IMG performs motion correction on mGRE images and relies on the subsequent analysis for the estimation of $R_2^*$ maps, while LEARN-BIO directly performs motion- and $B_0$-inhomogeneity-corrected $R_2^*$ estimation. Both LEARN-IMG and LEARN-BIO jointly process all the available gradient echoes, which enables them to exploit spatial patterns available in the data. The high computational speed of LEARN-BIO is an advantage that can lead to a broader clinical application.
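For context, the sketch below shows a simple voxel-wise log-linear fit of the mGRE decay $S(TE) \approx S_0 e^{-R_2^* \cdot TE}$, i.e. the kind of standard analysis that motion-corrected LEARN-IMG images could feed into. The $B_0$-inhomogeneity term of the full biophysical model is omitted here, and the echo times and array shapes are illustrative assumptions.

```python
# Minimal sketch of standard voxel-wise R2* estimation from magnitude mGRE data
# via a log-linear least-squares fit (the B0-inhomogeneity correction used in
# the paper's biophysical model is not included in this toy example).
import numpy as np

def fit_r2star(mag, echo_times):
    """mag: (n_echoes, *spatial) magnitude mGRE images; returns an R2* map in 1/s."""
    te = np.asarray(echo_times)[:, None]               # (n_echoes, 1)
    y = np.log(mag.reshape(len(te), -1) + 1e-12)        # flatten spatial dims
    A = np.column_stack([np.ones_like(te), -te])        # model: ln S = ln S0 - R2* * TE
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)        # per-voxel least squares
    return coef[1].reshape(mag.shape[1:])               # R2* map

te = np.arange(1, 11) * 4e-3                            # e.g. 10 echoes, 4 ms spacing (assumed)
r2s_map = fit_r2star(np.random.rand(10, 64, 64) + 0.1, te)
```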
Task-free connectivity analyses have emerged as a powerful tool in functional neuroimaging. Because the cross-correlations that underlie connectivity measures are sensitive to distortion of time series, here we used a novel dynamic phantom to provide a ground truth for dynamic fidelity between blood oxygen level dependent (BOLD)-like inputs and fMRI outputs. We found that the de facto quality metric for task-free fMRI, temporal signal-to-noise ratio (tSNR), correlated inversely with dynamic fidelity; thus, studies optimized for tSNR actually produced time series that showed the greatest distortion of signal dynamics. Instead, the phantom showed that dynamic fidelity is reasonably approximated by a measure that, unlike tSNR, dissociates signal dynamics from scanner artifact. We then tested this measure, signal fluctuation sensitivity (SFS), against human resting-state data. As predicted by the phantom, SFS, and not tSNR, is associated with enhanced sensitivity to both local and long-range connectivity within the brain's default mode network.
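For reference, the baseline metric contrasted above, tSNR, is simply the voxel-wise temporal mean divided by the temporal standard deviation, as in the sketch below; the SFS formula itself is defined in the paper and is not reproduced here. Array shapes and data are placeholders.

```python
# Quick sketch of the conventional quality metric discussed above: temporal SNR.
import numpy as np

def tsnr(bold):
    """bold: (x, y, z, t) array; returns an (x, y, z) tSNR map."""
    mean = bold.mean(axis=-1)
    std = bold.std(axis=-1)
    return np.divide(mean, std, out=np.zeros_like(mean), where=std > 0)

tsnr_map = tsnr(np.random.rand(32, 32, 20, 200) + 100.0)  # toy 4-D BOLD volume
```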
The Blood-Oxygen-Level-Dependent (BOLD) signal of resting-state fMRI (rs-fMRI) records the temporal dynamics of intrinsic functional networks in the brain. However, existing deep learning methods applied to rs-fMRI either neglect the functional dependency between different brain regions in a network or discard the information in the temporal dynamics of brain activity. To overcome those shortcomings, we propose to formulate functional connectivity networks within the context of spatio-temporal graphs. We train a spatio-temporal graph convolutional network (ST-GCN) on short sub-sequences of the BOLD time series to model the non-stationary nature of functional connectivity. Simultaneously, the model learns the importance of graph edges within ST-GCN to gain insight into the functional connectivities contributing to the prediction. In analyzing the rs-fMRI of the Human Connectome Project (HCP, N=1,091) and the National Consortium on Alcohol and Neurodevelopment in Adolescence (NCANDA, N=773), ST-GCN is significantly more accurate than common approaches in predicting gender and age based on BOLD signals. Furthermore, the brain regions and functional connections significantly contributing to the predictions of our model are important markers according to the neuroscience literature.
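The sketch below illustrates one spatio-temporal graph-convolution step on short BOLD sub-sequences with a learnable edge-importance mask, in the spirit of the approach described above. The layer sizes, window length, and conditioning details are illustrative assumptions and not the paper's actual architecture.

```python
# Minimal sketch of an ST-GCN-style block: per-node feature mixing, graph
# aggregation weighted by a learned edge-importance mask, then temporal conv.
import torch
import torch.nn as nn

class STGCNBlock(nn.Module):
    def __init__(self, in_ch, out_ch, n_rois, t_kernel=9):
        super().__init__()
        self.spatial = nn.Conv2d(in_ch, out_ch, kernel_size=1)          # per-node feature mixing
        self.temporal = nn.Conv2d(out_ch, out_ch, (t_kernel, 1),
                                  padding=(t_kernel // 2, 0))            # convolution along time
        self.edge_importance = nn.Parameter(torch.ones(n_rois, n_rois))  # learned mask on adjacency

    def forward(self, x, adj):
        # x: (batch, channels, time, n_rois); adj: (n_rois, n_rois) normalized adjacency
        x = self.spatial(x)
        x = torch.einsum("bctv,vw->bctw", x, adj * self.edge_importance)  # graph aggregation
        return torch.relu(self.temporal(x))

# Toy usage: 90 ROIs (assumed parcellation size), sub-sequences of 64 time points, batch of 8.
block = STGCNBlock(1, 16, n_rois=90)
out = block(torch.randn(8, 1, 64, 90), torch.eye(90))   # -> (8, 16, 64, 90)
```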
Simultaneous EEG-fMRI acquisition and analysis has been widely used across many fields of brain science. However, removing the ballistocardiogram (BCG) artifacts in this setting remains a major challenge. Because it is impossible to obtain clean and BCG-contaminated EEG signals at the same time, BCG artifact removal is a typical unpaired signal-to-signal problem. To solve this problem, this paper proposes a new GAN training model, the Single Shot Reversible GAN (SSRGAN). The model allows bidirectional input to better combine the characteristics of the two types of signals, instead of using two independent models for bidirectional conversion as in previous work. Furthermore, the model is decomposed into multiple independent convolutional blocks with specific functions; additional training of these blocks improves the local representation ability of the model and thereby its overall performance. Experimental results show that, compared with existing methods, the proposed method removes BCG artifacts more effectively while retaining the useful EEG information.
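As a rough illustration of the "single bidirectional generator" idea, the sketch below conditions one 1-D convolutional network on a direction flag so the same weights map contaminated-to-clean and clean-to-contaminated. The layer sizes and this conditioning scheme are assumptions for illustration only; the actual SSRGAN instead decomposes the generator into independently trainable convolutional blocks.

```python
# Rough sketch: one shared generator for both conversion directions, conditioned
# on a constant direction channel (0 = remove BCG, 1 = re-add BCG).
import torch
import torch.nn as nn

class BidirectionalGenerator(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(2, hidden, kernel_size=15, padding=7), nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel_size=15, padding=7), nn.ReLU(),
            nn.Conv1d(hidden, 1, kernel_size=15, padding=7),
        )

    def forward(self, eeg, direction):
        # eeg: (batch, 1, samples); direction: 0.0 or 1.0
        flag = torch.full_like(eeg, float(direction))    # constant direction channel
        return self.net(torch.cat([eeg, flag], dim=1))   # (batch, 1, samples)

gen = BidirectionalGenerator()
contaminated = torch.randn(4, 1, 1000)                   # toy EEG segments
cleaned = gen(contaminated, direction=0.0)               # BCG-removal direction
resynthesized = gen(cleaned, direction=1.0)              # reverse direction (cycle-style check)
```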