Purpose: This work proposes a novel approach to efficiently generate MR fingerprints for MR fingerprinting (MRF) problems based on generative adversarial networks (GANs), an unsupervised deep learning model. Methods: The GAN model is adapted and modified for better convergence and performance, resulting in an MRF-specific model named GAN-MRF. The GAN-MRF model is trained, validated, and tested using different MRF fingerprints simulated from the Bloch equations with a given MRF sequence. The performance and robustness of the model are further tested using in vivo data collected on a 3 Tesla scanner from a healthy volunteer, together with MRF dictionaries of different sizes. T1 and T2 maps are generated and compared quantitatively. Results: The validation and testing curves for the GAN-MRF model show no evidence of high-bias or high-variance problems. The sample MRF fingerprints generated from the trained GAN-MRF model agree well with the benchmark fingerprints simulated from the Bloch equations. The in vivo T1 and T2 maps generated from the GAN-MRF fingerprints are in good agreement with those generated from the Bloch-simulated fingerprints, demonstrating the performance and robustness of the proposed GAN-MRF model. Moreover, the MRF dictionary generation time is reduced from hours to sub-second for the testing dictionary. Conclusion: The GAN-MRF model enables fast and accurate generation of MRF fingerprints. It significantly shortens the MRF dictionary generation process and opens the door to real-time applications and sequence optimization problems.
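A minimal sketch of the kind of conditional generator/discriminator pair the GAN-MRF idea implies, in PyTorch. All layer sizes, module names, and the fingerprint length are illustrative assumptions, not the published GAN-MRF architecture: the generator maps tissue parameters (T1, T2) to a fingerprint time series, and the discriminator scores fingerprints as Bloch-simulated versus generated.

```python
# Conditional-GAN sketch for fingerprint generation (illustrative only).
import torch
import torch.nn as nn

FINGERPRINT_LEN = 1000  # assumed number of time points in the MRF sequence

class Generator(nn.Module):
    """Maps normalized tissue parameters (T1, T2) to a synthetic fingerprint."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2, 256), nn.ReLU(),
            nn.Linear(256, 512), nn.ReLU(),
            nn.Linear(512, FINGERPRINT_LEN), nn.Tanh(),
        )

    def forward(self, t1t2):            # t1t2: (batch, 2)
        return self.net(t1t2)           # (batch, FINGERPRINT_LEN)

class Discriminator(nn.Module):
    """Scores a fingerprint as Bloch-simulated (real) vs. generated (fake)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(FINGERPRINT_LEN, 512), nn.LeakyReLU(0.2),
            nn.Linear(512, 1),
        )

    def forward(self, fp):
        return self.net(fp)             # raw logit

# One discriminator step on toy stand-in data:
G, D = Generator(), Discriminator()
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()
params = torch.rand(8, 2)                    # normalized (T1, T2) pairs
real = torch.randn(8, FINGERPRINT_LEN)       # stand-in for Bloch simulations
fake = G(params).detach()
loss_d = bce(D(real), torch.ones(8, 1)) + bce(D(fake), torch.zeros(8, 1))
opt_d.zero_grad()
loss_d.backward()
opt_d.step()
```

In such a setup, a dictionary is generated by a single batched forward pass of the trained generator over all (T1, T2) pairs, which is what makes sub-second generation plausible compared with hours of Bloch simulation.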
Magnetic resonance fingerprinting (MRF) enables fast, multiparametric MR imaging. Despite fast acquisition, the state-of-the-art dictionary-matching reconstruction of MRF is slow and lacks scalability. To overcome these limitations, neural network (NN) approaches that estimate MR parameters from fingerprints have recently been proposed. Here, we revisit NN-based MRF reconstruction to jointly learn the forward process, from MR parameters to fingerprints, and the backward process, from fingerprints to MR parameters, by leveraging invertible neural networks (INNs). As a proof of concept, we perform various experiments showing the benefit of learning the forward process, i.e., the Bloch simulations, for improved MR parameter estimation. The benefit is especially pronounced when MR parameter estimation is difficult due to MR physical restrictions. INNs might therefore be a feasible alternative to the current, solely backward-based NNs for MRF reconstruction.
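A minimal sketch of the invertible building block such INNs typically rely on: an affine coupling layer, which can be evaluated exactly in both directions, so a single network represents both the forward map and its inverse. This is a generic RealNVP-style layer in PyTorch with assumed dimensions, not the paper's architecture.

```python
# Affine coupling layer: a standard invertible block behind many INNs
# (RealNVP-style; dimensions are illustrative).
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.half = dim // 2
        # Small nets predict a per-element scale and shift from the first half.
        self.scale = nn.Sequential(nn.Linear(self.half, 64), nn.ReLU(),
                                   nn.Linear(64, dim - self.half), nn.Tanh())
        self.shift = nn.Sequential(nn.Linear(self.half, 64), nn.ReLU(),
                                   nn.Linear(64, dim - self.half))

    def forward(self, x):               # forward direction
        x1, x2 = x[:, :self.half], x[:, self.half:]
        y2 = x2 * torch.exp(self.scale(x1)) + self.shift(x1)
        return torch.cat([x1, y2], dim=1)

    def inverse(self, y):               # exact analytic inverse
        y1, y2 = y[:, :self.half], y[:, self.half:]
        x2 = (y2 - self.shift(y1)) * torch.exp(-self.scale(y1))
        return torch.cat([y1, x2], dim=1)

layer = AffineCoupling(dim=8)
x = torch.randn(4, 8)
assert torch.allclose(layer.inverse(layer(x)), x, atol=1e-5)
```

Stacking such layers (with permutations between them) yields a bijective network, so training the parameter-to-fingerprint direction simultaneously constrains the fingerprint-to-parameter direction.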
Magnetic Resonance Fingerprinting (MRF) enables simultaneous mapping of multiple tissue parameters, such as T1 and T2 relaxation times. The working principle of MRF relies on varying acquisition parameters pseudo-randomly, so that each tissue generates its unique signal evolution during scanning. Although MRF provides faster scanning, it suffers from erroneous and slow generation of the corresponding parametric maps. Moreover, there is a need for explainable architectures for understanding which signals guide the generation of accurate parametric maps. In this paper, we address both of these shortcomings by proposing a novel neural network architecture consisting of a channel-wise attention module and a fully convolutional network. The proposed approach, evaluated on three simulated MRF signals, reduces the error in the reconstructed tissue parameters by 8.88% for T1 and 75.44% for T2 with respect to state-of-the-art methods. Another contribution of this study is a new channel selection method: attention-based channel selection. Furthermore, the effects of patch size and of the number of temporal frames of the MRF signal on channel reduction are analyzed by employing channel-wise attention.
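A minimal sketch of a squeeze-and-excitation style channel-wise attention module of the kind the abstract describes, in PyTorch with hypothetical sizes. The learned per-channel weights can also be read out to rank channels, which is the idea behind attention-based channel selection.

```python
# Squeeze-and-excitation style channel attention (illustrative sketch;
# not the exact module from the paper).
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):                   # x: (batch, C, H, W)
        w = self.fc(x.mean(dim=(2, 3)))     # squeeze: global average pool
        return x * w[:, :, None, None], w   # excite, and expose the weights

attn = ChannelAttention(channels=16)
feats = torch.randn(2, 16, 32, 32)          # e.g., 16 MRF temporal frames
out, weights = attn(feats)
# Attention-based channel selection: keep the most strongly weighted frames.
top_channels = weights.mean(dim=0).topk(8).indices
```

Reading the averaged attention weights gives a natural channel-importance ranking, so temporal frames that contribute little to the parametric maps can be dropped.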
Accuracy and consistency are two key factors in computer-assisted magnetic resonance (MR) image analysis. However, contrast variation from site to site, caused by a lack of standardization in MR acquisition, impedes consistent measurements. In recent years, image harmonization approaches have been proposed to compensate for contrast variation in MR images. Current harmonization approaches either require cross-site traveling subjects for supervised training or rely heavily on site-specific harmonization models to achieve harmonization accuracy. These requirements potentially limit the application of current harmonization methods in large-scale multi-site studies. In this work, we propose an unsupervised MR harmonization framework, CALAMITI (Contrast Anatomy Learning and Analysis for MR Intensity Translation and Integration), based on information bottleneck theory. CALAMITI learns a disentangled latent space using a unified structure for multi-site harmonization without the need for traveling subjects. Our model is also able to adapt itself to harmonize MR images from a new site with fine-tuning solely on images from the new site. Both qualitative and quantitative results show that the proposed method achieves superior performance compared with other unsupervised harmonization approaches.
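A minimal sketch of the disentangling idea in PyTorch; all module names and sizes are assumptions, and the information-bottleneck regularization is simplified away. Separate encoders produce an anatomy code and a contrast code that a decoder recombines, so swapping in another site's contrast code translates an image between site contrasts.

```python
# Anatomy/contrast disentangling autoencoder in the spirit of CALAMITI
# (illustrative only; losses and bottleneck terms omitted).
import torch
import torch.nn as nn

class Conv(nn.Sequential):
    def __init__(self, cin, cout):
        super().__init__(nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU())

# Spatial anatomy code and a global contrast code.
anatomy_enc = nn.Sequential(Conv(1, 16), Conv(16, 8))
contrast_enc = nn.Sequential(Conv(1, 8), nn.AdaptiveAvgPool2d(1),
                             nn.Flatten(), nn.Linear(8, 4))

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(Conv(8 + 4, 16), nn.Conv2d(16, 1, 3, padding=1))

    def forward(self, anat, contrast):
        b, _, h, w = anat.shape
        c = contrast[:, :, None, None].expand(b, -1, h, w)  # broadcast spatially
        return self.net(torch.cat([anat, c], dim=1))

dec = Decoder()
img_a = torch.randn(2, 1, 64, 64)   # images from site A
img_b = torch.randn(2, 1, 64, 64)   # images from site B
# Harmonize: anatomy from A rendered with the contrast of B.
harmonized = dec(anatomy_enc(img_a), contrast_enc(img_b))
# Self-reconstruction term used during training.
recon_loss = nn.functional.l1_loss(dec(anatomy_enc(img_a), contrast_enc(img_a)), img_a)
```

Because the contrast code is a small global vector, fine-tuning on images from a new site mainly needs to fit that site's contrast distribution rather than relearning anatomy.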
The 3D morphology and quantitative assessment of the knee articular cartilages (i.e., femoral, tibial, and patellar cartilage) in magnetic resonance (MR) imaging are of great importance for diagnostic decision making in knee radiographic osteoarthritis (OA). However, effective and efficient delineation of all the knee articular cartilages in large, high-resolution 3D MR knee data remains an open challenge. In this paper, we propose a novel framework to solve the MR knee cartilage segmentation task. The key contribution is an adversarial-learning-based collaborative multi-agent segmentation network. In the proposed network, we use three parallel segmentation agents to label the cartilages in their respective regions of interest (ROIs) and then fuse the three cartilages with a novel ROI-fusion layer. The collaborative learning is driven by an adversarial sub-network. The ROI-fusion layer not only fuses the individual cartilages from the multiple agents, but also backpropagates the training loss from the adversarial sub-network to each agent, enabling joint learning of shape and spatial constraints. Extensive evaluations are conducted on a dataset of hundreds of MR knee volumes with diverse populations, and the proposed method shows superior performance.
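A minimal sketch of the collaborative multi-agent idea in PyTorch, with toy agents and hypothetical ROI coordinates rather than the paper's architecture: each per-ROI agent segments its crop, and an ROI-fusion step pastes the predictions back into a single full-volume label map on which an adversarial loss can act.

```python
# Multi-agent segmentation with ROI fusion (illustrative sketch).
import torch
import torch.nn as nn

def make_agent():
    # One tiny segmentation agent per cartilage ROI (femoral/tibial/patellar).
    return nn.Sequential(nn.Conv3d(1, 8, 3, padding=1), nn.ReLU(),
                         nn.Conv3d(8, 1, 3, padding=1), nn.Sigmoid())

agents = nn.ModuleList(make_agent() for _ in range(3))

def fuse(volume, rois):
    """Run each agent on its ROI crop and paste predictions into one map.

    rois: list of three (z, y, x) slice tuples locating each cartilage ROI.
    The pasting is differentiable, so a loss computed on the fused map
    (e.g., from an adversarial sub-network) backpropagates to every agent.
    """
    fused = torch.zeros_like(volume)
    for agent, roi in zip(agents, rois):
        idx = (slice(None), slice(None)) + roi   # keep batch/channel dims
        fused[idx] = agent(volume[idx])
    return fused

vol = torch.randn(1, 1, 32, 64, 64)              # toy MR knee volume
rois = [(slice(0, 16), slice(0, 32), slice(0, 32)),
        (slice(16, 32), slice(0, 32), slice(32, 64)),
        (slice(0, 16), slice(32, 64), slice(0, 32))]
seg = fuse(vol, rois)
```

The single fused map is what a discriminator would judge, which is how shape and spatial plausibility constraints reach all three agents jointly.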
Cardiac MR image segmentation is essential for the morphological and functional analysis of the heart. Inspired by how experienced clinicians assess cardiac morphology and function across multiple standard views (i.e., long- and short-axis views), we propose a novel approach that learns anatomical shape priors across different 2D standard views and leverages these priors to segment the left ventricular (LV) myocardium from short-axis MR image stacks. The proposed segmentation method has the advantage of being a 2D network while incorporating spatial context from multiple, complementary views that span a 3D space. Our method achieves accurate and robust segmentation of the myocardium across different short-axis slices (from apex to base), outperforming baseline models (e.g., 2D U-Net, 3D U-Net) while achieving higher data efficiency. Compared to the 2D U-Net, when only 10% of the training data was used, the proposed method reduces the mean Hausdorff distance (mm) on the test set from 3.24 to 2.49 on the apical slices, from 2.34 to 2.09 on the middle slices, and from 3.62 to 2.76 on the basal slices.
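For reference, a minimal sketch of how a per-slice Hausdorff distance like those reported above can be computed from two binary myocardium masks, using SciPy; the masks and pixel spacing here are made-up inputs, and multiplying point coordinates by the in-plane spacing converts pixels to mm.

```python
# Symmetric Hausdorff distance between two binary segmentation masks
# (illustrative inputs; real evaluations often use boundary points only).
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def hausdorff_mm(mask_a, mask_b, spacing_mm=1.0):
    pts_a = np.argwhere(mask_a) * spacing_mm   # foreground coordinates in mm
    pts_b = np.argwhere(mask_b) * spacing_mm
    return max(directed_hausdorff(pts_a, pts_b)[0],
               directed_hausdorff(pts_b, pts_a)[0])

pred = np.zeros((64, 64), dtype=bool); pred[20:40, 20:40] = True
gt = np.zeros((64, 64), dtype=bool);   gt[22:42, 22:42] = True
print(hausdorff_mm(pred, gt, spacing_mm=1.25))
```

Averaging this per-slice value over the apical, middle, or basal slices of the test set yields the mean Hausdorff distances quoted in the abstract.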