
Spatially Regularized Parametric Map Reconstruction for Fast Magnetic Resonance Fingerprinting

Posted by Fabian Balsiger
Publication date: 2019
Paper language: English





Magnetic resonance fingerprinting (MRF) provides a unique concept for the simultaneous and fast acquisition of multiple quantitative MR parameters. Despite its acquisition efficiency, clinical adoption of MRF is hindered by its dictionary matching-based reconstruction, which is computationally demanding and lacks scalability. Here, we propose a convolutional neural network-based reconstruction that enables accurate and fast reconstruction of parametric maps and can be adapted to the desired degree of spatial regularization and reconstruction capacity. We evaluated the method using MRF T1-FF, an MRF sequence for mapping the T1 relaxation time of water (T1H2O) and the fat fraction (FF). We demonstrate the method's performance on a highly heterogeneous dataset of 164 patients with various neuromuscular diseases imaged at the thighs and legs. We empirically show the benefit of incorporating spatial regularization during the reconstruction and demonstrate that the method learns features that are meaningful from an MR physics perspective. Further, we investigate the ability of the method to handle highly heterogeneous morphometric variations and its generalization to anatomical regions unseen during training. The obtained results outperform the state of the art in deep learning-based MRF reconstruction. Compared to dictionary matching on a test set of 50 patients, the method achieved normalized root mean squared errors of 0.048 $\pm$ 0.011 for T1H2O maps and 0.027 $\pm$ 0.004 for FF maps. Coupled with fast MRF sequences, the proposed method has the potential to enable multiparametric MR imaging in clinically feasible time.
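The abstract includes no code; purely as an illustrative sketch of the spatial-regularization idea, a patch-based CNN that maps the temporal MRF fingerprints of a pixel neighborhood to T1H2O and FF values could look as follows. All layer widths, the number of time points, and the NRMSE normalization are assumptions, not values taken from the paper.

```python
# Hypothetical sketch of a patch-based CNN for MRF parametric map reconstruction.
# Input:  a spatial patch of temporal fingerprints, shape (batch, T, H, W),
#         where T is the number of MRF time points (placeholder value below).
# Output: two parametric maps (T1H2O and FF) over the same patch.
import torch
import torch.nn as nn

class SpatialMRFNet(nn.Module):
    def __init__(self, n_timepoints: int = 175, n_maps: int = 2, width: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            # 3x3 convolutions mix information from spatial neighbors,
            # which is what provides the spatial regularization.
            nn.Conv2d(n_timepoints, width, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(width, width, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            # 1x1 convolution regresses the parametric values per pixel.
            nn.Conv2d(width, n_maps, kernel_size=1),
        )

    def forward(self, fingerprints: torch.Tensor) -> torch.Tensor:
        return self.net(fingerprints)

def nrmse(pred: torch.Tensor, ref: torch.Tensor) -> torch.Tensor:
    # One common definition (normalization by the reference range);
    # the paper's exact normalization may differ.
    return torch.sqrt(torch.mean((pred - ref) ** 2)) / (ref.max() - ref.min())

# Example: a 32x32 patch with 175 time points per pixel.
x = torch.randn(1, 175, 32, 32)
maps = SpatialMRFNet()(x)   # -> shape (1, 2, 32, 32): T1H2O and FF
```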


Read also

Elisabeth Hoppe (2019)
Recently, Magnetic Resonance Fingerprinting (MRF) was proposed as a quantitative imaging technique for the simultaneous acquisition of tissue parameters such as the relaxation times $T_1$ and $T_2$. Although the acquisition is highly accelerated, the state-of-the-art reconstruction suffers from long computation times: template matching is used to find the signal most similar to the measured one by comparing it to pre-simulated signals of possible parameter combinations in a discretized dictionary. Deep learning approaches can overcome this limitation by providing a direct mapping from the measured signal to the underlying parameters in one forward pass through a network. In this work, we propose a Recurrent Neural Network (RNN) architecture in combination with a novel quantile layer. RNNs are well suited to processing time-dependent signals, and the quantile layer helps to suppress noisy outliers by considering the spatial neighbors of the signal. We evaluate our approach using in-vivo data from multiple brain slices and several volunteers in various experiments. We show that the RNN approach with small patches of complex-valued input signals in combination with a quantile layer outperforms other architectures, e.g., previously proposed CNNs for MRF reconstruction, reducing the error in $T_1$ and $T_2$ by more than 80%.
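As a rough, hypothetical sketch of the quantile-layer idea described above (not the authors' architecture): per-pixel parameters are regressed from the time signal with a GRU, and a quantile over the spatial neighborhood replaces the noisy point estimate. All sizes are placeholders.

```python
import torch
import torch.nn as nn

class RNNWithQuantile(nn.Module):
    def __init__(self, hidden: int = 64, n_params: int = 2, q: float = 0.5):
        super().__init__()
        self.q = q
        self.rnn = nn.GRU(input_size=2, hidden_size=hidden, batch_first=True)  # 2 = real/imag
        self.head = nn.Linear(hidden, n_params)

    def forward(self, signals: torch.Tensor) -> torch.Tensor:
        # signals: (batch, neighbors, time, 2) -- complex-valued fingerprints of a
        # pixel and its spatial neighbors, split into real and imaginary channels.
        b, k, t, c = signals.shape
        _, h = self.rnn(signals.reshape(b * k, t, c))      # last hidden state per signal
        params = self.head(h[-1]).reshape(b, k, -1)        # per-neighbor parameter estimates
        # Quantile layer: the median (q=0.5) over neighbors suppresses outliers.
        return torch.quantile(params, self.q, dim=1)

# Example: a 3x3 patch (9 neighbors), 300 time points, complex-valued signal.
out = RNNWithQuantile()(torch.randn(4, 9, 300, 2))  # -> (4, 2), e.g. T1 and T2
```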
Magnetic Resonance Fingerprinting (MRF) is a relatively new multi-parametric quantitative imaging method that involves a two-step process: (i) reconstructing a series of time frames from highly undersampled non-Cartesian spiral k-space data and (ii) pattern matching using the time frames to infer tissue properties (e.g., T1 and T2 relaxation times). In this paper, we introduce a novel end-to-end deep learning framework to seamlessly map the tissue properties directly from spiral k-space MRF data, thereby avoiding time-consuming processing such as the nonuniform fast Fourier transform (NUFFT) and dictionary-based fingerprint matching. Our method directly consumes the non-Cartesian k-space data, performs adaptive density compensation, and predicts multiple tissue property maps in one forward pass. Experiments on both 2D and 3D MRF data demonstrate that quantification accuracy comparable to state-of-the-art methods can be achieved within 0.5 seconds, which is 1100 to 7700 times faster than the original MRF framework. The proposed method is thus promising for facilitating the adoption of MRF in clinical settings.
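To make the "adaptive density compensation plus direct mapping" idea concrete, here is a heavily simplified, hypothetical sketch. The learnable per-sample weights stand in for adaptive density compensation; the frame aggregation and fully connected mapper are placeholders and not the authors' network.

```python
import torch
import torch.nn as nn

class DirectKSpaceMRF(nn.Module):
    def __init__(self, n_samples: int = 2048, n_maps: int = 2, out_size: int = 64):
        super().__init__()
        # One learnable compensation weight per non-Cartesian k-space sample.
        self.density_weights = nn.Parameter(torch.ones(n_samples))
        self.mapper = nn.Sequential(
            nn.Linear(n_samples * 2, 1024), nn.ReLU(),   # 2 = real/imag channels
            nn.Linear(1024, n_maps * out_size * out_size),
        )
        self.n_maps, self.out_size = n_maps, out_size

    def forward(self, kspace: torch.Tensor) -> torch.Tensor:
        # kspace: (batch, frames, samples, 2) spiral readouts, no NUFFT applied.
        compensated = kspace * self.density_weights[None, None, :, None]
        pooled = compensated.mean(dim=1)                 # crude aggregation over time frames
        maps = self.mapper(pooled.flatten(1))
        return maps.view(-1, self.n_maps, self.out_size, self.out_size)

# Example: 500 time frames of 2048 spiral samples each.
maps = DirectKSpaceMRF()(torch.randn(1, 500, 2048, 2))  # -> (1, 2, 64, 64)
```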
Purpose: To study the effects of magnetization transfer (MT, in which a semisolid spin pool interacts with the free pool) in the context of magnetic resonance fingerprinting (MRF). Methods: Simulations and phantom experiments were performed to study the impact of MT on the MRF signal and its potential influence on T1 and T2 estimation. Subsequently, an MRF sequence implementing off-resonance MT pulses and a dictionary with an MT dimension, obtained by incorporating a two-pool model, were used to estimate the fractional pool size in addition to the B1+, T1, and T2 values. The proposed method was evaluated in the human brain. Results: Simulations and phantom experiments showed that the MRF signal obtained from a cross-linked bovine serum sample is influenced by MT. Using a dictionary based on an MT model, a better match between simulated and acquired MR signals can be obtained (NRMSE 1.3% versus 4.7%). Adding off-resonance MT pulses can improve the differentiation of MT from T1 and T2. In-vivo results showed that MT affects the MRF signals from white matter (fractional pool size ~16%) and gray matter (fractional pool size ~10%). Furthermore, longer T1 (~1060 ms versus ~860 ms) and T2 (~47 ms versus ~35 ms) values are observed in white matter if MT is accounted for. Conclusion: Our experiments demonstrate a potential influence of MT on the quantification of T1 and T2 with MRF. A model that encompasses MT effects can improve the accuracy of the estimated relaxation parameters and allows quantification of the fractional pool size.
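For context, an MT dictionary of this kind is typically built on coupled two-pool (Bloch-McConnell) longitudinal equations of roughly the following form; the exact model and the lineshape behind the saturation rate used in the paper may differ.

```latex
% Common form of the coupled longitudinal equations for a two-pool MT model
% (free pool f, semisolid pool s); details in the paper's dictionary may differ.
\[
\begin{aligned}
\frac{dM_z^{f}}{dt} &= \frac{M_0^{f}-M_z^{f}}{T_1^{f}} - k_{f}M_z^{f} + k_{s}M_z^{s},\\
\frac{dM_z^{s}}{dt} &= \frac{M_0^{s}-M_z^{s}}{T_1^{s}} - k_{s}M_z^{s} + k_{f}M_z^{f} - W(\Delta)\,M_z^{s},
\end{aligned}
\qquad
F = \frac{M_0^{s}}{M_0^{f}+M_0^{s}},
\]
```

where $W(\Delta)$ is the saturation rate induced by the off-resonance MT pulse acting on the semisolid pool and $F$ is the fractional pool size estimated above.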
Recent works have demonstrated that deep learning (DL) based compressed sensing (CS) implementations can accelerate Magnetic Resonance (MR) imaging by reconstructing MR images from sub-sampled k-space data. However, the network architectures adopted in previous methods are all designed by hand. Neural Architecture Search (NAS) algorithms can automatically build neural network architectures that have outperformed human-designed ones in several vision tasks. Inspired by this, we propose a novel and efficient network for MR image reconstruction via NAS instead of manual design. In particular, a specific cell structure, integrated into a model-driven MR reconstruction pipeline, is automatically searched from a flexible, pre-defined operation search space in a differentiable manner. Experimental results show that our searched network produces better reconstruction results than previous state-of-the-art methods in terms of PSNR and SSIM with 4-6 times fewer computational resources. Extensive experiments were conducted to analyze how hyper-parameters affect reconstruction performance and the searched structures. The generalizability of the searched architecture was also evaluated on MR datasets of different organs. Our method achieves a better trade-off between computation cost and reconstruction performance for the MR reconstruction problem, generalizes well, and offers insights for designing neural networks for other medical imaging applications. The evaluation code will be available at https://github.com/yjump/NAS-for-CSMRI.
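Differentiable search of the kind described above is commonly realized with a softmax-weighted mixture of candidate operations (as in DARTS); the minimal sketch below uses an assumed candidate set, not the paper's actual search space.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MixedOp(nn.Module):
    """Blend candidate operations with learnable architecture weights."""
    def __init__(self, channels: int):
        super().__init__()
        self.ops = nn.ModuleList([
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.Conv2d(channels, channels, 5, padding=2),
            nn.Conv2d(channels, channels, 3, padding=2, dilation=2),  # dilated conv
            nn.Identity(),                                            # skip connection
        ])
        # One architecture parameter per candidate, trained by gradient descent.
        self.alpha = nn.Parameter(torch.zeros(len(self.ops)))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        weights = F.softmax(self.alpha, dim=0)
        return sum(w * op(x) for w, op in zip(weights, self.ops))

# After the search, the operation with the largest alpha is kept and the rest pruned.
```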
Fast and accurate reconstruction of magnetic resonance (MR) images from under-sampled data is important in many clinical applications. In recent years, deep learning-based methods have been shown to produce superior performance on MR image reconstruction. However, these methods require large amounts of data, which are difficult to collect and share due to the high cost of acquisition and medical data privacy regulations. To overcome this challenge, we propose a federated learning (FL) based solution in which we take advantage of the MR data available at different institutions while preserving patients' privacy. However, the generalizability of models trained in the FL setting can still be suboptimal due to domain shift, which results from the data being collected at multiple institutions with different sensors, disease types, acquisition protocols, etc. To circumvent this challenge, we propose cross-site modeling for MR image reconstruction, in which the learned intermediate latent features of the different source sites are aligned with the distribution of the latent features at the target site. Extensive experiments are conducted to provide various insights about FL for MR image reconstruction. Experimental results demonstrate that the proposed framework is a promising direction for utilizing multi-institutional data without compromising patients' privacy to achieve improved MR image reconstruction. Our code will be available at https://github.com/guopengf/FLMRCM.
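A minimal sketch of the two ingredients named here: federated parameter averaging across sites and a penalty that pulls source-site latent features toward the target-site feature distribution. The mean/variance matching shown is a stand-in, not the paper's actual cross-site alignment scheme.

```python
import copy
import torch

def federated_average(site_models):
    """Average the parameters of per-site models into a global model (FedAvg-style)."""
    global_model = copy.deepcopy(site_models[0])
    avg_state = {
        name: torch.stack([m.state_dict()[name].float() for m in site_models]).mean(dim=0)
        for name in global_model.state_dict()
    }
    global_model.load_state_dict(avg_state)
    return global_model

def feature_alignment_loss(source_feats: torch.Tensor, target_feats: torch.Tensor) -> torch.Tensor:
    """Penalize the mismatch between source- and target-site latent feature statistics."""
    mean_gap = (source_feats.mean(dim=0) - target_feats.mean(dim=0)).pow(2).sum()
    var_gap = (source_feats.var(dim=0) - target_feats.var(dim=0)).pow(2).sum()
    return mean_gap + var_gap
```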
