
Deep learning using a biophysical model for Robust and Accelerated Reconstruction (RoAR) of quantitative and artifact-free R2* images

 Added by Ulugbek Kamilov
 Publication date 2019
Language: English





Purpose: To introduce a novel deep learning method for Robust and Accelerated Reconstruction (RoAR) of quantitative, B0-inhomogeneity-corrected R2* maps from multi-gradient recalled echo (mGRE) MRI data.

Methods: RoAR trains a convolutional neural network (CNN) to generate quantitative R2* maps free from field-inhomogeneity artifacts by adopting a self-supervised learning strategy given (a) mGRE magnitude images, (b) the biophysical model describing mGRE signal decay, and (c) a preliminarily evaluated F-function accounting for the contribution of macroscopic B0 field inhomogeneities. Importantly, no ground-truth R2* images are required, and the F-function is needed only during RoAR training, not during application.

Results: RoAR preserves all features of R2* maps while offering significant improvements over existing methods in computation speed (seconds vs. hours) and reduced sensitivity to noise. Even for data with SNR = 5, RoAR produced R2* maps with an accuracy of 22%, compared with 47% for voxel-wise analysis. For SNR = 10, RoAR accuracy improved to 17% vs. 24% for direct voxel-wise analysis.

Conclusion: RoAR is trained to recognize macroscopic magnetic field inhomogeneities directly from the input magnitude-only mGRE data and to eliminate their effect on R2* measurements. Its training is based on the biophysical model and does not require ground-truth R2* maps. Because RoAR uses signal information not just from individual voxels but also from spatial patterns in the images, it reduces the sensitivity of R2* maps to noise in the data. These features, together with high computational speed, provide significant benefits for the potential use of RoAR in clinical settings.
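The self-supervised idea in the Methods section (training against the biophysical mGRE decay model rather than ground-truth R2* maps) can be sketched as a model-consistency loss. This is a minimal single-voxel illustration: the echo times, the toy F-function, and all names below are assumptions for the sketch, not the paper's actual implementation.

```python
import numpy as np

def mgre_signal(s0, r2s, te, F):
    """mGRE magnitude decay: S(te) = S0 * F(te) * exp(-R2* * te)."""
    return s0 * F * np.exp(-r2s * te)

def self_supervised_loss(pred_r2s, pred_s0, measured, te, F):
    """Model-consistency loss: compares measured mGRE magnitudes with the
    biophysical model prediction; no ground-truth R2* map is involved."""
    model = mgre_signal(pred_s0, pred_r2s, te, F)
    return np.mean((measured - model) ** 2)

# Simulated single-voxel example (illustrative values)
te = np.linspace(0.004, 0.040, 10)        # echo times in seconds (assumed)
F = np.sinc(0.5 * te / te[-1])            # toy B0-inhomogeneity F-function (assumed)
measured = mgre_signal(1.0, 30.0, te, F)  # noiseless signal with R2* = 30 1/s

loss_true = self_supervised_loss(30.0, 1.0, measured, te, F)
loss_off = self_supervised_loss(50.0, 1.0, measured, te, F)
assert loss_true < loss_off  # the loss is minimized at the true R2*
```

In RoAR a CNN would predict the R2* (and S0) maps for all voxels jointly, so the loss also couples neighboring voxels through the network's receptive field; the F-function enters only this training loss, which is why it is not needed at application time.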




Related research

High spatial and temporal resolution across the whole brain is essential to accurately resolve neural activities in fMRI. Therefore, accelerated imaging techniques target improved coverage with high spatio-temporal resolution. Simultaneous multi-slice (SMS) imaging combined with in-plane acceleration is used in large studies that involve ultrahigh field fMRI, such as the Human Connectome Project. However, for even higher acceleration rates, these methods cannot be reliably utilized due to aliasing and noise artifacts. Deep learning (DL) reconstruction techniques have recently gained substantial interest for improving highly-accelerated MRI. Supervised learning of DL reconstructions generally requires fully-sampled training datasets, which are not available for high-resolution fMRI studies. To tackle this challenge, self-supervised learning has been proposed for training DL reconstructions with only undersampled datasets, showing performance similar to supervised learning. In this study, we utilize a self-supervised physics-guided DL reconstruction on 5-fold SMS and 4-fold in-plane accelerated 7T fMRI data. Our results show that our self-supervised DL reconstruction produces high-quality images at this 20-fold acceleration, substantially improving on existing methods, while showing similar functional precision and temporal effects in the subsequent analysis compared to a standard 10-fold accelerated acquisition.
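One common way to train a reconstruction network with only undersampled data, as this abstract describes, is to partition the acquired k-space locations into a network-input set and a held-out loss set. The sketch below shows only that masking split on a 1-D mask; the undersampling pattern and split fraction are assumptions, not the study's actual protocol.

```python
import numpy as np

rng = np.random.default_rng(1)

def split_acquired_mask(mask, loss_frac=0.4):
    """Partition the acquired (undersampled) k-space locations into two
    disjoint masks: one fed to the network, one held out to compute the
    self-supervised loss. No fully-sampled reference is needed."""
    idx = np.flatnonzero(mask)
    loss_idx = rng.choice(idx, size=int(loss_frac * idx.size), replace=False)
    loss_mask = np.zeros_like(mask)
    loss_mask[loss_idx] = 1
    input_mask = mask - loss_mask
    return input_mask, loss_mask

# Regular 4-fold undersampling of 64 k-space lines (illustrative)
mask = np.zeros(64, dtype=int)
mask[::4] = 1

inp, loss = split_acquired_mask(mask)
assert np.array_equal(inp + loss, mask)  # disjoint partition of acquired lines
assert inp.sum() > 0 and loss.sum() > 0
```

During training, the network reconstructs from `inp` and the loss is evaluated only at the `loss` locations, so the held-out measurements act as a stand-in for ground truth.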
Deep Learning (DL) has shown potential in accelerating Magnetic Resonance Image acquisition and reconstruction. Nevertheless, there is a dearth of tailored methods to guarantee that the reconstruction of small features is achieved with high fidelity. In this work, we employ adversarial attacks to generate small synthetic perturbations, which are difficult to reconstruct for a trained DL reconstruction network. Then, we use robust training to increase the network's sensitivity to these small features and encourage their reconstruction. Next, we investigate the generalization of this approach to real-world features. For this, a musculoskeletal radiologist annotated a set of cartilage and meniscal lesions from the knee fastMRI dataset, and a classification network was devised to assess the reconstruction of the features. Experimental results show that by introducing robust training to a reconstruction network, the rate of false-negative features (4.8%) in image reconstruction can be reduced. These results are encouraging and highlight the necessity for attention to this problem by the image reconstruction community, as a milestone for the introduction of DL reconstruction in clinical practice. To support further research, we make our annotations and code publicly available at https://github.com/fcaliva/fastMRI_BB_abnormalities_annotation.
Purpose: To introduce two novel learning-based motion artifact removal networks (LEARN) for the estimation of quantitative motion- and $B_0$-inhomogeneity-corrected $R_2^*$ maps from motion-corrupted multi-Gradient-Recalled Echo (mGRE) MRI data. Methods: We train two convolutional neural networks (CNNs) to correct motion artifacts for high-quality estimation of quantitative $B_0$-inhomogeneity-corrected $R_2^*$ maps from mGRE sequences. The first CNN, LEARN-IMG, performs motion correction on complex mGRE images, to enable the subsequent computation of high-quality motion-free quantitative $R_2^*$ (and any other mGRE-enabled) maps using the standard voxel-wise analysis or machine-learning-based analysis. The second CNN, LEARN-BIO, is trained to directly generate motion- and $B_0$-inhomogeneity-corrected quantitative $R_2^*$ maps from motion-corrupted magnitude-only mGRE images by taking advantage of the biophysical model describing the mGRE signal decay. We show that both CNNs trained on synthetic MR images are capable of suppressing motion artifacts while preserving details in the predicted quantitative $R_2^*$ maps. Significant reduction of motion artifacts on experimental in vivo motion-corrupted data has also been achieved by using our trained models. Conclusion: Both LEARN-IMG and LEARN-BIO can enable the computation of high-quality motion- and $B_0$-inhomogeneity-corrected $R_2^*$ maps. LEARN-IMG performs motion correction on mGRE images and relies on the subsequent analysis for the estimation of $R_2^*$ maps, while LEARN-BIO directly performs motion- and $B_0$-inhomogeneity-corrected $R_2^*$ estimation. Both LEARN-IMG and LEARN-BIO jointly process all the available gradient echoes, which enables them to exploit spatial patterns available in the data. The high computational speed of LEARN-BIO is an advantage that can lead to a broader clinical application.
We present a deep network interpolation strategy for accelerated parallel MR image reconstruction. In particular, we examine the network interpolation in parameter space between a source model that is formulated in an unrolled scheme with L1 and SSIM losses and its counterpart that is trained with an adversarial loss. We show that by interpolating between the two different models of the same network structure, the new interpolated network can model a trade-off between perceptual quality and fidelity.
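The parameter-space interpolation described above is mechanically simple: with two trained models of identical architecture, every weight tensor is blended linearly. The sketch below shows that operation on toy weight dictionaries; the parameter names and values are illustrative assumptions.

```python
import numpy as np

def interpolate_weights(theta_src, theta_adv, alpha):
    """Blend two networks of identical structure in parameter space:
    theta(alpha) = alpha * theta_src + (1 - alpha) * theta_adv.
    alpha trades fidelity (source model) against perceptual quality
    (adversarially trained model)."""
    return {k: alpha * theta_src[k] + (1 - alpha) * theta_adv[k]
            for k in theta_src}

# Toy 'state dicts' for two models of the same architecture (illustrative)
theta_l1 = {"conv1.w": np.array([1.0, 2.0]), "conv1.b": np.array([0.0])}
theta_gan = {"conv1.w": np.array([3.0, 0.0]), "conv1.b": np.array([1.0])}

theta_mid = interpolate_weights(theta_l1, theta_gan, alpha=0.5)
assert np.allclose(theta_mid["conv1.w"], [2.0, 1.0])
```

Sweeping `alpha` from 0 to 1 then yields a family of reconstruction networks spanning the perception-fidelity trade-off without any retraining.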
Purpose: To develop a Breast Imaging Reporting and Data System (BI-RADS) breast density deep learning (DL) model in a multi-site setting for synthetic two-dimensional mammography (SM) images derived from digital breast tomosynthesis exams, using full-field digital mammography (FFDM) images and limited SM data. Materials and Methods: A DL model was trained to predict BI-RADS breast density using FFDM images acquired from 2008 to 2017 (Site 1: 57492 patients, 187627 exams, 750752 images) for this retrospective study. The FFDM model was evaluated using SM datasets from two institutions (Site 1: 3842 patients, 3866 exams, 14472 images, acquired from 2016 to 2017; Site 2: 7557 patients, 16283 exams, 63973 images, 2015 to 2019). Each of the three datasets was then split into training, validation, and test datasets. Adaptation methods were investigated to improve performance on the SM datasets, and the effect of dataset size on each adaptation method was considered. Statistical significance was assessed using confidence intervals (CI), estimated by bootstrapping. Results: Without adaptation, the model demonstrated substantial agreement with the original reporting radiologists for all three datasets (Site 1 FFDM: linearly-weighted $\kappa_w$ = 0.75 [95% CI: 0.74, 0.76]; Site 1 SM: $\kappa_w$ = 0.71 [95% CI: 0.64, 0.78]; Site 2 SM: $\kappa_w$ = 0.72 [95% CI: 0.70, 0.75]). With adaptation, performance improved for Site 2 (Site 1: $\kappa_w$ = 0.72 [95% CI: 0.66, 0.79], 0.71 vs 0.72, P = .80; Site 2: $\kappa_w$ = 0.79 [95% CI: 0.76, 0.81], 0.72 vs 0.79, P < .001) using only 500 SM images from that site. Conclusion: A BI-RADS breast density DL model demonstrated strong performance on FFDM and SM images from two institutions without training on SM images, and improved using few SM images.
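The agreement metric reported above, linearly-weighted Cohen's $\kappa_w$, penalizes disagreements in proportion to how many ordinal BI-RADS categories apart the two ratings are. A minimal sketch of the computation (the labels below are made-up toy ratings, not study data):

```python
import numpy as np

def linearly_weighted_kappa(y1, y2, k):
    """Linearly-weighted Cohen's kappa for k ordinal categories 0..k-1.
    kappa_w = 1 - sum(W*O) / sum(W*E), with weights W[i,j] = |i-j|/(k-1)."""
    O = np.zeros((k, k))              # observed joint rating frequencies
    for a, b in zip(y1, y2):
        O[a, b] += 1
    O /= O.sum()
    E = np.outer(O.sum(axis=1), O.sum(axis=0))  # chance-agreement frequencies
    W = np.abs(np.subtract.outer(np.arange(k), np.arange(k))) / (k - 1)
    return 1 - (W * O).sum() / (W * E).sum()

# Perfect agreement over the 4 BI-RADS density categories gives kappa_w = 1
assert linearly_weighted_kappa([0, 1, 2, 3], [0, 1, 2, 3], 4) == 1.0
```

Because the weights grow with category distance, calling an "a" breast "d" costs more than calling it "b", which matches how density disagreements matter clinically.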