Purpose: Correcting or reducing the effects of voxel intensity non-uniformity (INU) within a given tissue type is a crucial issue for quantitative MRI image analysis in daily clinical practice. In this study, we present a deep learning-based approach for MRI INU correction. Method: We developed a residual cycle generative adversarial network (res-cycle GAN), which integrates the residual block concept into a cycle-consistent GAN (cycle-GAN). In the cycle-GAN, an inverse transformation between the INU-uncorrected and INU-corrected MRI images constrains the model by forcing it to generate both an INU-corrected MRI and a synthetic corrected MRI. A fully convolutional neural network integrating residual blocks was used as the generator of the cycle-GAN to enhance the end-to-end transformation from raw MRI to INU-corrected MRI. A cohort of 30 abdominal patients with T1-weighted MR images was studied; for each patient, the INU-corrupted image and its correction by N4ITK, a clinically established and commonly used method, formed an image pair for evaluating the proposed res-cycle GAN-based INU correction algorithm. Quantitative comparisons were made between the proposed method and other approaches. Result: Our res-cycle GAN-based method achieved higher accuracy and better tissue uniformity than the other algorithms. Moreover, once the model is well trained, our approach automatically generates corrected MR images within a few minutes, eliminating the need for manual parameter setting. Conclusion: In this study, a deep learning-based automatic INU correction method for MRI, namely res-cycle GAN, has been investigated. The results show that learning-based methods can achieve promising accuracy while greatly speeding up correction by avoiding the unintuitive parameter-tuning process of N4ITK.
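The cycle-consistency constraint described above can be illustrated with a minimal NumPy sketch. The two generators here are toy invertible linear maps standing in for the paper's residual CNN generators; the function names `G`, `F`, and `cycle_consistency_loss` are illustrative, not from the paper.

```python
import numpy as np

# Toy stand-ins for the two generators: G maps a raw MR patch to an
# INU-corrected patch, F maps corrected back to raw. The real res-cycle
# GAN generators are fully convolutional networks with residual blocks.
def G(x):  # hypothetical forward generator
    return 2.0 * x + 1.0

def F(y):  # hypothetical inverse generator (exact inverse of G, for clarity)
    return (y - 1.0) / 2.0

def cycle_consistency_loss(x, y):
    """L1 cycle loss: x -> G(x) -> F(G(x)) should return to x, and
    y -> F(y) -> G(F(y)) should return to y."""
    return np.mean(np.abs(F(G(x)) - x)) + np.mean(np.abs(G(F(y)) - y))

raw = np.random.rand(4, 4)       # stand-in raw MR patch
corrected = G(raw)               # stand-in INU-corrected patch
loss = cycle_consistency_loss(raw, corrected)
print(loss)  # ~0 here, because F is the exact inverse of G
```

In training, this loss is minimized jointly with the adversarial losses, which is what forces the paired forward and inverse transformations mentioned in the abstract.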
This chapter reviews recent developments in generative adversarial network (GAN)-based methods for medical and biomedical image synthesis tasks. These methods are classified into conditional GAN and cycle-GAN categories according to their network architecture designs. For each category, a literature survey is given that discusses the network architecture designs, highlights important contributions, and identifies specific challenges.
The MR-Linac combines an MR scanner with a radiotherapy linear accelerator (Linac) and holds the promise of increasing the precision of radiotherapy treatments through MR-guided radiotherapy: monitoring motion during treatment with MRI and adjusting the radiotherapy plan accordingly. Optimal MR-guidance for respiratory motion during radiotherapy requires MR-based 3D motion estimation with a latency of 200-500 ms. This is still challenging, since typical methods rely on MR images and are therefore limited by the 3D MR imaging latency. In this work, we present a method that performs non-rigid 3D respiratory motion estimation with 170 ms latency, including both acquisition and reconstruction. The proposed method, called real-time low-rank MR-MOTUS, reconstructs motion-fields directly from k-space data and leverages an explicit low-rank decomposition of motion-fields to split the large-scale 3D+t motion-field reconstruction problem posed in our previous work into two parts: (I) a medium-scale offline preparation phase and (II) a small-scale online inference phase that exploits the results of the offline phase for real-time computations. The method was validated on free-breathing data from five volunteers, acquired with a 1.5 T Elekta Unity MR-Linac. Results show that the reconstructed 3D motion-fields are anatomically plausible and highly correlated with a self-navigation motion surrogate (R = 0.975 ± 0.011), and that they can be reconstructed with a total latency of 170 ms, which is sufficient for real-time MR-guided abdominal radiotherapy.
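The offline/online split enabled by the low-rank decomposition can be sketched as follows. This is a generic NumPy illustration under the assumption that motion-fields are factored as a spatial basis times small temporal coefficient vectors; the sizes, the synthetic data, and the use of SVD/least-squares here are illustrative, not the paper's actual reconstruction from k-space.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "motion-field" matrix: rows = flattened 3D displacement
# components, columns = time points. Sizes are illustrative only.
n_space, n_time, rank = 300, 50, 3
Phi_true = rng.standard_normal((n_space, rank))
Psi_true = rng.standard_normal((n_time, rank))
D = Phi_true @ Psi_true.T                       # exactly rank-3 by construction

# (I) Offline preparation phase: learn a low-rank spatial basis Phi
# from training data (here via a truncated SVD of D).
U, s, Vt = np.linalg.svd(D, full_matrices=False)
Phi = U[:, :rank] * s[:rank]                    # spatial components (large, fixed)
Psi = Vt[:rank].T                               # temporal coefficients (small)

# (II) Online inference phase: for each new time point, only the tiny
# rank-sized coefficient vector is estimated, so the per-frame problem
# is small enough for real-time latency budgets.
d_new = Phi_true @ rng.standard_normal(rank)    # new frame in the learned subspace
psi_new, *_ = np.linalg.lstsq(Phi, d_new, rcond=None)
d_reconstructed = Phi @ psi_new

print(np.allclose(d_reconstructed, d_new))      # True: d_new lies in span(Phi)
```

The key point is that the expensive spatial factor is computed once offline, while the online step solves only a rank-sized problem per frame.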
The susceptibility-based positive-contrast MR technique was applied to estimate arbitrary magnetic susceptibility distributions of metallic devices using a kernel deconvolution algorithm with regularized L1 minimization. Previously, the first-order primal-dual (PD) algorithm provided faster reconstruction for solving the L1 minimization than other methods. Here, we propose to accelerate the PD algorithm for positive-contrast imaging using the multi-core, multi-thread capabilities of graphics processing units (GPUs). Experimental results showed that the GPU-based PD algorithm achieved accuracy comparable to the previous approach in depicting metallic interventional devices in positive-contrast imaging, with less computation time; the GPU-based PD approach was 4-15 times faster than the previous CPU-based scheme.
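For readers unfamiliar with the first-order primal-dual scheme, a minimal CPU/NumPy sketch of a Chambolle-Pock-style iteration for a regularized L1 problem is given below. This is a generic illustration, not the paper's kernel-deconvolution formulation; a GPU implementation would run the same array operations on the device.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def primal_dual_l1(A, b, lam, n_iter=2000):
    """First-order primal-dual sketch for
        min_x 0.5 * ||A x - b||^2 + lam * ||x||_1
    following the Chambolle-Pock update order: dual step, primal step,
    over-relaxation. Step sizes satisfy tau * sigma * ||A||^2 < 1."""
    L = np.linalg.norm(A, 2)
    tau = sigma = 0.99 / L
    x = np.zeros(A.shape[1])
    x_bar = x.copy()
    y = np.zeros(A.shape[0])
    for _ in range(n_iter):
        # Dual update: prox of the conjugate of the quadratic data term
        y = (y + sigma * (A @ x_bar - b)) / (1.0 + sigma)
        # Primal update: gradient step followed by the L1 prox
        x_new = soft_threshold(x - tau * (A.T @ y), tau * lam)
        x_bar = 2.0 * x_new - x
        x = x_new
    return x

# Sanity check: with A = I the minimizer is soft_threshold(b, lam).
b = np.array([3.0, -0.5, 1.2, 0.1])
x = primal_dual_l1(np.eye(4), b, lam=1.0)
print(np.round(x, 3))
```

Each iteration is dominated by matrix-vector products and elementwise operations, which is exactly the workload that parallelizes well across GPU threads, as exploited in the abstract.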
Structure-preserved denoising of 3D magnetic resonance imaging (MRI) images is a critical step in medical image analysis. Over the past few years, many algorithms with impressive performance have been proposed. In this paper, inspired by the idea of deep learning, we introduce an MRI denoising method based on the residual encoder-decoder Wasserstein generative adversarial network (RED-WGAN). Specifically, to exploit the structural similarity between neighboring slices, a 3D configuration is used as the basic processing unit. Residual autoencoders combined with deconvolution operations are introduced into the generator network. Furthermore, to alleviate the oversmoothing caused by the traditional mean squared error (MSE) loss function, a perceptual similarity term, implemented by computing distances in the feature space of a pretrained VGG-19 network, is combined with the MSE and adversarial losses to form the new loss function. Extensive experiments were conducted to assess the performance of the proposed method. The experimental results show that the proposed RED-WGAN outperforms several state-of-the-art methods on both simulated and real clinical data. In particular, our method demonstrates strong abilities in both noise suppression and structure preservation.
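The composite loss described above (MSE + perceptual + adversarial) can be sketched in a few lines of NumPy. The feature extractor here is a simple gradient transform standing in for VGG-19 feature maps, and the function names and weights are illustrative assumptions, not the paper's values.

```python
import numpy as np

# Hypothetical stand-in for VGG-19 feature maps: any fixed transform
# illustrates the idea; here we use spatial finite differences.
def feature_map(img):
    return np.stack([np.diff(img, axis=0, prepend=0.0),
                     np.diff(img, axis=1, prepend=0.0)])

def composite_loss(denoised, clean, d_score,
                   w_mse=1.0, w_perc=0.1, w_adv=0.01):
    """Weighted sum of MSE, perceptual, and adversarial terms.
    `d_score` is the critic's score for the denoised image; in a WGAN
    the generator maximizes it, hence the negative sign. The weights
    w_* are placeholder values."""
    mse = np.mean((denoised - clean) ** 2)
    perceptual = np.mean((feature_map(denoised) - feature_map(clean)) ** 2)
    adversarial = -d_score
    return w_mse * mse + w_perc * perceptual + w_adv * adversarial

img = np.random.rand(8, 8)
loss = composite_loss(img, img, d_score=0.0)
print(loss)  # 0.0 for a perfect reconstruction with a neutral critic score
```

The perceptual term is what penalizes structural (feature-space) differences that plain pixelwise MSE tends to smooth away.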
Arterial spin labeling (ASL) magnetic resonance imaging (MRI) is a powerful imaging technology that can measure cerebral blood flow (CBF) quantitatively. However, since only a small portion of blood is labeled compared to the whole tissue volume, conventional ASL suffers from low signal-to-noise ratio (SNR), poor spatial resolution, and long acquisition time. In this paper, we propose a super-resolution method based on a multi-scale generative adversarial network (GAN) trained in an unsupervised manner. The network needs only the low-resolution (LR) ASL image itself for training, with the T1-weighted image as an anatomical prior; no training pairs or pre-training are needed. A low-pass-filter-guided term is added as an additional loss to suppress noise interference from the LR ASL image. After training, the super-resolution (SR) image is generated by supplying the upsampled LR ASL image and the corresponding T1-weighted image to the generator of the last layer. Performance of the proposed method was evaluated by comparing the peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM), using the normal-resolution (NR) ASL image (5.5 min acquisition) and the high-resolution (HR) ASL image (44 min acquisition) as the ground truth. Compared to nearest-neighbor, linear, and spline interpolation, the proposed method recovers more detailed structural information, visibly reduces image noise, and achieves the highest PSNR and SSIM when using the HR ASL image as the ground truth.
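The two evaluation metrics used above are standard and easy to state precisely. Below is a NumPy sketch of PSNR and a single-window SSIM; note that the commonly reported SSIM averages this quantity over local Gaussian windows (e.g. as in scikit-image), so this global version is a simplification.

```python
import numpy as np

def psnr(ref, test, data_range=1.0):
    """Peak signal-to-noise ratio in dB against a reference image."""
    mse = np.mean((ref - test) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

def global_ssim(x, y, data_range=1.0):
    """SSIM computed over one global window; the standard metric
    averages this over local windows across the image."""
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = np.mean((x - mx) * (y - my))
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

ref = np.zeros((8, 8))
print(psnr(ref, ref + 0.1))     # 20.0 dB for a uniform 0.1 error
print(global_ssim(ref, ref))    # 1.0 for identical images
```

Higher PSNR and an SSIM closer to 1 both indicate closer agreement with the ground-truth image, which is how the interpolation baselines and the proposed method are ranked in the abstract.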