
Learning-based Motion Artifact Removal Networks (LEARN) for Quantitative $R_2^*$ Mapping

Published by: Xiaojian Xu
Publication date: 2021
Research field: Electronic engineering
Paper language: English





Purpose: To introduce two novel learning-based motion artifact removal networks (LEARN) for the estimation of quantitative motion- and $B_0$-inhomogeneity-corrected $R_2^*$ maps from motion-corrupted multi-Gradient-Recalled Echo (mGRE) MRI data. Methods: We train two convolutional neural networks (CNNs) to correct motion artifacts for high-quality estimation of quantitative $B_0$-inhomogeneity-corrected $R_2^*$ maps from mGRE sequences. The first CNN, LEARN-IMG, performs motion correction on complex mGRE images, to enable the subsequent computation of high-quality motion-free quantitative $R_2^*$ (and any other mGRE-enabled) maps using the standard voxel-wise analysis or machine-learning-based analysis. The second CNN, LEARN-BIO, is trained to directly generate motion- and $B_0$-inhomogeneity-corrected quantitative $R_2^*$ maps from motion-corrupted magnitude-only mGRE images by taking advantage of the biophysical model describing the mGRE signal decay. We show that both CNNs trained on synthetic MR images are capable of suppressing motion artifacts while preserving details in the predicted quantitative $R_2^*$ maps. Significant reduction of motion artifacts on experimental in vivo motion-corrupted data has also been achieved by using our trained models. Conclusion: Both LEARN-IMG and LEARN-BIO can enable the computation of high-quality motion- and $B_0$-inhomogeneity-corrected $R_2^*$ maps. LEARN-IMG performs motion correction on mGRE images and relies on the subsequent analysis for the estimation of $R_2^*$ maps, while LEARN-BIO directly performs motion- and $B_0$-inhomogeneity-corrected $R_2^*$ estimation. Both LEARN-IMG and LEARN-BIO jointly process all the available gradient echoes, which enables them to exploit spatial patterns available in the data. The high computational speed of LEARN-BIO is an advantage that can lead to a broader clinical application.
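The standard voxel-wise analysis the abstract refers to reduces, in its simplest form, to a mono-exponential fit of the mGRE magnitude decay, $S(TE) = S_0 \, e^{-R_2^* \cdot TE}$. The following log-linear sketch is a minimal illustration of that fit; the function name is invented and the $B_0$-inhomogeneity correction term from the paper's biophysical model is deliberately omitted, so this is not the authors' actual method:

```python
import numpy as np

def fit_r2star(mag, tes):
    """Voxel-wise mono-exponential R2* fit from magnitude mGRE data.

    mag : array of shape (..., n_echoes), magnitude image per echo time
    tes : array of shape (n_echoes,), echo times in seconds
    Returns R2* (in 1/s) from a log-linear least-squares fit of
        ln S(TE) = ln S0 - R2* * TE
    """
    tes = np.asarray(tes, dtype=float)
    logs = np.log(np.maximum(mag, 1e-12))  # guard against log(0)
    # Closed-form least-squares slope of ln S vs TE, all voxels at once
    t_mean = tes.mean()
    s_mean = logs.mean(axis=-1, keepdims=True)
    slope = ((tes - t_mean) * (logs - s_mean)).sum(axis=-1) \
            / ((tes - t_mean) ** 2).sum()
    return -slope  # R2* is the negative of the decay slope
```

Applied to noiseless synthetic decay data, the fit recovers the ground-truth $R_2^*$ exactly; on real mGRE data the $B_0$-inhomogeneity term matters, which is precisely the complication the paper's networks are designed to handle.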




Read also

Functional Magnetic Resonance Imaging (fMRI) is a non-invasive technique for studying brain activity. During an fMRI session, the subject executes a set of tasks (task-related fMRI study) or no tasks (resting-state fMRI), and a sequence of 3-D brain images is obtained for further analysis. Some apparent sources of activation in fMRI data are in fact caused by noise and artifacts, and removing these sources is essential before analyzing the brain activations. Deep Neural Network (DNN) architectures can be used for denoising and artifact removal. The main advantage of DNN models is the automatic learning of abstract and meaningful features from the raw data. This work presents advanced DNN architectures for noise and artifact classification, using both spatial and temporal information in resting-state fMRI sessions. The highest performance is achieved by a voting schema using information from all the domains, with an average accuracy of over 98% and a very good balance between sensitivity and specificity (98.5% and 97.5%, respectively).
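The voting schema mentioned above can, in its simplest form, be a majority vote over the per-domain classifiers' binary labels (noise vs. signal component). The sketch below is illustrative only; the function name and the binary-label interface are assumptions, not details taken from the paper:

```python
import numpy as np

def majority_vote(predictions):
    """Combine per-domain classifier outputs by simple majority vote.

    predictions : (n_classifiers, n_components) array of 0/1 labels,
                  e.g. one row each for spatial and temporal classifiers.
    Returns a (n_components,) array: 1 where a strict majority voted 1.
    """
    predictions = np.asarray(predictions)
    votes = predictions.sum(axis=0)
    # Strict majority: more than half of the classifiers voted 1
    return (votes * 2 > predictions.shape[0]).astype(int)
```

With three classifiers, a component is flagged only when at least two of them agree, which is one way such an ensemble trades off sensitivity against specificity.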
An approach to reduce motion artifacts in Quantitative Susceptibility Mapping using deep learning is proposed. We use an affine motion model with randomly created motion profiles to simulate motion-corrupted QSM images. Each simulated QSM image is paired with its motion-free reference to train a neural network using supervised learning. The trained network is tested on unseen simulated motion-corrupted QSM images, in healthy volunteers, and in Parkinson's disease patients. The results show that motion artifacts, such as ringing and ghosting, were successfully suppressed.
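A translation of the object in image space corresponds to a linear phase ramp in k-space, so rigid-translation motion can be simulated by applying different random phase ramps to different blocks of phase-encode lines. The sketch below is a simplified rigid-translation stand-in for the affine model described above; all names and parameters are illustrative:

```python
import numpy as np

def simulate_translational_motion(img, max_shift=2.0, n_states=4, seed=0):
    """Toy motion corruption: successive blocks of phase-encode lines
    see the object at different random in-plane translations.

    A shift (dy, dx) in image space multiplies k-space by the ramp
    exp(-2*pi*i * (ky*dy + kx*dx)), so each block gets its own ramp.
    """
    rng = np.random.default_rng(seed)
    ny, nx = img.shape
    ksp = np.fft.fftshift(np.fft.fft2(img))
    ky = np.fft.fftshift(np.fft.fftfreq(ny))[:, None]  # cycles/pixel
    kx = np.fft.fftshift(np.fft.fftfreq(nx))[None, :]
    corrupted = ksp.copy()
    bounds = np.linspace(0, ny, n_states + 1).astype(int)
    for b0, b1 in zip(bounds[:-1], bounds[1:]):
        dy, dx = rng.uniform(-max_shift, max_shift, size=2)
        ramp = np.exp(-2j * np.pi * (ky * dy + kx * dx))
        corrupted[b0:b1] = ksp[b0:b1] * ramp[b0:b1]
    return np.abs(np.fft.ifft2(np.fft.ifftshift(corrupted)))
```

Pairing each corrupted output with its motion-free input yields exactly the kind of supervised training pair the abstract describes, albeit for translations only rather than full affine motion.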
A learning-based posterior distribution estimation method, Probabilistic Dipole Inversion (PDI), is proposed to solve the quantitative susceptibility mapping (QSM) inverse problem in MRI with uncertainty estimation. A deep convolutional neural network (CNN) is used to represent a multivariate Gaussian distribution as the approximate posterior distribution of susceptibility given the measured input field. In PDI, the CNN is first trained on a labeled dataset of healthy subjects using the Gaussian posterior likelihood loss employed in Bayesian deep learning. When tested on a new dataset without labels, PDI updates the pre-trained network in an unsupervised fashion by minimizing the KL divergence between the approximate posterior represented by the CNN and the true posterior given the likelihood from the known physical model and the prior distribution. In our experiments, PDI provides uncertainty estimates beyond the conventional MAP approach, while addressing the potential discrepancy of the CNN when the test data deviate from the training dataset.
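The supervised stage described above amounts to minimizing the negative log-likelihood of a diagonal Gaussian whose mean and variance maps the CNN predicts. A minimal version of that loss is sketched below; the function name and the log-variance parameterization are common conventions assumed here, not details from the paper:

```python
import numpy as np

def gaussian_nll(mu, log_var, target):
    """Mean negative log-likelihood of a diagonal Gaussian posterior
    (up to the constant 0.5*log(2*pi) term, which does not affect
    optimization).

    mu, log_var : predicted per-voxel mean and log-variance maps
    target      : ground-truth susceptibility map
    """
    return 0.5 * (log_var + (target - mu) ** 2 / np.exp(log_var)).mean()
```

Predicting the log-variance rather than the variance keeps the predicted scale strictly positive without an explicit constraint, which is why this parameterization is standard in Bayesian deep learning.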
Eye movements, blinking, and other motion during the acquisition of optical coherence tomography (OCT) can lead to artifacts when the data are processed into OCT angiography (OCTA) images. Affected scans emerge as high-intensity (white) or missing (black) regions, resulting in lost information. The aim of this research is to fill these gaps using a deep generative model for OCT-to-OCTA image translation that relies on a single intact OCT scan. To this end, a U-Net is trained to extract the angiographic information from OCT patches. At inference, a detection algorithm finds outlier OCTA scans based on their surroundings, which are then replaced by the output of the trained network. We show that generative models can reconstruct the missing scans. The augmented volumes could then be used for 3-D segmentation or to increase the diagnostic value.
Anique Akhtar, Wen Gao, Li Li (2021)
Photo-realistic point cloud capture and transmission are the fundamental enablers for immersive visual communication. The coding process for dynamic point clouds, especially video-based point cloud compression (V-PCC) developed by the MPEG standardization group, now delivers state-of-the-art compression efficiency. V-PCC is based on projecting point cloud patches onto 2D planes and encoding the sequence as 2D texture and geometry patch sequences. However, the resulting quantization errors from coding can introduce compression artifacts, which can significantly degrade the quality of experience (QoE). In this work, we developed a novel out-of-the-loop point cloud geometry artifact removal solution that can significantly improve reconstruction quality without additional bandwidth cost. Our framework consists of a point cloud sampling scheme, an artifact removal network, and an aggregation scheme. The sampling scheme employs cube-based neighborhood patch extraction to divide the point cloud into patches. The geometry artifact removal network then processes these patches to obtain artifact-removed patches, which are merged by the aggregation scheme to obtain the final artifact-removed point cloud. We employ 3D deep convolutional feature learning for geometry artifact removal that jointly recovers both the quantization direction and the quantization noise level by exploiting projection and quantization priors. The simulation results demonstrate that the proposed method is highly effective and can considerably improve the quality of the reconstructed point cloud.
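The cube-based neighborhood patch extraction mentioned above can be sketched as binning points into a regular grid of cubes and collecting the points that fall in each cube. The function name and the dictionary output below are illustrative conveniences, not the actual data layout used by V-PCC or the paper:

```python
import numpy as np

def cube_patches(points, cube_size=1.0):
    """Group a point cloud into cube-shaped neighborhood patches.

    points    : (N, 3) array of point coordinates
    cube_size : edge length of each cube
    Returns a dict mapping integer cube indices (i, j, k) to the
    (M, 3) subset of points inside that cube.
    """
    idx = np.floor(points / cube_size).astype(int)
    patches = {}
    for key, pt in zip(map(tuple, idx), points):
        patches.setdefault(key, []).append(pt)
    return {k: np.asarray(v) for k, v in patches.items()}
```

Each cube's points would then be fed to the artifact removal network independently, and the per-cube outputs merged back, mirroring the sample-process-aggregate pipeline the abstract describes.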