
Unsupervised-learning-based method for chest MRI-CT transformation using structure constrained unsupervised generative attention networks

Published by: Hidetoshi Matsuo
Publication date: 2021
Research field: Informatics Engineering
Language: English
Author: Hidetoshi Matsuo





The integrated positron emission tomography/magnetic resonance imaging (PET/MRI) scanner facilitates the simultaneous acquisition of metabolic information via PET and morphological information with high soft-tissue contrast via MRI. Although PET/MRI facilitates the capture of high-accuracy fusion images, its major drawback is the difficulty of performing attenuation correction, which is necessary for quantitative PET evaluation. Combined PET/MRI scanning requires attenuation-correction maps to be generated from MRI, because there is no direct relationship between gamma-ray attenuation and MRI signal intensity. While MRI-based bone-tissue segmentation can be readily performed for the head and pelvis regions, accurate bone segmentation via chest CT generation remains challenging owing to the respiratory and cardiac motion in the chest, its anatomically complicated structure, and its relatively thin bone cortex. This paper presents a method that minimises anatomical structural changes without human annotation by adding a structural constraint based on the modality-independent neighbourhood descriptor (MIND) to a generative adversarial network (GAN) that can transform unpaired images. The results revealed that the proposed U-GAT-IT + MIND approach outperformed all competing approaches. These findings suggest that clinically acceptable CT images can be synthesised from chest MRI without human annotation while minimising changes in the anatomical structure.
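
To make the MIND-based structural constraint concrete, below is a minimal 2D PyTorch sketch of a simplified MIND term, assuming single-channel images and a box-filter patch weighting; this illustrates the descriptor's idea, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def mind_descriptor(img, radius=2, eps=1e-8):
    """Simplified 2D MIND: self-similarity of local patches under
    4-neighbourhood shifts, normalised by a local variance estimate.
    Assumes img has shape (B, 1, H, W)."""
    shifts = [(-radius, 0), (radius, 0), (0, -radius), (0, radius)]
    k = 2 * radius + 1
    # box filter stands in for the Gaussian patch weighting of the paper
    box = torch.ones(1, 1, k, k, device=img.device) / (k * k)
    dists = []
    for dy, dx in shifts:
        shifted = torch.roll(img, shifts=(dy, dx), dims=(2, 3))
        dists.append(F.conv2d((img - shifted) ** 2, box, padding=radius))
    dist = torch.cat(dists, dim=1)                 # (B, 4, H, W)
    var = dist.mean(dim=1, keepdim=True) + eps     # variance estimate V(I, x)
    mind = torch.exp(-dist / var)
    return mind / (mind.max(dim=1, keepdim=True).values + eps)

def mind_loss(real_mri, fake_ct):
    """L1 distance between MIND descriptors of the input MRI and the
    synthesised CT; added to the GAN objective as a structural constraint."""
    return F.l1_loss(mind_descriptor(real_mri), mind_descriptor(fake_ct))
```

Because MIND describes local structure rather than intensity, the same descriptor can be compared across modalities, which is what lets the constraint penalise anatomical changes between the MRI input and the generated CT.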




Read also

Deep learning-based image reconstruction methods have achieved promising results across multiple MRI applications. However, most approaches require large-scale fully-sampled ground truth data for supervised training. Acquiring fully-sampled data is often either difficult or impossible, particularly for dynamic contrast enhancement (DCE), 3D cardiac cine, and 4D flow. We present a deep learning framework for MRI reconstruction without any fully-sampled data using generative adversarial networks. We test the proposed method in two scenarios: retrospectively undersampled fast spin echo knee exams and prospectively undersampled abdominal DCE. The method recovers more anatomical structure compared to conventional methods.
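
The abstract does not spell out the training objective, so the following is only a hedged sketch of one common way to train a reconstruction GAN without fully-sampled references: keep both "real" and "fake" examples in measurement space by re-undersampling the generator's estimate. All names (`generator`, `mask_a`, `mask_b`) are illustrative assumptions.

```python
import torch

def fake_measurement(generator, measured_kspace, mask_a, mask_b):
    """Hypothetical sketch: reconstruct an image from undersampled k-space,
    then re-undersample the estimate with a second mask, so the
    discriminator only ever compares measurable (undersampled) k-space."""
    zero_filled = torch.fft.ifft2(measured_kspace * mask_a).abs()
    recon = generator(zero_filled)                  # image-domain estimate
    fake_kspace = torch.fft.fft2(recon) * mask_b    # simulated measurement
    return fake_kspace  # discriminated against real data sampled with mask_b
```
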
Stereo image pairs encode 3D scene cues into stereo correspondences between the left and right images. To exploit 3D cues within stereo images, recent CNN-based methods commonly use cost volume techniques to capture stereo correspondence over large disparities. However, since disparities can vary significantly for stereo cameras with different baselines, focal lengths, and resolutions, the fixed maximum disparity used in cost volume techniques hinders them from handling stereo image pairs with large disparity variations. In this paper, we propose a generic parallax-attention mechanism (PAM) to capture stereo correspondence regardless of disparity variations. Our PAM integrates epipolar constraints with an attention mechanism to calculate feature similarities along the epipolar line and thereby capture stereo correspondence. Based on our PAM, we propose a parallax-attention stereo matching network (PASMnet) and a parallax-attention stereo image super-resolution network (PASSRnet) for the stereo matching and stereo image super-resolution tasks. Moreover, we introduce a new large-scale dataset named Flickr1024 for stereo image super-resolution. Experimental results show that our PAM is generic and can effectively learn stereo correspondence under large disparity variations in an unsupervised manner. Comparative results show that our PASMnet and PASSRnet achieve state-of-the-art performance.
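
The heart of PAM is a feature similarity computed along each epipolar line, which for rectified stereo pairs is simply each image row. A minimal PyTorch sketch of such row-wise attention follows; the 1x1 projection layers and class name are assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class ParallaxAttention(nn.Module):
    """Sketch of epipolar attention: every pixel in the left feature map
    attends to all pixels on the same row of the right feature map,
    so no maximum disparity has to be fixed in advance."""
    def __init__(self, channels):
        super().__init__()
        self.query = nn.Conv2d(channels, channels, 1)
        self.key = nn.Conv2d(channels, channels, 1)

    def forward(self, feat_left, feat_right):
        b, c, h, w = feat_left.shape
        q = self.query(feat_left).permute(0, 2, 3, 1)    # (B, H, W, C)
        k = self.key(feat_right).permute(0, 2, 1, 3)     # (B, H, C, W)
        attn = torch.softmax(q @ k, dim=-1)              # (B, H, W, W)
        v = feat_right.permute(0, 2, 3, 1)               # (B, H, W, C)
        out = (attn @ v).permute(0, 3, 1, 2)             # (B, C, H, W)
        return out, attn  # attn encodes soft correspondence per row
```
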
This paper introduces a novel approach for unsupervised object co-localization using Generative Adversarial Networks (GANs). A GAN is a powerful tool that can implicitly learn unknown data distributions in an unsupervised manner. From the observation that the GAN discriminator is highly influenced by pixels where objects appear, we analyze the internal layers of the discriminator and visualize the activated pixels. Our important finding is that high image diversity in a GAN, a main goal of GAN research, is ironically disadvantageous for object localization, because such discriminators focus not only on the target object but also on other objects, such as background objects. Based on extensive evaluations and experimental studies, we show that image diversity and localization performance are negatively correlated. In addition, our approach achieves meaningful accuracy for unsupervised object co-localization on publicly available benchmark datasets, comparable even to state-of-the-art weakly supervised approaches.
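
A hedged sketch of how such discriminator activations can be turned into a localization map, using a PyTorch forward hook; the choice of layer and the channel-mean/min-max normalisation are illustrative assumptions rather than the paper's exact procedure.

```python
import torch
import torch.nn.functional as F

def discriminator_heatmap(discriminator, layer, images):
    """Capture activations of one internal discriminator layer via a
    forward hook and average them over channels into a localisation map."""
    feats = {}
    handle = layer.register_forward_hook(
        lambda module, inp, out: feats.update(act=out))
    with torch.no_grad():
        discriminator(images)
    handle.remove()
    heat = feats['act'].mean(dim=1, keepdim=True)         # (B, 1, h, w)
    heat = F.interpolate(heat, size=images.shape[-2:],
                         mode='bilinear', align_corners=False)
    # min-max normalise per image for visualisation
    flat = heat.flatten(1)
    lo = flat.min(1).values.view(-1, 1, 1, 1)
    hi = flat.max(1).values.view(-1, 1, 1, 1)
    return (heat - lo) / (hi - lo + 1e-8)
```
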
Self-training-based unsupervised domain adaptation (UDA) has shown great potential to address the problem of domain shift when applying a deep learning model trained in a source domain to unlabeled target domains. However, while self-training UDA has demonstrated its effectiveness on discriminative tasks, such as classification and segmentation, via reliable pseudo-label selection based on the softmax discrete histogram, self-training UDA for generative tasks, such as image synthesis, has not been fully investigated. In this work, we propose a novel generative self-training (GST) UDA framework with continuous value prediction and a regression objective for cross-domain image synthesis. Specifically, we propose to filter the pseudo-labels with an uncertainty mask and to quantify the predictive confidence of generated images with practical variational Bayes learning. Fast test-time adaptation is achieved by a round-based alternating optimization scheme. We validated our framework on the tagged-to-cine magnetic resonance imaging (MRI) synthesis problem, where datasets in the source and target domains were acquired from different scanners or centers. Extensive validations were carried out to verify our framework against popular adversarial-training UDA methods. Results show that our GST, given tagged MRI of test subjects in new target domains, improved the synthesis quality by a large margin compared with the adversarial-training UDA methods.
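
One plausible realisation of the uncertainty mask is Monte-Carlo sampling of a generator that contains dropout layers; in the sketch below the sample count and variance threshold are illustrative assumptions, not values from the paper.

```python
import torch

def uncertainty_masked_pseudo_label(generator, source, n_samples=10,
                                    thresh=0.05):
    """Monte-Carlo estimate of predictive variance: keep dropout active at
    test time, sample several syntheses, and mask out pixels whose
    variance exceeds a threshold before using the mean as a pseudo-label."""
    generator.train()  # keep dropout layers stochastic
    with torch.no_grad():
        samples = torch.stack([generator(source) for _ in range(n_samples)])
    mean, var = samples.mean(dim=0), samples.var(dim=0)
    mask = (var < thresh).float()  # 1 = confident pixel, 0 = filtered out
    return mean, mask  # regression target and its confidence mask
```
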
Deep learning has achieved good success in cardiac magnetic resonance imaging (MRI) reconstruction, in which convolutional neural networks (CNNs) learn a mapping from the undersampled k-space to the fully sampled images. Although these deep learning methods can improve reconstruction quality compared with iterative methods, without requiring complex parameter selection or lengthy reconstruction times, the following issues still need to be addressed: 1) all these methods are based on big data and require a large amount of fully sampled MRI data, which is always difficult to obtain for cardiac MRI; and 2) the effect of coil correlation on reconstruction in deep learning methods for dynamic MR imaging has never been studied. In this paper, we propose an unsupervised deep learning method for multi-coil cine MRI via a time-interleaved sampling strategy. Specifically, a time-interleaved acquisition scheme is utilized to build a set of fully encoded reference data by directly merging the k-space data of adjacent time frames. These fully encoded data can then be used to train a parallel network for reconstructing the images of each coil separately. Finally, the images from all coils are combined via a CNN to implicitly explore the correlations between coils. Comparisons with the classic k-t FOCUSS, k-t SLR, L+S, and KLR methods on in vivo datasets show that our method achieves improved reconstruction results in an extremely short amount of time.
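
The data-construction step, merging the k-space of adjacent time frames into fully encoded references, can be sketched as follows; the tensor layout and window size are assumptions for illustration.

```python
import torch

def merge_adjacent_frames(kspace, masks, window=3):
    """Merge the sampled k-space lines of `window` adjacent time frames.
    With time-interleaved sampling, neighbouring frames cover
    complementary lines, so the merge is (nearly) fully encoded.
    kspace: (T, C, H, W) complex; masks: (T, 1, H, W) in {0, 1}."""
    T = kspace.shape[0]
    half = window // 2
    merged = []
    for t in range(T):
        idx = [min(max(t + d, 0), T - 1) for d in range(-half, half + 1)]
        acc = sum(kspace[i] * masks[i] for i in idx)  # sum of sampled lines
        cnt = sum(masks[i] for i in idx)              # times each line was sampled
        merged.append(acc / cnt.clamp(min=1))         # average overlapping lines
    return torch.stack(merged)                        # (T, C, H, W) reference data
```
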