
An Adversarial Learning Based Approach for Unknown View Tomographic Reconstruction

Published by Mona Zehni
Publication date: 2021
Paper language: English


The goal of 2D tomographic reconstruction is to recover an image given its projection lines from various views. It is often presumed that the projection angles associated with the projection lines are known in advance. Under certain situations, however, these angles are known only approximately or are completely unknown, and it becomes far more challenging to reconstruct the image from a collection of random projection lines. We propose an adversarial learning based approach to recover the image and the projection angle distribution by matching the empirical distribution of the measurements with the generated data. Fitting the distributions is achieved by solving a min-max game between a generator and a critic based on the Wasserstein generative adversarial network (WGAN) structure. To accommodate the update of the projection angle distribution through gradient backpropagation, we approximate the loss using the Gumbel-Softmax reparameterization of samples from discrete distributions. Our theoretical analysis verifies the unique recovery of the image and the projection distribution, up to a rotation and reflection, upon convergence. Our extensive numerical experiments showcase the potential of our method to accurately recover the image and the projection angle distribution under noise contamination.
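To make the Gumbel-Softmax reparameterization mentioned above concrete, the NumPy sketch below (a generic illustration with hypothetical names, not the authors' code) draws a relaxed one-hot sample from a categorical distribution over candidate projection angles, so gradients can flow back to the distribution's logits:

```python
import numpy as np

def gumbel_softmax_sample(logits, tau=0.5, rng=None):
    """Draw a differentiable (relaxed) one-hot sample from a categorical
    distribution parameterized by `logits`, via the Gumbel-Softmax trick."""
    rng = np.random.default_rng() if rng is None else rng
    # Gumbel(0, 1) noise: -log(-log(U)), U ~ Uniform(0, 1)
    gumbel = -np.log(-np.log(rng.uniform(1e-12, 1.0, size=logits.shape)))
    y = (logits + gumbel) / tau          # temperature tau controls relaxation
    y = np.exp(y - y.max())              # numerically stable softmax
    return y / y.sum()

# Example: relaxed sample over 8 candidate projection angles in [0, pi)
logits = np.zeros(8)                     # start from a uniform angle distribution
w = gumbel_softmax_sample(logits, tau=0.5)
angles = np.linspace(0, np.pi, 8, endpoint=False)
soft_angle = float(w @ angles)           # differentiable surrogate "angle"
```

As tau decreases, the weights `w` approach a hard one-hot sample; larger tau gives smoother gradients at the cost of a looser approximation.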


Read also

67 - Mona Zehni, Zhizhen Zhao 2021
Tomographic reconstruction recovers an unknown image given its projections from different angles. State-of-the-art methods addressing this problem assume the angles associated with the projections are known a priori. Given this knowledge, the reconstruction process is straightforward, as it can be formulated as a convex problem. Here, we tackle a more challenging setting: 1) the projection angles are unknown, 2) they are drawn from an unknown probability distribution. In this setup our goal is to recover the image and the projection angle distribution using an unsupervised adversarial learning approach. For this purpose, we formulate the problem as distribution matching between the real projection lines and the ones generated from the estimated image and projection distribution. This is then solved by reaching the equilibrium in a min-max game between a generator and a discriminator. Our novel contribution is to recover the unknown projection distribution and the image simultaneously using adversarial learning. To accommodate this, we use the Gumbel-Softmax approximation of samples from a categorical distribution to express the generator's loss as a function of the unknown image and the projection distribution. Our approach can be generalized to different inverse problems. Our simulation results reveal the ability of our method to successfully recover the image and the projection distribution in various settings.
Limited-angle tomography of strongly scattering quasi-transparent objects is a challenging, highly ill-posed problem with practical implications in medical and biological imaging, manufacturing, automation, and environmental and food security. Regularizing priors are necessary to reduce artifacts by improving the condition of such problems. Recently, it was shown that one effective way to learn the priors for strongly scattering yet highly structured 3D objects, e.g. layered and Manhattan, is by a static neural network [Goy et al, Proc. Natl. Acad. Sci. 116, 19848-19856 (2019)]. Here, we present a radically different approach where the collection of raw images from multiple angles is viewed analogously to a dynamical system driven by the object-dependent forward scattering operator. The sequence index in angle of illumination plays the role of discrete time in the dynamical system analogy. Thus, the imaging problem turns into a problem of nonlinear system identification, which also suggests dynamical learning as a better fit to regularize the reconstructions. We devised a recurrent neural network (RNN) architecture with a novel split-convolutional gated recurrent unit (SC-GRU) as the fundamental building block. Through comprehensive comparison of several quantitative metrics, we show that the dynamic method improves upon previous static approaches with fewer artifacts and better overall reconstruction fidelity.
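The split-convolutional GRU in that paper is specialized, but the recurrent update it builds on can be sketched generically. In the plain (fully connected) GRU step below, with hypothetical weight names, the hidden state `h` plays the role of the evolving reconstruction estimate, driven by each new raw image `u` in the angle-indexed sequence, mirroring the dynamical-system analogy:

```python
import numpy as np

def gru_step(h, u, Wz, Uz, Wr, Ur, Wh, Uh):
    """One gated-recurrent-unit update: hidden state h (the evolving
    reconstruction) is driven by the next raw image u (the next
    illumination angle in the sequence)."""
    sigmoid = lambda a: 1.0 / (1.0 + np.exp(-a))
    z = sigmoid(Wz @ u + Uz @ h)                 # update gate
    r = sigmoid(Wr @ u + Ur @ h)                 # reset gate
    h_tilde = np.tanh(Wh @ u + Uh @ (r * h))     # candidate state
    return (1.0 - z) * h + z * h_tilde           # convex blend of old and new

# Toy rollout over five "illumination angles"
rng = np.random.default_rng(1)
d = 4                                            # toy state/input size
Ws = [0.5 * rng.standard_normal((d, d)) for _ in range(6)]
h = np.zeros(d)
for _ in range(5):
    u = rng.standard_normal(d)                   # stand-in for a raw image
    h = gru_step(h, u, *Ws)
```

Because each step blends the previous state with a tanh-bounded candidate, the state stays bounded, one reason gated recurrences are a stable fit for long angle sequences.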
Tomographic image reconstruction with deep learning is an emerging field, but a recent landmark study reveals that several deep reconstruction networks are unstable for computed tomography (CT) and magnetic resonance imaging (MRI). Specifically, three kinds of instabilities were reported: (1) strong image artefacts from tiny perturbations, (2) small features missing in a deeply reconstructed image, and (3) decreased imaging performance with increased input data. On the other hand, compressed sensing (CS) inspired reconstruction methods do not suffer from these instabilities because of their built-in kernel awareness. For deep reconstruction to realize its full potential and become a mainstream approach for tomographic imaging, it is thus critically important to meet this challenge by stabilizing deep reconstruction networks. Here we propose an Analytic Compressed Iterative Deep (ACID) framework to address this challenge. ACID synergizes a deep reconstruction network trained on big data, kernel awareness from CS-inspired processing, and iterative refinement to minimize the data residual relative to real measurements. Our study demonstrates that deep reconstruction using ACID is accurate and stable, and sheds light on the converging mechanism of the ACID iteration under a Bounded Relative Error Norm (BREN) condition. In particular, the study shows that ACID-based reconstruction is resilient against adversarial attacks, superior to classic sparsity-regularized reconstruction alone, and eliminates the three kinds of instabilities. We anticipate that this integrative data-driven approach will help promote the development and translation of deep tomographic image reconstruction networks into clinical applications.
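The interplay that abstract describes, a trained network combined with kernel-aware refinement of the data residual, can be sketched schematically. This is not the authors' implementation; `net`, the step size, and the toy forward operator `A` are all placeholders:

```python
import numpy as np

def acid_style_refine(y, A, net, n_iter=200, step=0.1):
    """Schematic sketch of an ACID-style loop: alternate a learned
    reconstruction step with gradient steps that shrink the data
    residual ||A x - y||^2 (the kernel-aware component)."""
    x = net(A.T @ y)                        # initial deep reconstruction
    for _ in range(n_iter):
        x = x - step * (A.T @ (A @ x - y))  # data-fit (Landweber) step
        x = net(x)                          # deep prior / refinement step
    return x

# Toy usage with an identity "network" (reduces to plain Landweber iteration)
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 10)) / 6.0
x_true = rng.standard_normal(10)
y = A @ x_true
x_hat = acid_style_refine(y, A, net=lambda v: v)
```

With a real trained denoiser in place of the identity map, the loop alternates between enforcing data consistency and projecting toward the learned image prior, which is the stabilizing mechanism the abstract attributes to kernel awareness.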
354 - Siqi Ye, Yong Long, Il Yong Chun 2020
This paper applies the recent fast iterative neural network framework, Momentum-Net, using appropriate models, to low-dose X-ray computed tomography (LDCT) image reconstruction. At each layer of the proposed Momentum-Net, the model-based image reconstruction module solves the majorized penalized weighted least-squares problem, and the image refining module uses a four-layer convolutional neural network (CNN). Experimental results with the NIH AAPM-Mayo Clinic Low Dose CT Grand Challenge dataset show that the proposed Momentum-Net architecture significantly improves image reconstruction accuracy compared to a state-of-the-art noniterative image denoising deep neural network (NN), WavResNet (in LDCT). We also investigated the spectral normalization technique applied to image-refining NN learning to satisfy the nonexpansive NN property; however, experimental results show that this does not improve the image reconstruction performance of Momentum-Net.
Magnetic resonance imaging (MRI) is widely used in clinical practice, but it has been traditionally limited by its slow data acquisition. Recent advances in compressed sensing (CS) techniques for MRI reduce acquisition time while maintaining high image quality. Whereas classical CS assumes the images are sparse in known analytical dictionaries or transform domains, methods using learned image models for reconstruction have become popular. The model could be pre-learned from datasets, or learned simultaneously with the reconstruction, i.e., blind CS (BCS). Besides the well-known synthesis dictionary model, recent advances in transform learning (TL) provide an efficient alternative framework for sparse modeling in MRI. TL-based methods enjoy numerous advantages, including exact sparse coding, transform update, and clustering solutions; cheap computation; and convergence guarantees, and they provide high-quality results in MRI compared to popular competing methods. This paper provides a review of some recent works in MRI reconstruction from limited data, with a focus on the recent TL-based methods. A unified framework for incorporating various TL-based models is presented. We discuss the connections between transform learning and convolutional or filter bank models and corresponding multi-layer extensions, with connections to deep learning. Finally, we discuss recent trends in MRI, open problems, and future directions for the field.
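The "exact sparse coding" advantage claimed for transform learning has a simple closed form: under an ℓ0 penalty, the optimal sparse code is hard-thresholding of the transformed signal. A toy NumPy sketch (the transform and signal are chosen purely for illustration):

```python
import numpy as np

def transform_sparse_code(W, x, threshold):
    """Exact sparse coding under a transform model: the z minimizing
    ||W x - z||^2 + threshold^2 * ||z||_0 is hard-thresholding of W x
    (a closed-form solution, unlike synthesis-dictionary sparse coding)."""
    z = W @ x
    z[np.abs(z) < threshold] = 0.0
    return z

# Toy example: a first-difference transform sparsifies a piecewise-constant signal
x = np.array([1.0, 1.0, 1.0, 4.0, 4.0, 4.0])
W = np.eye(6) - np.eye(6, k=1)     # rows compute x[i] - x[i+1]
z = transform_sparse_code(W, x, threshold=0.5)
```

The closed-form, per-component solution is what makes the TL sparse-coding step cheap, in contrast to the NP-hard synthesis-model counterpart.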

