
Model-based Reconstruction with Learning: From Unsupervised to Supervised and Beyond

Published by: Zhishen Huang
Publication date: 2021
Research field: Informatics Engineering
Paper language: English





Many techniques have been proposed for image reconstruction in medical imaging that aim to recover high-quality images, especially from limited or corrupted measurements. Model-based reconstruction methods have been particularly popular (e.g., in magnetic resonance imaging and tomographic modalities); they exploit models of the imaging system's physics together with statistical models of the measurements and noise, and often relatively simple object priors or regularizers. For example, sparsity- or low-rankness-based regularizers have been widely used for image reconstruction from limited data, such as in compressed sensing. Learning-based approaches for image reconstruction have garnered much attention in recent years and have shown promise across biomedical imaging applications. These methods include synthesis dictionary learning, sparsifying transform learning, and different forms of deep learning involving complex neural networks. We briefly discuss classical model-based reconstruction methods and then review, in detail, reconstruction methods at the intersection of the model-based and learning-based paradigms. This review covers many recent methods based on unsupervised learning and supervised learning, as well as a framework for combining multiple types of learned models.
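To make the model-based setup concrete, below is a minimal sketch of a reconstruction objective with a sparsity regularizer solved by proximal gradient descent (ISTA), of the kind used in compressed sensing. The dense matrix A, step size, regularization weight, and toy data are illustrative placeholders, not the paper's specific formulation; in MRI or CT, A would be an undersampled Fourier or projection operator.

```python
import numpy as np

def soft_threshold(z, t):
    """Proximal operator of the l1 norm (sparsity prior)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def ista_reconstruct(A, y, lam=0.1, n_iters=200):
    """Minimize 0.5*||A x - y||^2 + lam*||x||_1 by proximal gradient (ISTA)."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2       # 1 / Lipschitz constant of the data-fit gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        grad = A.T @ (A @ x - y)                 # gradient of the data-fidelity term
        x = soft_threshold(x - step * grad, step * lam)  # enforce the sparsity prior
    return x

# toy compressed-sensing style usage with synthetic data
rng = np.random.default_rng(0)
A = rng.standard_normal((64, 256)) / np.sqrt(64)
x_true = np.zeros(256)
x_true[rng.choice(256, 10, replace=False)] = 1.0
y = A @ x_true + 0.01 * rng.standard_normal(64)
x_hat = ista_reconstruct(A, y)
```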




Read also

220 - Zhipeng Li, Siqi Ye, Yong Long (2019)
Recent years have witnessed growing interest in machine learning-based models and techniques for low-dose X-ray CT (LDCT) imaging tasks. The methods can typically be categorized into supervised learning methods and unsupervised or model-based learning methods. Supervised learning methods have recently shown success in image restoration tasks. However, they often rely on large training sets. Model-based learning methods such as dictionary or transform learning do not require large or paired training sets and often have good generalization properties, since they learn general properties of CT image sets. Recent works have shown the promising reconstruction performance of methods such as PWLS-ULTRA that rely on clustering the underlying (reconstructed) image patches into a learned union of transforms. In this paper, we propose a new Supervised-UnsuPERvised (SUPER) reconstruction framework for LDCT image reconstruction that combines the benefits of supervised learning methods and (unsupervised) transform learning-based methods such as PWLS-ULTRA that involve highly image-adaptive clustering. The SUPER model consists of several layers, each of which includes a deep network learned in a supervised manner and an unsupervised iterative method that involves image-adaptive components. The SUPER reconstruction algorithms are learned in a greedy manner from training data. The proposed SUPER learning methods dramatically outperform both the constituent supervised learning-based networks and iterative algorithms for LDCT, and require far fewer iterations in the iterative reconstruction modules.
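As an illustration of the layered structure described above, here is a minimal sketch of a SUPER-style pipeline that alternates a supervised network pass with an unsupervised, image-adaptive iterative module. The callables, argument names, and inner-iteration count are assumptions for illustration only; the actual PWLS-ULTRA step (with its transform learning and clustering) is abstracted into a single placeholder.

```python
def super_reconstruct(y, x0, networks, model_based_step, n_inner=5):
    """Illustrative SUPER-style layered reconstruction (assumed form).

    y                : measured data (e.g., low-dose sinogram)
    x0               : initial reconstruction (e.g., an FBP image)
    networks         : list of supervised denoising networks, one per SUPER layer,
                       each mapping an image to a refined image
    model_based_step : callable (x, y, x_ref) -> x implementing one pass of an
                       unsupervised iterative method (a PWLS-ULTRA-like update),
                       pulling x toward the data y while staying close to x_ref
    """
    x = x0
    for net in networks:                  # one SUPER layer per trained network
        x_ref = net(x)                    # supervised, learned refinement
        for _ in range(n_inner):          # unsupervised, image-adaptive module
            x = model_based_step(x, y, x_ref)
    return x
```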
The key idea of state-of-the-art VAE-based unsupervised representation disentanglement methods is to minimize the total correlation of the latent variable distributions. However, it has been proved that VAE-based unsupervised disentanglement cannot be achieved without introducing other inductive bias. In this paper, we address VAE-based unsupervised disentanglement by leveraging constraints derived from a Group-Theory-based definition as a non-probabilistic inductive bias. More specifically, inspired by the n-th dihedral group (the permutation group for regular polygons), we propose a specific form of the definition and prove its two equivalent conditions: isomorphism and the constancy of permutations. We further provide an implementation of isomorphism based on two Group constraints: the Abel constraint for exchangeability and the Order constraint for cyclicity. We then convert them into a self-supervised training loss that can be incorporated into VAE-based models to bridge their gaps from the Group-Theory-based definition. We train 1800 models covering the most prominent VAE-based models on five datasets to verify the effectiveness of our method. Compared to the original models, the Groupified VAEs consistently achieve better mean performance with smaller variances, and make meaningful dimensions controllable.
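To give a rough sense of how cyclicity (Order) and commutativity (Abel) constraints might be turned into self-supervised penalties, here is a sketch under assumed conventions, not the paper's exact formulation: the "action" on a latent dimension is taken to be a cyclic shift of that coordinate followed by a decode-encode round trip, so the penalties measure how far the trained autoencoder is from realizing a true group action.

```python
import numpy as np

def groupified_losses(encode, decode, z, dims=(0, 1), n=4):
    """Illustrative Order (cyclicity) and Abel (commutativity) penalties.

    encode, decode : callables mapping codes to data and back (assumed interfaces)
    z              : (batch, latent_dim) array of latent codes
    dims           : two latent dimensions on which the assumed action operates
    n              : cycle length, inspired by the n-th dihedral group
    """
    def action(z_in, d):
        z_out = z_in.copy()
        z_out[:, d] = np.mod(z_out[:, d] + 2 * np.pi / n, 2 * np.pi)  # cyclic shift
        return encode(decode(z_out))      # round trip makes the action non-trivial

    i, j = dims
    # Order constraint: n applications of the action should return (close) to z.
    z_cycled = z
    for _ in range(n):
        z_cycled = action(z_cycled, i)
    order_loss = np.mean((z_cycled - z) ** 2)

    # Abel constraint: actions on different dimensions should commute.
    abel_loss = np.mean((action(action(z, i), j) - action(action(z, j), i)) ** 2)
    return order_loss, abel_loss
```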
We focus on the problem of training convolutional neural networks on gigapixel histopathology images to predict image-level targets. For this purpose, we extend Neural Image Compression (NIC), an image compression framework that reduces the dimensionality of these images using an encoder network trained in an unsupervised manner. We propose to train this encoder using supervised multitask learning (MTL) instead. We applied the proposed MTL NIC to two histopathology datasets and three tasks. First, we obtained state-of-the-art results in the Tumor Proliferation Assessment Challenge of 2016 (TUPAC16). Second, we successfully classified histopathological growth patterns in images with colorectal liver metastasis (CLM). Third, we predicted patient risk of death by learning directly from overall survival in the same CLM data. Our experimental results suggest that the representations learned by the MTL objective are: (1) highly specific, due to the supervised training signal, and (2) transferable, since the same features perform well across different tasks. Additionally, we trained multiple encoders with different training objectives, e.g., unsupervised and variants of MTL, and observed a positive correlation between the number of tasks in MTL and the system performance on the TUPAC16 dataset.
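A minimal sketch of the multitask objective used to supervise such a compressing encoder is shown below. The task names, head interfaces, and squared-error losses are hypothetical placeholders chosen for illustration; the actual tasks and losses follow the datasets described in the abstract.

```python
import numpy as np

def multitask_loss(embedding, heads, targets, weights=None):
    """Illustrative multitask (MTL) objective for training a compressing encoder.

    embedding : feature vector produced by the encoder for one compressed image
    heads     : dict of task name -> callable mapping the embedding to a prediction
                (e.g., 'proliferation_score', 'growth_pattern', 'survival_risk';
                names are hypothetical)
    targets   : dict of task name -> ground-truth value for this image
    weights   : optional dict of per-task loss weights
    """
    weights = weights or {task: 1.0 for task in heads}
    total = 0.0
    for task, head in heads.items():
        pred = head(embedding)
        total += weights[task] * np.mean((pred - targets[task]) ** 2)  # per-task loss
    return total
```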
80 - Ankit Raj, Yoram Bresler, Bo Li (2020)
Deep-learning-based methods for different applications have been shown vulnerable to adversarial examples. These examples make deployment of such models in safety-critical tasks questionable. Use of deep neural networks as inverse problem solvers has generated much excitement for medical imaging, including CT and MRI, but recently a similar vulnerability has also been demonstrated for these tasks. We show that for such inverse problem solvers, one should analyze and study the effect of adversaries in the measurement space, instead of the signal space as in previous work. In this paper, we propose to modify the training strategy of end-to-end deep-learning-based inverse problem solvers to improve robustness. We introduce an auxiliary network to generate adversarial examples, which is used in a min-max formulation to build robust image reconstruction networks. Theoretically, we show that for a linear reconstruction scheme the min-max formulation results in a singular-value filter regularized solution, which suppresses the effect of adversarial examples occurring because of ill-conditioning in the measurement matrix. We find that a linear network using the proposed min-max learning scheme indeed converges to the same solution. In addition, for non-linear Compressed Sensing (CS) reconstruction using deep networks, we show significant improvement in robustness using the proposed approach over other methods. We complement the theory by experiments for CS on two different datasets and evaluate the effect of increasing perturbations on trained networks. We find the behavior for ill-conditioned and well-conditioned measurement matrices to be qualitatively different.
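To illustrate the min-max idea for the linear case analyzed in the abstract, here is a toy sketch: a linear reconstruction map is trained against the worst-case bounded perturbation in the measurement space, with the inner attack done by projected gradient ascent rather than the paper's auxiliary attacker network. All step sizes, the perturbation budget, and the plain gradient updates are illustrative assumptions.

```python
import numpy as np

def minmax_train_linear(A, X, epsilon=0.1, lr=1e-2, n_epochs=200, n_adv=5):
    """Toy min-max training of a linear reconstruction map x_hat = W y.

    A : (m, n) forward operator; X : (n, N) training signals as columns.
    Measurements y = A x are attacked by delta with ||delta||_2 <= epsilon.
    """
    m, n = A.shape
    Y = A @ X
    W = np.zeros((n, m))
    for _ in range(n_epochs):
        grad_W = np.zeros_like(W)
        for k in range(X.shape[1]):
            x, y = X[:, k], Y[:, k]
            # inner maximization: projected gradient ascent on the perturbation
            delta = np.zeros(m)
            for _ in range(n_adv):
                g = W.T @ (W @ (y + delta) - x)          # d(loss)/d(delta)
                delta = delta + 0.5 * epsilon * g / (np.linalg.norm(g) + 1e-12)
                nrm = np.linalg.norm(delta)
                if nrm > epsilon:
                    delta *= epsilon / nrm               # project onto the l2 ball
            # outer minimization: gradient of the worst-case loss w.r.t. W
            r = W @ (y + delta) - x
            grad_W += np.outer(r, y + delta)
        W -= lr * grad_W / X.shape[1]
    return W
```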
81 - Yujin Chen, Zhigang Tu, Di Kang (2021)
Reconstructing a 3D hand from a single-view RGB image is challenging due to the variety of hand configurations and depth ambiguity. To reliably reconstruct a 3D hand from a monocular image, most state-of-the-art methods rely heavily on 3D annotations at the training stage, but obtaining 3D annotations is expensive. To alleviate reliance on labeled training data, we propose S2HAND, a self-supervised 3D hand reconstruction network that can jointly estimate pose, shape, texture, and the camera viewpoint. Specifically, we obtain geometric cues from the input image through easily accessible 2D detected keypoints. To learn an accurate hand reconstruction model from these noisy geometric cues, we utilize the consistency between 2D and 3D representations and propose a set of novel losses to rationalize the outputs of the neural network. For the first time, we demonstrate the feasibility of training an accurate 3D hand reconstruction network without relying on manual annotations. Our experiments show that the proposed method achieves performance comparable to recent fully supervised methods while using less supervision data.
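A minimal sketch of the kind of 2D-3D consistency term such a self-supervised setup could use is shown below: predicted 3D joints are projected through an assumed pinhole camera and compared against detected 2D keypoints, down-weighted by detector confidence. The camera model, weighting, and interfaces are illustrative assumptions, not the S2HAND loss definitions.

```python
import numpy as np

def keypoint_consistency_loss(joints_3d, keypoints_2d, confidence, cam):
    """Illustrative 2D-3D consistency loss for self-supervised hand reconstruction.

    joints_3d    : (J, 3) predicted 3D hand joints in camera coordinates
    keypoints_2d : (J, 2) noisy 2D keypoints from an off-the-shelf detector
    confidence   : (J,) detector confidences, used to down-weight noisy cues
    cam          : dict with focal lengths 'fx','fy' and principal point 'cx','cy'
                   (a simple pinhole model assumed here)
    """
    x, y, z = joints_3d[:, 0], joints_3d[:, 1], joints_3d[:, 2]
    u = cam['fx'] * x / z + cam['cx']        # pinhole projection to the image plane
    v = cam['fy'] * y / z + cam['cy']
    proj = np.stack([u, v], axis=1)
    err = np.linalg.norm(proj - keypoints_2d, axis=1)
    return np.sum(confidence * err) / (np.sum(confidence) + 1e-8)
```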
