
BP-DIP: A Backprojection based Deep Image Prior

Posted by Jenny Zukerman
Publication date: 2020
Research language: English

Deep neural networks are a very powerful tool for many computer vision tasks, including image restoration, exhibiting state-of-the-art results. However, the performance of deep learning methods tends to drop once the observation model used in training mismatches the one at test time. In addition, most deep learning methods require vast amounts of training data, which are not accessible in many applications. To mitigate these disadvantages, we propose to combine two image restoration approaches: (i) Deep Image Prior (DIP), which trains a convolutional neural network (CNN) from scratch at test time using the given degraded image. It does not require any training data and builds on the implicit prior imposed by the CNN architecture; and (ii) a backprojection (BP) fidelity term, which is an alternative to the standard least squares loss that is usually used in previous DIP works. We demonstrate the performance of the proposed method, termed BP-DIP, on the deblurring task and show its advantages over the plain DIP, with both higher PSNR values and faster inference run-time.
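
To make the difference between the two fidelity terms concrete, here is a minimal, illustrative sketch (not the authors' released code): a toy CNN generator, a circular 5x5 box blur applied in the Fourier domain, and an epsilon-regularized pseudoinverse standing in for H^+ in the BP term. The network size, kernel, learning rate, and iteration count are all placeholder choices.

```python
import torch
import torch.nn as nn

# Minimal DIP-with-BP-fidelity sketch. Assumptions (not from the paper):
# a toy 3-layer CNN generator, a circular box blur applied via FFT, and
# an epsilon-regularized pseudoinverse for the backprojection term.

def blur_fft(x, k_fft):
    """Circularly blur image x (B,C,H,W) using the kernel's 2D FFT."""
    return torch.fft.ifft2(torch.fft.fft2(x) * k_fft).real

def backproject_fft(r, k_fft, eps=1e-3):
    """Apply a regularized pseudoinverse of the blur to residual r."""
    k_pinv = k_fft.conj() / (k_fft.abs() ** 2 + eps)
    return torch.fft.ifft2(torch.fft.fft2(r) * k_pinv).real

generator = nn.Sequential(        # stand-in for DIP's hourglass network
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 1, 3, padding=1),
)

k = torch.zeros(64, 64)
k[:5, :5] = 1.0 / 25.0            # toy uniform blur kernel (unshifted)
k_fft = torch.fft.fft2(k)

y = torch.rand(1, 1, 64, 64)      # the single given blurred observation
z = torch.randn(1, 32, 64, 64)    # fixed random input code
opt = torch.optim.Adam(generator.parameters(), lr=1e-3)

for step in range(2000):
    opt.zero_grad()
    x_hat = generator(z)
    residual = blur_fft(x_hat, k_fft) - y
    # BP fidelity: least squares on the back-projected residual,
    # i.e. ||H^+(H x_hat - y)||^2 instead of plain ||H x_hat - y||^2.
    loss = backproject_fft(residual, k_fft).pow(2).mean()
    loss.backward()
    opt.step()
```

Swapping `backproject_fft(residual, k_fft)` for `residual` in the loss recovers the plain least-squares DIP objective, which is the baseline the abstract compares against.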


Read also

We present a tomographic imaging technique, termed Deep Prior Diffraction Tomography (DP-DT), to reconstruct the 3D refractive index (RI) of thick biological samples at high resolution from a sequence of low-resolution images collected under angularly varying illumination. DP-DT processes the multi-angle data using a phase retrieval algorithm that is extended by a deep image prior (DIP), which reparameterizes the 3D sample reconstruction with an untrained, deep generative 3D convolutional neural network (CNN). We show that DP-DT effectively addresses the missing cone problem, which otherwise degrades the resolution and quality of standard 3D reconstruction algorithms. As DP-DT does not require pre-captured data or pre-training, it is not biased towards any particular dataset. Hence, it is a general technique that can be applied to a wide variety of 3D samples, including scenarios in which large datasets for supervised training would be infeasible or expensive. We applied DP-DT to obtain 3D RI maps of bead phantoms and complex biological specimens, both in simulation and experiment, and show that DP-DT produces higher-quality results than standard regularization techniques. We further demonstrate the generality of DP-DT, using two different scattering models, the first Born and multi-slice models. Our results point to the potential benefits of DP-DT for other 3D imaging modalities, including X-ray computed tomography, magnetic resonance imaging, and electron microscopy.
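
As a rough illustration of the DIP reparameterization described above (an assumption-laden sketch, not the DP-DT implementation): the 3D RI volume is produced by an untrained 3D CNN, and only the CNN weights are fitted to the multi-angle measurements through a differentiable forward model. Here `forward_model` is a crude hypothetical stand-in; the paper itself uses the first Born and multi-slice scattering models.

```python
import torch
import torch.nn as nn

# Untrained 3D CNN reparameterization of the RI volume. `forward_model`
# is a crude hypothetical stand-in (it ignores the angle and just sums
# along depth); DP-DT uses first Born / multi-slice scattering instead.

def forward_model(ri_volume, angle_idx):
    return ri_volume.sum(dim=2)   # placeholder linear projection

net3d = nn.Sequential(
    nn.Conv3d(8, 16, 3, padding=1), nn.ReLU(),
    nn.Conv3d(16, 1, 3, padding=1),
)
z = torch.randn(1, 8, 32, 64, 64)                            # fixed random input
measurements = [torch.rand(1, 1, 64, 64) for _ in range(4)]  # toy multi-angle data
opt = torch.optim.Adam(net3d.parameters(), lr=1e-3)

for step in range(1000):
    opt.zero_grad()
    ri = net3d(z)                 # (1, 1, 32, 64, 64) RI volume
    loss = sum((forward_model(ri, a) - m).pow(2).mean()
               for a, m in enumerate(measurements))
    loss.backward()
    opt.step()
```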
Xueqing Liu, Paul Sajda (2020)
Many imaging technologies rely on tomographic reconstruction, which requires solving a multidimensional inverse problem given a finite number of projections. Backprojection is a popular class of algorithms for tomographic reconstruction; however, it typically results in poor image reconstructions when the projection angles are sparse and/or the sensor characteristics are not uniform. Several deep learning based algorithms have been developed to solve this inverse problem and reconstruct the image using a limited number of projections. However, these algorithms typically require examples of the ground truth (i.e., examples of reconstructed images) to yield good performance. In this paper, we introduce an unsupervised sparse-view backprojection algorithm, which does not require ground truth. The algorithm consists of two modules in a generator-projector framework: a convolutional neural network and a spatial transformer network. We evaluated our algorithm using computed tomography (CT) images of the human chest. We show that our algorithm significantly outperforms filtered backprojection when the projection angles are very sparse, as well as when the sensor characteristics vary for different angles. Our approach has practical applications for medical imaging and other imaging modalities (e.g. radar) where sparse and/or non-uniform projections may be acquired due to time or sampling constraints.
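
A hedged sketch of the generator-projector idea (placeholder shapes, angles, and network, not the paper's code): a CNN proposes an image, and a differentiable projector rotates it with spatial-transformer-style `affine_grid`/`grid_sample` and integrates along rays to form parallel-beam projections, so the loss needs only the measured sparse-view projections and never a ground-truth image.

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

def project(img, theta_deg):
    """Rotate img (B,C,H,W) and sum along rows: a parallel-beam projection."""
    t = math.radians(theta_deg)
    rot = torch.tensor([[math.cos(t), -math.sin(t), 0.0],
                        [math.sin(t),  math.cos(t), 0.0]]).unsqueeze(0)
    grid = F.affine_grid(rot, list(img.shape), align_corners=False)
    rotated = F.grid_sample(img, grid, align_corners=False)
    return rotated.sum(dim=2)     # integrate along rays

generator = nn.Sequential(        # proposes the reconstruction
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1),
)

angles = [0.0, 45.0, 90.0]                          # very sparse view set
sinogram = [torch.rand(1, 1, 64) for _ in angles]   # toy measured projections
seed = torch.rand(1, 1, 64, 64)                     # e.g. a crude initialization
opt = torch.optim.Adam(generator.parameters(), lr=1e-3)

for step in range(1000):
    opt.zero_grad()
    x_hat = generator(seed)
    # compare re-projections of the proposal to the measured projections;
    # no ground-truth reconstruction is ever used
    loss = sum((project(x_hat, a) - p).pow(2).mean()
               for a, p in zip(angles, sinogram))
    loss.backward()
    opt.step()
```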
Hyperspectral pansharpening aims to synthesize a low-resolution hyperspectral image (LR-HSI) with a registered panchromatic image (PAN) to generate an enhanced HSI with high spectral and spatial resolution. Recently proposed HS pansharpening methods have obtained remarkable results using deep convolutional networks (ConvNets), which typically consist of three steps: (1) up-sampling the LR-HSI, (2) predicting the residual image via a ConvNet, and (3) obtaining the final fused HSI by adding the outputs from the first and second steps. Recent methods have leveraged Deep Image Prior (DIP) to up-sample the LR-HSI due to its excellent ability to preserve both spatial and spectral information, without learning from large data sets. However, we observed that the quality of up-sampled HSIs can be further improved by introducing an additional spatial-domain constraint to the conventional spectral-domain energy function. We define our spatial-domain constraint as the $L_1$ distance between the predicted PAN image and the actual PAN image. To estimate the PAN image of the up-sampled HSI, we also propose a learnable spectral response function (SRF). Moreover, we noticed that the residual image between the up-sampled HSI and the reference HSI mainly consists of edge information and very fine structures. In order to accurately estimate fine information, we propose a novel over-complete network, called HyperKite, which focuses on learning high-level features by constraining the receptive field from increasing in the deep layers. We perform experiments on three HSI datasets to demonstrate the superiority of our DIP-HyperKite over the state-of-the-art pansharpening methods. The deployment codes, pre-trained models, and final fusion outputs of our DIP-HyperKite and the methods used for the comparisons will be publicly made available at https://github.com/wgcban/DIP-HyperKite.git.
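
The added spatial-domain constraint lends itself to a short sketch (an assumption about its form, not the released DIP-HyperKite code): the learnable SRF is modeled here as a 1x1 convolution that collapses the up-sampled HSI's bands into a predicted PAN image, and its $L_1$ distance to the real PAN is added to a conventional spectral-domain term. The 0.1 weight and the average-pooling degradation model are placeholders.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

bands = 31
srf = nn.Conv2d(bands, 1, kernel_size=1, bias=False)  # learnable SRF

hsi_up = torch.rand(1, bands, 128, 128, requires_grad=True)  # stand-in DIP output
pan = torch.rand(1, 1, 128, 128)                             # registered PAN image
lr_hsi = torch.rand(1, bands, 32, 32)                        # low-res observation

# conventional spectral-domain term: down-sampled estimate vs. LR-HSI
spectral_term = (F.avg_pool2d(hsi_up, 4) - lr_hsi).pow(2).mean()
# added spatial-domain term: L1 between predicted and actual PAN
spatial_term = F.l1_loss(srf(hsi_up), pan)
loss = spectral_term + 0.1 * spatial_term   # 0.1 is an arbitrary weight
loss.backward()
```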
We introduce a novel deep-learning architecture for image upscaling by large factors (e.g. 4x, 8x) based on examples of pristine high-resolution images. Our target is to reconstruct high-resolution images from their downscaled versions.
Ziwen Xu, Beiji Zou, Qing Liu (2020)
Retinal image quality assessment is an essential task in the diagnosis of retinal diseases. Recently, there are emerging deep models to grade the quality of retinal images. Current state-of-the-art methods either directly transfer classification networks originally designed for natural images to quality classification of retinal images, or introduce extra image quality priors via multiple CNN branches or independent CNNs. This paper proposes a dark and bright channel prior guided deep network for retinal image quality assessment, called GuidedNet. Specifically, the dark and bright channel priors are embedded into the start layer of the network to improve the discriminative ability of deep features. In addition, we re-annotate a new retinal image quality dataset called RIQA-RFMiD for further validation. Experimental results on a public retinal image quality dataset, Eye-Quality, and our re-annotated dataset RIQA-RFMiD demonstrate the effectiveness of the proposed GuidedNet.
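
As an illustration of how such priors can be fed to a network's start layer (a guess at one reasonable embedding, not the GuidedNet implementation): the dark (bright) channel is the per-patch minimum (maximum) over the color channels and a local window, and both are concatenated to the RGB input.

```python
import torch
import torch.nn.functional as F

def channel_priors(rgb, patch=15):
    """Dark/bright channels: local min/max over color channels and a window."""
    pad = patch // 2
    cmin = rgb.min(dim=1, keepdim=True).values
    cmax = rgb.max(dim=1, keepdim=True).values
    dark = -F.max_pool2d(-cmin, patch, stride=1, padding=pad)  # local minimum
    bright = F.max_pool2d(cmax, patch, stride=1, padding=pad)  # local maximum
    return dark, bright

img = torch.rand(1, 3, 224, 224)            # toy retinal image
dark, bright = channel_priors(img)
x = torch.cat([img, dark, bright], dim=1)   # 5-channel input to the start layer
print(x.shape)                              # torch.Size([1, 5, 224, 224])
```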
