With the advent of interferometric instruments with 4 telescopes at the VLTI and 6 telescopes at CHARA, it has become possible to routinely obtain milli-arcsecond-scale images of the observed targets. Such an image reconstruction process is typically performed in a Bayesian framework, where the function to minimize is composed of two terms: the data likelihood and the Bayesian prior. This prior should encode our knowledge of the observed source. Up to now, this prior was chosen from a set of generic and arbitrary functions, such as total variation. Here, we present an image reconstruction framework using generative adversarial networks in which the Bayesian prior is defined using state-of-the-art radiative transfer models of the targeted objects. We validate this new image reconstruction algorithm on synthetic data with added noise. The reconstructed images display a drastic reduction of artefacts and allow a more straightforward astrophysical interpretation. The results can be seen as a first illustration of how neural networks can provide significant improvements to the image reconstruction of a variety of astrophysical sources.
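The two-term objective described above can be made concrete with a minimal optimization sketch. The sketch below is an illustration under stated assumptions, not the paper's actual pipeline: `generator` is a toy stand-in for a GAN trained on radiative transfer models, and `forward_model` is a toy stand-in for the interferometric measurement operator.

```python
import torch

# Hypothetical stand-ins (assumptions, not the paper's networks):
# `generator` maps a latent vector to an image; `forward_model` maps an
# image to a toy observable in place of interferometric visibilities.
generator = torch.nn.Sequential(torch.nn.Linear(16, 64 * 64), torch.nn.Sigmoid())
forward_model = lambda img: img.mean(dim=-1, keepdim=True)

data = torch.tensor([[0.5]])  # toy observation
sigma = 0.1                   # assumed measurement noise
mu = 1.0                      # weight of the prior term

z = torch.zeros(1, 16, requires_grad=True)
opt = torch.optim.Adam([z], lr=1e-2)

for _ in range(200):
    opt.zero_grad()
    img = generator(z)
    # Data likelihood term: chi^2 misfit between model and observations.
    chi2 = ((forward_model(img) - data) ** 2 / sigma ** 2).sum()
    # Bayesian prior term: stay near the generator's training manifold,
    # approximated here by a Gaussian prior on the latent vector.
    prior = (z ** 2).sum()
    loss = chi2 + mu * prior
    loss.backward()
    opt.step()
```

Optimizing over the latent vector rather than the pixels is what restricts reconstructions to images the generator can produce, which is where the artefact reduction comes from.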
This paper presents a novel deformable registration framework, leveraging an image prior specified through a denoising function, for severely noise-corrupted placental images. Recent work on plug-and-play (PnP) priors has shown the state-of-the-art performance of reconstruction algorithms under such priors in a range of imaging applications. Integration of powerful image denoisers into advanced registration methods provides our model with the flexibility to accommodate datasets that have low signal-to-noise ratios (SNRs). We demonstrate the performance of our method under a wide variety of denoising models in the context of diffeomorphic image registration. Experimental results show that our model substantially improves the accuracy of spatial alignment in applications to 3D in-utero diffusion-weighted MR images (DW-MRI) that suffer from low SNR and large spatial transformations.
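The core PnP idea, replacing the prior's proximal operator with an off-the-shelf denoiser, can be sketched in a few lines. This is a generic PnP proximal-gradient skeleton, not the paper's registration algorithm; the Gaussian smoother is an assumed stand-in for a learned denoiser.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def pnp_iterate(y, grad_data, denoise, step=0.1, n_iter=50):
    """Plug-and-play proximal gradient: alternate a gradient step on the
    data term with a denoiser acting as the prior's proximal operator."""
    x = y.copy()
    for _ in range(n_iter):
        x = x - step * grad_data(x)  # descend the data-fidelity term
        x = denoise(x)               # impose the prior via denoising
    return x

# Toy usage: quadratic data term tied to a noisy observation, with a
# Gaussian smoother standing in for a learned denoiser (an assumption).
y = np.random.rand(64, 64)
x_hat = pnp_iterate(y, grad_data=lambda x: x - y,
                    denoise=lambda x: gaussian_filter(x, sigma=1.0))
```

Because the denoiser is swappable, the same loop accommodates the "wide variety of denoising models" the abstract mentions without changing the optimization structure.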
Image restoration has seen great progress in recent years thanks to advances in deep neural networks. Most existing techniques are trained using full supervision with suitable image pairs to tackle a specific degradation. However, in a blind setting with unknown degradations this is not possible and a good prior remains crucial. Recently, neural network based approaches have been proposed to model such priors by leveraging either denoising autoencoders or the implicit regularization captured by the neural network structure itself. In contrast to this, we propose using normalizing flows to model the distribution of the target content and to use this as a prior in a maximum a posteriori (MAP) formulation. By expressing the MAP optimization process in the latent space through the learned bijective mapping, we are able to obtain solutions through gradient descent. To the best of our knowledge, this is the first work that explores normalizing flows as a prior in image enhancement problems. Furthermore, we present experimental results for a number of different degradations on datasets varying in complexity and show competitive results when comparing with the deep image prior approach.
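A minimal sketch of MAP optimization in a flow's latent space follows, under loud assumptions: the "flow" here is a toy affine bijection (so its log-determinant is constant and omitted), and the degradation is the identity. A trained flow and a real degradation operator would replace both.

```python
import torch

# Toy invertible map standing in for a trained normalizing flow (an
# assumption): x = z * scale + shift, so log|det J| is constant and can
# be dropped from the objective.
scale, shift = torch.tensor(2.0), torch.tensor(0.5)
flow_inverse = lambda z: z * scale + shift  # latent -> image

y = torch.rand(64)       # degraded observation (toy)
degrade = lambda x: x    # identity degradation for illustration

z = torch.zeros(64, requires_grad=True)
opt = torch.optim.Adam([z], lr=1e-2)
for _ in range(300):
    opt.zero_grad()
    x = flow_inverse(z)
    # MAP objective in latent space: data fidelity plus the negative log
    # of the flow's standard-Gaussian base density on z.
    loss = ((degrade(x) - y) ** 2).sum() + 0.5 * (z ** 2).sum()
    loss.backward()
    opt.step()
x_map = flow_inverse(z).detach()
```

The bijectivity is what makes this clean: every latent z corresponds to exactly one image, so gradient descent on z is an unconstrained search over the image distribution the flow has learned.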
Isotropic Gaussian priors are the de facto standard for modern Bayesian neural network inference. However, such simplistic priors are unlikely either to accurately reflect our true beliefs about the weight distributions or to give optimal performance. We study summary statistics of neural network weights in different networks trained using SGD. We find that fully connected networks (FCNNs) display heavy-tailed weight distributions, while convolutional neural network (CNN) weights display strong spatial correlations. Building these observations into the respective priors leads to improved performance on a variety of image classification datasets. Moreover, we find that these priors also mitigate the cold posterior effect in FCNNs, while in CNNs we see strong improvements at all temperatures, and hence no reduction in the cold posterior effect.
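To make the heavy-tailed alternative concrete, the sketch below compares the log-prior of a weight vector under an isotropic Gaussian and a Student-t prior. The distributions and scales are illustrative assumptions, not the paper's fitted hyperparameters.

```python
import torch
from torch.distributions import Normal, StudentT

def log_prior(weights, dist):
    """Sum the log-density of every weight under the chosen prior."""
    return dist.log_prob(weights).sum()

w = torch.randn(1000) * 0.1                           # toy FCNN weight vector
gaussian = Normal(loc=0.0, scale=0.1)                 # isotropic Gaussian baseline
heavy_tailed = StudentT(df=3.0, loc=0.0, scale=0.1)   # heavy-tailed alternative

print(log_prior(w, gaussian).item(), log_prior(w, heavy_tailed).item())
```

The practical point is that the prior enters inference only through this log-density term, so swapping in a heavy-tailed family is a small code change with a potentially large effect on the posterior.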
PET image reconstruction is challenging due to the ill-posedness of the inverse problem and the limited number of detected photons. Recently, deep neural networks have been widely and successfully used in computer vision tasks and have attracted growing interest in medical imaging. In this work, we trained a deep residual convolutional neural network to improve PET image quality by using the existing inter-patient information. An innovative feature of the proposed method is that we embed the neural network in the iterative reconstruction framework for image representation, rather than using it as a post-processing tool. We formulate the objective function as a constrained optimization problem and solve it using the alternating direction method of multipliers (ADMM) algorithm. Both simulation data and hybrid real data are used to evaluate the proposed method. Quantification results show that our proposed iterative neural network method can outperform neural network denoising and conventional penalized maximum likelihood methods.
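The ADMM structure for the constraint "image equals network output" can be sketched schematically. Everything below is an assumed toy setup: a least-squares data term in place of the PET Poisson likelihood, and an identity "network" so the theta-update is a direct copy rather than a network fit.

```python
import numpy as np

# Toy stand-ins (assumptions, not the paper's setup): A is the system
# matrix, y the measured data, and f(theta) a trivial "network" that
# returns its own parameters.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 20))
x_true = rng.random(20)
y = A @ x_true

rho = 1.0
x = np.zeros(20); theta = np.zeros(20); u = np.zeros(20)
for _ in range(100):
    # x-update: penalized data-fit subproblem (least squares here, in
    # place of the Poisson log-likelihood used for real PET data).
    x = np.linalg.solve(A.T @ A + rho * np.eye(20),
                        A.T @ y + rho * (theta - u))
    # theta-update: fit the network output f(theta) to x + u; with the
    # identity "network" this reduces to a copy.
    theta = x + u
    # Dual update enforcing the constraint x = f(theta).
    u = u + x - theta
```

In the real method the theta-update is a network training step, which is how the learned inter-patient prior enters every iteration instead of only at the end as post-processing.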
Multimode fibers (MMFs) have the potential to carry complex images for endoscopy and related applications, but decoding the complex speckle patterns produced by mode-mixing and modal dispersion in MMFs is a serious challenge. Several groups have recently shown that convolutional neural networks (CNNs) can be trained to perform high-fidelity MMF image reconstruction. We find that a considerably simpler neural network architecture, the single hidden layer dense neural network, performs at least as well as previously used CNNs in terms of image reconstruction fidelity, and is superior in terms of training time and computing resources required. The trained networks can accurately reconstruct MMF images collected over a week after the cessation of training-set acquisition, with the dense network performing as well as the CNN over the entire period.
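The simplicity of the architecture is easy to see in code. The layer sizes below are assumptions for illustration; the abstract specifies only that there is a single dense hidden layer.

```python
import torch

# Minimal sketch of a single-hidden-layer dense network mapping a
# flattened speckle pattern to reconstructed image pixels; all sizes
# are illustrative assumptions.
n_speckle, n_hidden, n_image = 32 * 32, 1024, 28 * 28
model = torch.nn.Sequential(
    torch.nn.Linear(n_speckle, n_hidden),  # the single dense hidden layer
    torch.nn.ReLU(),
    torch.nn.Linear(n_hidden, n_image),    # reconstructed image pixels
)

# Supervised training pair: speckle pattern in, ground-truth image out.
speckle = torch.rand(8, n_speckle)
target = torch.rand(8, n_image)
loss = torch.nn.functional.mse_loss(model(speckle), target)
loss.backward()
```

With only two weight matrices to fit, such a network trains far faster than a deep CNN, which matches the reported advantage in training time and compute.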