Decomposing an image through Fourier, DCT, or wavelet transforms is still a common approach in digital image processing, in a number of applications such as denoising. In this context, data-driven dictionaries, and in particular those exploiting the redundancy within patches extracted from one or several images, have enabled important improvements. This paper proposes an original idea for constructing such an image-dependent basis inspired by the principles of quantum many-body physics. The similarity between two image patches is introduced in the formalism through a term akin to interaction terms in quantum mechanics. The main contribution of the paper is thus to introduce this original way of exploiting quantum many-body ideas in image processing, which opens interesting perspectives in image denoising. The potential of the proposed adaptive decomposition is illustrated through image denoising in the presence of additive white Gaussian noise, but the method can be used for other types of noise, such as image-dependent noise, as well. Finally, the results show that our method achieves comparable or slightly better results than existing approaches.
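As a rough illustration of the idea, a minimal Python sketch might build the adaptive basis from the low-energy eigenvectors of a Hamiltonian-like matrix over patches. The Gaussian similarity kernel, patch size, and Laplacian-style construction below are assumptions for illustration, not the paper's implementation:

```python
# Hedged sketch: patch similarities play the role of interaction terms,
# and the low-energy eigenvectors of the resulting Hamiltonian-like
# operator form the image-dependent basis. Kernel width sigma, patch
# size, and the Laplacian-style construction are all assumptions.
import numpy as np

def extract_patches(image, size=8, stride=4):
    """Collect overlapping patches as flattened row vectors."""
    h, w = image.shape
    return np.array([image[i:i+size, j:j+size].ravel()
                     for i in range(0, h - size + 1, stride)
                     for j in range(0, w - size + 1, stride)])

def interaction_basis(patches, sigma=25.0, n_modes=32):
    """Diagonalize an operator whose off-diagonal entries encode
    pairwise patch similarity (the 'interaction' term)."""
    d2 = ((patches[:, None, :] - patches[None, :, :])**2).sum(axis=-1)
    W = np.exp(-d2 / (2.0 * sigma**2))     # interaction strengths
    H = np.diag(W.sum(axis=1)) - W         # Hamiltonian-like operator
    _, vecs = np.linalg.eigh(H)            # eigenvalues sorted ascending
    return vecs[:, :n_modes]               # low-energy adaptive basis
```

Denoising would then amount to projecting the noisy patches onto the retained low-energy modes and reconstructing, in the same spirit as thresholding coefficients in a fixed transform.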
Fully supervised deep-learning-based denoisers are currently the best-performing image denoising solutions. However, they require clean reference images. When the target noise is complex, e.g. composed of an unknown mixture of primary noises with unknown intensities, fully supervised solutions are limited by the difficulty of building a training set suited to the problem. This paper proposes a gradual denoising strategy that iteratively detects the dominating noise in an image and removes it using a tailored denoiser. The method is shown to keep up with state-of-the-art blind denoisers on mixture noises. Moreover, noise analysis is demonstrated to guide denoisers efficiently not only with respect to noise type, but also noise intensity. The method provides insight into the nature of the encountered noise, and it makes it possible to extend an existing denoiser to new noise types. This feature makes the method adaptive to varied denoising cases.
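The iterative detect-then-remove loop can be summarized in a few lines of Python; `classify_noise` and the entries of `denoisers` below are hypothetical stand-ins for the paper's noise analyser and tailored denoisers:

```python
# Minimal sketch of the gradual denoising strategy: at each step, detect
# the dominating noise and apply the matching denoiser at the detected
# intensity. The "clean" label and max_steps cap are assumptions.
def gradual_denoise(image, classify_noise, denoisers, max_steps=5):
    for _ in range(max_steps):
        noise_type, intensity = classify_noise(image)
        if noise_type == "clean":              # no dominating noise left
            break
        image = denoisers[noise_type](image, intensity)
    return image
```

Under this structure, extending the system to a new noise type only requires registering a new entry in `denoisers` and teaching the analyser the corresponding class, which matches the extensibility claimed above.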
Image denoising is a well-known and well-studied problem, commonly targeting a minimization of the mean squared error (MSE) between the outcome and the original image. Unfortunately, especially for severe noise levels, such Minimum MSE (MMSE) solutions may lead to blurry output images. In this work we propose a novel stochastic denoising approach that produces viable, high-perceptual-quality results while maintaining a small MSE. Our method employs Langevin dynamics that relies on repeated application of any given MMSE denoiser, obtaining the reconstructed image by effectively sampling from the posterior distribution. Due to its stochasticity, the proposed algorithm can produce a variety of high-quality outputs for a given noisy input, all shown to be legitimate denoising results. In addition, we present an extension of our algorithm that handles the inpainting problem, recovering missing pixels while removing noise from partially given data.
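A minimal sketch of such a sampling loop, assuming the standard Tweedie identity that relates an MMSE denoiser to the score of the smoothed image distribution (the noise schedule, step sizes, and iteration counts below are illustrative, not the paper's exact settings):

```python
# score(x) ~ (D(x, sigma) - x) / sigma**2 by Tweedie's identity, so any
# MMSE denoiser D can drive annealed Langevin dynamics. The full posterior
# sampler would also add a data-fidelity gradient tying x to the noisy
# input y; it is omitted here for brevity.
import numpy as np

def langevin_sample(y, mmse_denoiser, sigmas, alpha=0.1, inner=10, rng=None):
    rng = rng or np.random.default_rng()
    x = y.copy()
    for sigma in sigmas:                     # anneal from coarse to fine
        step = alpha * sigma**2
        for _ in range(inner):
            score = (mmse_denoiser(x, sigma) - x) / sigma**2
            x = x + step * score + np.sqrt(2 * step) * rng.standard_normal(x.shape)
    return x
```

Because each run draws fresh noise, repeated calls on the same input yield distinct, equally plausible reconstructions, which is the variety of outputs described above.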
Hyperspectral images (HSIs) have been widely applied in many fields, such as military reconnaissance, agriculture, and environmental monitoring. Nevertheless, HSIs commonly suffer from various types of noise during acquisition. Therefore, denoising is critical for HSI analysis and applications. In this paper, we propose a novel blind denoising method for HSIs based on a Multi-Stream Denoising Network (MSDNet). Our network consists of a noise estimation subnetwork and a denoising subnetwork. In the noise estimation subnetwork, a multiscale fusion module is designed to capture the noise at different scales. Then, the denoising subnetwork is utilized to obtain the final denoised image. The proposed MSDNet obtains a robust noise level estimate, which improves the performance of HSI denoising. Extensive experiments on HSI datasets demonstrate that the proposed method outperforms four closely related methods.
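In PyTorch-like form, the two-subnetwork layout might look as follows; the branch kernel sizes, channel widths, and residual connection are assumptions, not the paper's exact design:

```python
# Hedged sketch of a noise-estimation subnetwork with multiscale fusion
# feeding a denoising subnetwork. All architectural choices are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class NoiseEstimator(nn.Module):
    """Fuse features from several receptive fields into a noise-level map."""
    def __init__(self, bands, feat=32):
        super().__init__()
        self.branches = nn.ModuleList(
            [nn.Conv2d(bands, feat, k, padding=k // 2) for k in (3, 5, 7)])
        self.fuse = nn.Conv2d(3 * feat, bands, 3, padding=1)

    def forward(self, x):
        feats = [F.relu(b(x)) for b in self.branches]   # multiple scales
        return self.fuse(torch.cat(feats, dim=1))       # fused noise map

class DenoisingNet(nn.Module):
    """Denoise conditioned on the estimated noise map (residual learning)."""
    def __init__(self, bands, feat=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2 * bands, feat, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat, bands, 3, padding=1))

    def forward(self, x, noise_map):
        return x - self.net(torch.cat([x, noise_map], dim=1))
```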
We present DiracQ, a software package for use in quantum many-body physics. It is designed to help with the typical algebraic manipulations that arise in quantum condensed matter physics and nuclear physics problems, as well as in some subareas of chemistry. DiracQ is invoked within a Mathematica session, and extends the symbolic capabilities of Mathematica by building in standard commutation and anticommutation rules for several objects relevant to many-body physics. It enables the user to carry out computations such as evaluating the commutators of arbitrary combinations of spin, Bose, and Fermi operators defined on a discrete lattice, or of the position and momentum operators in the continuum. Some examples from popular systems, such as the Hubbard model, are provided to illustrate the capabilities of the package.
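DiracQ itself lives inside Mathematica, so its own syntax is not reproduced here; as a rough Python analogue, SymPy's quantum module performs the same style of symbolic (anti)commutator evaluation with built-in operator rules:

```python
# Not DiracQ: a SymPy analogue of the kind of algebra the package automates.
from sympy.physics.quantum import AntiCommutator, Commutator, Dagger
from sympy.physics.quantum.boson import BosonOp
from sympy.physics.quantum.fermion import FermionOp

a, c = BosonOp('a'), FermionOp('c')
print(Commutator(a, Dagger(a)).doit())      # bosons:   [a, a+]  -> 1
print(AntiCommutator(c, Dagger(c)).doit())  # fermions: {c, c+}  -> 1
```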
Image denoising is the process of removing noise from noisy images; it is an image domain transfer task, i.e., from one or several noise-level domains to the photo-realistic domain. In this paper, we propose an effective image denoising method that learns two image priors from the perspective of domain alignment. We tackle domain alignment on two levels: 1) the feature-level prior learns domain-invariant features for images corrupted by noise of different levels; 2) the pixel-level prior pushes the denoised images toward the natural image manifold. The two image priors are grounded in $\mathcal{H}$-divergence theory and implemented by learning classifiers in an adversarial training manner. We evaluate our approach on multiple datasets. The results demonstrate the effectiveness of our approach for robust image denoising on both synthetic and real-world noisy images. Furthermore, we show that the feature-level prior is capable of alleviating the discrepancy between different noise levels. It can be used to improve blind denoising performance in terms of distortion measures (PSNR and SSIM), while the pixel-level prior effectively improves perceptual quality to ensure realistic outputs, which is further validated by subjective evaluation.
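A hedged sketch of how the two priors translate into training losses (module names and the loss bookkeeping are placeholders, not the paper's implementation):

```python
# Feature-level prior: a classifier tries to recover the noise level from
# encoder features; the encoder is trained adversarially (e.g. through a
# gradient-reversal layer) so the features become domain-invariant.
# Pixel-level prior: a discriminator separates denoised outputs from natural
# images, pulling the denoiser's outputs toward the natural-image manifold.
import torch
import torch.nn.functional as F

def prior_losses(features, noise_labels, denoised, natural,
                 feat_classifier, pixel_disc):
    # Classifier loss; the encoder maximizes it, the classifier minimizes it.
    feat_loss = F.cross_entropy(feat_classifier(features), noise_labels)

    # Standard adversarial pair for the pixel-level prior.
    fake = pixel_disc(denoised)
    real = pixel_disc(natural)
    gen_loss = F.binary_cross_entropy_with_logits(fake, torch.ones_like(fake))
    disc_loss = (F.binary_cross_entropy_with_logits(real, torch.ones_like(real)) +
                 F.binary_cross_entropy_with_logits(fake, torch.zeros_like(fake)))
    return feat_loss, gen_loss, disc_loss
```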