
A simple blind-denoising filter inspired by electrically coupled photoreceptors in the retina

Added by Yang Yue
Publication date: 2018
Language: English





Photoreceptors in the retina are coupled by electrical synapses called gap junctions. It has long been established that gap junctions increase the signal-to-noise ratio of photoreceptors. Inspired by electrically coupled photoreceptors, we introduce a simple filter, the PR-filter, with only one variable. On the BSD68 dataset, the PR-filter showed outstanding SSIM performance on blind-denoising tasks. It also significantly improved the performance of a state-of-the-art convolutional neural network for blind denoising on non-Gaussian noise. Its ability to preserve fine detail may be attributed to the small receptive field of the photoreceptors.
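The paper defines the exact PR-filter; purely as an illustration of the underlying idea, the sketch below implements a generic one-parameter coupling filter in which each pixel repeatedly exchanges signal with its four nearest neighbors, loosely mimicking gap-junction coupling between photoreceptors. The function name, the diffusion-style update, and the parameter `alpha` are assumptions for illustration, not the published filter.

```python
import numpy as np

def coupled_photoreceptor_filter(image, alpha=0.2, iterations=10):
    """Illustrative one-parameter coupling filter (not the paper's PR-filter).

    Each pixel repeatedly mixes with the average of its 4-connected
    neighbors, weighted by a single coupling strength `alpha`, loosely
    mimicking electrical coupling between neighboring photoreceptors.
    """
    out = image.astype(np.float64).copy()
    for _ in range(iterations):
        # Sum of the four nearest neighbors (edge pixels replicated).
        padded = np.pad(out, 1, mode="edge")
        neighbors = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
                     padded[1:-1, :-2] + padded[1:-1, 2:])
        # Mix each pixel with its local neighborhood average.
        out = (1 - alpha) * out + alpha * neighbors / 4.0
    return out

# Usage: denoised = coupled_photoreceptor_filter(noisy, alpha=0.2)
```

Because each update mixes only immediate neighbors, the effective receptive field grows slowly, which is consistent with the detail preservation the authors attribute to the photoreceptors' small receptive field.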



Related research

We present and study a novel task named Blind Image Decomposition (BID), which requires separating a superimposed image into its constituent underlying images in a blind setting, that is, both the source components involved in mixing and the mixing mechanism are unknown. For example, rain may consist of multiple components, such as rain streaks, raindrops, snow, and haze. Rainy images can be treated as an arbitrary combination of these components, some or all of them. How to decompose superimposed images, like rainy images, into distinct source components is a crucial step towards real-world vision systems. To facilitate research on this new task, we construct three benchmark datasets, including mixed image decomposition across multiple domains, real-scenario deraining, and joint shadow/reflection/watermark removal. Moreover, we propose a simple yet general Blind Image Decomposition Network (BIDeN) to serve as a strong baseline for future work. Experimental results demonstrate the tenability of our benchmarks and the effectiveness of BIDeN. Code and project page are available.
It has been demonstrated many times that the behavior of the human visual system is connected to the statistics of natural images. Since machine learning relies on the statistics of training data as well, the above connection has interesting implications when using perceptual distances (which mimic the behavior of the human visual system) as a loss function. In this paper, we aim to unravel the non-trivial relationship between the probability distribution of the data, perceptual distances, and unsupervised machine learning. To this end, we show that perceptual sensitivity is correlated with the probability of an image in its close neighborhood. We also explore the relation between distances induced by autoencoders and the probability distribution of the data used for training them, as well as how these induced distances are correlated with human perception. Finally, we discuss why perceptual distances might not lead to noticeable gains in performance over standard Euclidean distances in common image processing tasks except when data is scarce and the perceptual distance provides regularization.
QR bar codes are prototypical images for which part of the image is a priori known (required patterns). Open source bar code readers, such as ZBar, are readily available. We exploit both these facts to provide and assess purely regularization-based methods for blind deblurring of QR bar codes in the presence of noise.
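Schematically, such a regularization-based approach might minimize an energy of the following form, where $f$ is the observed blurry code, $k$ the unknown blur kernel, $u$ the latent image, and the last term anchors $u$ to the known required patterns $p$ on their support $\Omega$. This is an illustrative template, not necessarily the paper's exact functional:

$$\min_{u,\,k}\ \|k \ast u - f\|_2^2 \;+\; \lambda\,\mathrm{TV}(u) \;+\; \mu\,\|\chi_\Omega\,(u - p)\|_2^2,$$

where $\mathrm{TV}(u)$ is the total variation of $u$ and $\chi_\Omega$ restricts the pattern-fidelity penalty to the region where the a priori known patterns live.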
Decomposing an image through Fourier, DCT, or wavelet transforms is still a common approach in digital image processing, in a number of applications such as denoising. In this context, data-driven dictionaries, and in particular exploiting the redundancy within patches extracted from one or several images, have allowed important improvements. This paper proposes an original idea of constructing such an image-dependent basis inspired by the principles of quantum many-body physics. The similarity between two image patches is introduced in the formalism through a term akin to interaction terms in quantum mechanics. The main contribution of the paper is thus to introduce this original way of exploiting quantum many-body ideas in image processing, which opens interesting perspectives in image denoising. The potential of the proposed adaptive decomposition is illustrated through image denoising in the presence of additive white Gaussian noise, but the method can be used for other types of noise, such as image-dependent noise, as well. Finally, the results show that our method achieves comparable or slightly better results than existing approaches.
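As a rough sketch of the general idea only: pairwise patch similarities can play the role of interaction terms, and the eigenvectors of the resulting symmetric matrix give an image-adaptive basis. The Gaussian similarity, the parameter `sigma`, and the function name below are assumptions for illustration; the paper's actual operator and decomposition may differ.

```python
import numpy as np

def adaptive_patch_basis(patches, sigma=25.0):
    """Illustrative sketch: image-adaptive basis from patch 'interactions'.

    `patches` is an (n_patches, patch_dim) array of flattened patches.
    Pairwise similarities act like interaction terms; the eigenvectors
    of the resulting symmetric matrix form a basis adapted to the image
    content. Denoising would then keep only dominant components.
    """
    # Squared Euclidean distances between all pairs of patches.
    sq_norms = np.sum(patches**2, axis=1)
    d2 = sq_norms[:, None] + sq_norms[None, :] - 2.0 * patches @ patches.T
    # Gaussian similarity: strong 'interaction' between alike patches.
    interaction = np.exp(-np.maximum(d2, 0.0) / (2.0 * sigma**2))
    # Eigendecomposition of the symmetric interaction matrix.
    eigvals, eigvecs = np.linalg.eigh(interaction)
    return eigvals, eigvecs
```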
Blind image denoising is an important yet very challenging problem in computer vision due to the complicated acquisition process of real images. In this work we propose a new variational inference method, which integrates both noise estimation and image denoising into a unique Bayesian framework, for blind image denoising. Specifically, an approximate posterior, parameterized by deep neural networks, is presented by taking the intrinsic clean image and noise variances as latent variables conditioned on the input noisy image. This posterior provides explicit parametric forms for all its involved hyper-parameters, and thus can be easily implemented for blind image denoising with automatic noise estimation for the test noisy image. On one hand, as other data-driven deep learning methods, our method, namely variational denoising network (VDN), can perform denoising efficiently due to its explicit form of posterior expression. On the other hand, VDN inherits the advantages of traditional model-driven approaches, especially the good generalization capability of generative models. VDN has good interpretability and can be flexibly utilized to estimate and remove complicated non-i.i.d. noise collected in real scenarios. Comprehensive experiments are performed to substantiate the superiority of our method in blind image denoising.
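As a simplified illustration of the joint objective (not VDN's full variational bound, which also includes KL-divergence terms over the latent clean image and noise variances): a network with two output heads, a per-pixel clean-image estimate and a per-pixel log noise variance, can be trained with a Gaussian negative log-likelihood so that denoising and blind noise estimation share one loss. The function below is a hypothetical sketch under that assumption.

```python
import numpy as np

def gaussian_nll_loss(noisy, pred_clean, pred_log_var):
    """Illustrative per-pixel Gaussian negative log-likelihood.

    `pred_clean` and `pred_log_var` would come from a network's two
    output heads: a clean-image estimate and a log noise variance for
    every pixel. Minimizing this jointly trains denoising and blind
    noise estimation; VDN's actual objective is a fuller variational
    bound with additional KL terms.
    """
    residual_sq = (noisy - pred_clean) ** 2
    # NLL of a Gaussian, up to constants: 0.5 * (log s^2 + r^2 / s^2).
    nll = 0.5 * (pred_log_var + residual_sq * np.exp(-pred_log_var))
    return float(np.mean(nll))
```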