
Polyblur: Removing mild blur by polynomial reblurring

Posted by Mauricio Delbracio
Publication date: 2020
Paper language: English





We present a highly efficient blind restoration method to remove mild blur in natural images. Contrary to the mainstream, we focus on removing slight blur that is often present, damaging image quality, and commonly caused by small defocus, lens blur, or slight camera motion. The proposed algorithm first estimates the image blur and then compensates for it by combining multiple applications of the estimated blur in a principled way. To estimate blur, we introduce a simple yet robust algorithm based on empirical observations about the distribution of gradients in sharp natural images. Our experiments show that, in the context of mild blur, the proposed method outperforms traditional and modern blind deblurring methods while running in a fraction of the time. Our method can also blindly correct blur before applying off-the-shelf deep super-resolution methods, yielding results superior to those of far more complex and computationally demanding techniques. The proposed method estimates and removes mild blur from a 12MP image on a modern mobile phone in a fraction of a second.
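To make the two-step idea concrete, here is a minimal sketch, not the paper's exact algorithm: blur is modeled as a single isotropic Gaussian whose width is guessed from the maximal image gradient (the constant c and the clipping range are illustrative assumptions; the paper fits an anisotropic Gaussian from directional gradient statistics), and deblurring applies a truncated Neumann-series polynomial of the estimated blur, so only blurring operations are ever performed.

    # Minimal sketch of the two-step idea above (NOT the paper's exact algorithm):
    # (1) guess an isotropic Gaussian blur level from the maximal image gradient,
    # (2) sharpen with a truncated Neumann-series polynomial of the estimated blur.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def estimate_sigma(y, c=0.3):
        """Toy blur estimate for y in [0, 1]: sharp images have large max gradients.

        The constant c and the clipping range are illustrative assumptions.
        """
        gy, gx = np.gradient(y)
        gmax = np.max(np.hypot(gx, gy)) + 1e-8
        return float(np.clip(c / gmax, 0.3, 4.0))  # stay in the mild-blur regime

    def polyblur_deblur(y, sigma):
        """Approximate K^{-1} y by (3I - 3K + K^2) y, a degree-2 Neumann truncation."""
        k = lambda img: gaussian_filter(img, sigma)
        ky = k(y)
        return np.clip(3.0 * y - 3.0 * ky + k(ky), 0.0, 1.0)

    # Usage: x_hat = polyblur_deblur(y, estimate_sigma(y)) for a grayscale y in [0, 1].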




Read also

Reproducing an all-in-focus image from an image with defocused regions is of practical value in many applications, e.g., digital photography and robotics. Using the output of an existing defocus map estimator, existing approaches first segment a defocused image into multiple regions, each blurred by a Gaussian kernel with a different variance, and then deblur each region using the corresponding Gaussian kernel. In this paper, we propose a blind deconvolution method specifically designed for removing defocus blur from an image, by providing effective solutions to two critical problems: 1) suppressing the artifacts caused by segmentation error, by introducing an additional variable regularized by a weighted $\ell_0$-norm; and 2) more accurate defocus kernel estimation, using non-parametric symmetry and low-rank constraints on the kernel. Experiments on real datasets show the advantages of the proposed method over existing ones, thanks to its effective treatment of the two issues above during deconvolution.
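As a concrete reference point, the following is a minimal sketch of the baseline pipeline this abstract describes (segment, then deblur each region with its own Gaussian kernel), using plain frequency-domain Wiener deconvolution. The labels region map, the sigmas dictionary, and the regularization weight lam are assumed inputs; the paper's weighted $\ell_0$ regularizer and symmetry/low-rank kernel constraints are not reproduced here.

    # Minimal sketch of the baseline pipeline: per-region Wiener deconvolution.
    import numpy as np

    def gaussian_psf(shape, sigma):
        """Normalized Gaussian PSF, shifted so its peak sits at the FFT origin."""
        h, w = shape
        yy, xx = np.mgrid[:h, :w]
        psf = np.exp(-((yy - h // 2) ** 2 + (xx - w // 2) ** 2) / (2.0 * sigma ** 2))
        return np.fft.ifftshift(psf / psf.sum())

    def wiener_deblur(y, sigma, lam=1e-2):
        """Wiener filter for a known Gaussian blur level (lam damps noise blow-up)."""
        H = np.fft.fft2(gaussian_psf(y.shape, sigma))
        Y = np.fft.fft2(y)
        return np.real(np.fft.ifft2(np.conj(H) * Y / (np.abs(H) ** 2 + lam)))

    def regionwise_deblur(y, labels, sigmas, lam=1e-2):
        """Deblur each segmented region with its own estimated Gaussian width.

        labels: integer region map of y's shape; sigmas: {region_label: sigma}.
        """
        out = np.zeros_like(y)
        for region, sigma in sigmas.items():
            mask = labels == region
            out[mask] = wiener_deblur(y, sigma, lam)[mask]
        return out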
In this paper, we propose a novel method for mild cognitive impairment detection based on jointly exploiting the complex-network and neural-network paradigms. In particular, the method ensembles different structural perspectives on the brain with artificial neural networks. On one hand, these perspectives are obtained with complex-network measures tailored to describe altered brain connectivity, where the brain reconstruction itself is obtained by feeding diffusion-weighted imaging (DWI) data to tractography algorithms. On the other hand, artificial neural networks provide a means to learn a mapping from topological properties of the brain to the presence or absence of cognitive decline. The effectiveness of the method is studied on a well-known benchmark data set, to evaluate whether it can provide an automatic tool to support early disease diagnosis. The effects of class-balancing issues are also investigated, to further assess the reliability of the complex-network approach to DWI data.
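The ensembling idea lends itself to a compact sketch: topological measures computed from a connectivity graph feed a small neural classifier. The adjacency matrix below is a placeholder for a tractography-derived connectome, and the chosen measures and classifier size are illustrative assumptions; DWI reconstruction, tractography, and the paper's balancing experiments are out of scope.

    # Minimal sketch: graph measures from a connectome feed a small classifier.
    import networkx as nx
    import numpy as np
    from sklearn.neural_network import MLPClassifier

    def topological_features(adjacency):
        """A few complex-network measures describing brain connectivity."""
        g = nx.from_numpy_array(adjacency)
        return [nx.density(g),
                nx.average_clustering(g, weight="weight"),
                nx.global_efficiency(g)]

    # X: one feature row per subject's connectome; y: 1 = cognitive decline.
    # X = np.array([topological_features(a) for a in adjacency_matrices])
    # clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000).fit(X, y)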
Camera motion deblurring is an important low-level vision task for achieving better imaging quality. When a scene contains outliers such as saturated pixels, the captured blurred image becomes more difficult to restore. In this paper, we propose a novel method to handle camera motion blur with outliers. We first propose an edge-aware scale-recurrent network (EASRN) to conduct deblurring. EASRN has a separate deblurring module that removes blur at multiple scales and an upsampling module that fuses different input scales. A salient edge detection network is then proposed to supervise the training process and constrain edge restoration. By simulating camera motion and adding various light sources, we can generate blurred images with saturation cutoff; using this data generation method, our network learns to deal with outliers effectively. We evaluate our method on public test datasets including the GoPro, Köhler, and Lai datasets. Both objective evaluation indexes and subjective visualization show that our method achieves better deblurring quality than other state-of-the-art approaches.
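The scale-recurrent idea can be sketched independently of the specific architecture: deblur from coarse to fine, feeding each scale's result, upsampled, into the next scale. The DeblurNet module below is a placeholder; EASRN's edge supervision and upsampling-fusion modules are not reproduced here.

    # Minimal sketch of coarse-to-fine, scale-recurrent deblurring.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class DeblurNet(nn.Module):
        """Placeholder per-scale module: predicts a residual from (blurred, previous)."""
        def __init__(self, ch=32):
            super().__init__()
            self.body = nn.Sequential(nn.Conv2d(6, ch, 3, padding=1), nn.ReLU(),
                                      nn.Conv2d(ch, 3, 3, padding=1))

        def forward(self, blurred, prev):
            return blurred + self.body(torch.cat([blurred, prev], dim=1))

    def scale_recurrent_deblur(net, y, scales=(0.25, 0.5, 1.0)):
        """y: (B, 3, H, W) blurred batch; returns the full-resolution estimate."""
        prev = F.interpolate(y, scale_factor=scales[0], mode="bilinear",
                             align_corners=False)
        for s in scales:
            ys = F.interpolate(y, scale_factor=s, mode="bilinear",
                               align_corners=False)
            prev = F.interpolate(prev, size=ys.shape[-2:], mode="bilinear",
                                 align_corners=False)
            prev = net(ys, prev)
        return prev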
Recent development of Under-Display Camera (UDC) systems provides a true bezel-less and notch-free viewing experience on smartphones (and TVs, laptops, tablets), while allowing images to be captured from the selfie camera embedded underneath. In a typical UDC system, the microstructure of the semi-transparent organic light-emitting diode (OLED) pixel array attenuates and diffracts the incident light on the camera, resulting in significant image quality degradation: noise, flare, haze, and blur can often be observed in UDC images. In this work, we aim to analyze and tackle these degradation problems. We define a physics-based image formation model to better understand the degradation. In addition, we use one of the world's first commodity UDC smartphone prototypes to measure the real-world Point Spread Function (PSF) of the UDC system, and provide a model-based data synthesis pipeline to generate realistically degraded images. We specially design a new domain-knowledge-enabled Dynamic Skip Connection Network (DISCNet) to restore UDC images, and we demonstrate its effectiveness through extensive experiments on both synthetic and real UDC data. Our physics-based image formation model and the proposed DISCNet can provide foundations for further exploration in UDC image restoration, and even for general diffraction artifact removal in a broader sense.
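The model-based data synthesis step reduces to a short routine: convolve a clean image with the measured PSF and add sensor noise. The psf array below stands in for the measured UDC PSF, and the Gaussian noise model is a simplification; the flare and haze terms of the paper's formation model are omitted.

    # Minimal sketch of PSF-based degradation synthesis for training data.
    import numpy as np
    from scipy.signal import fftconvolve

    def synthesize_udc(clean, psf, noise_std=0.01, rng=None):
        """clean: HxW float image in [0, 1]; psf: measured point spread function."""
        rng = np.random.default_rng() if rng is None else rng
        blurred = fftconvolve(clean, psf / psf.sum(), mode="same")  # diffraction blur
        noisy = blurred + rng.normal(0.0, noise_std, clean.shape)   # sensor noise proxy
        return np.clip(noisy, 0.0, 1.0)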
Tie Liu, Mai Xu, Zulin Wang (2019)
Rain removal has recently attracted increasing research attention, as it can enhance the visibility of rain videos. However, existing learning-based rain removal approaches for videos suffer from insufficient training data, especially when deep learning is applied. In this paper, we establish a large-scale video database for rain removal (LasVR), which consists of 316 rain videos. We observe from our database that clean content is temporally correlated and that rain exhibits similar patterns across video frames. Based on these two observations, we propose a two-stream convolutional long- and short-term memory (ConvLSTM) approach for rain removal in videos. The first stream is a subnet for rain detection, while the second stream is a rain removal subnet that leverages features from the rain detection subnet. Experimental results on both synthetic and real rain videos show that the proposed approach performs better than other state-of-the-art approaches.
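A two-stream arrangement with a ConvLSTM backbone can be sketched as follows: a detection stream flags rain, its features condition a removal stream, and recurrent state is carried across frames. The channel widths, single-cell depth, and residual output head are illustrative choices, not the paper's network.

    # Minimal sketch of a two-stream ConvLSTM for video rain removal.
    import torch
    import torch.nn as nn

    class ConvLSTMCell(nn.Module):
        """Standard convolutional LSTM cell (all four gates from one convolution)."""
        def __init__(self, in_ch, hid_ch, k=3):
            super().__init__()
            self.gates = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, k, padding=k // 2)

        def forward(self, x, h, c):
            i, f, o, g = torch.chunk(self.gates(torch.cat([x, h], dim=1)), 4, dim=1)
            c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
            h = torch.sigmoid(o) * torch.tanh(c)
            return h, c

    class TwoStreamRainRemoval(nn.Module):
        def __init__(self, ch=16):
            super().__init__()
            self.ch = ch
            self.detect = ConvLSTMCell(3, ch)       # stream 1: where is the rain?
            self.remove = ConvLSTMCell(3 + ch, ch)  # stream 2: frame + rain features
            self.out = nn.Conv2d(ch, 3, 3, padding=1)

        def forward(self, frames):                  # frames: (B, T, 3, H, W)
            b, t, _, hgt, wid = frames.shape
            zeros = lambda: frames.new_zeros(b, self.ch, hgt, wid)
            hd, cd, hr, cr = zeros(), zeros(), zeros(), zeros()
            outs = []
            for i in range(t):
                x = frames[:, i]
                hd, cd = self.detect(x, hd, cd)                          # rain features
                hr, cr = self.remove(torch.cat([x, hd], dim=1), hr, cr)
                outs.append(x - self.out(hr))                            # subtract rain
            return torch.stack(outs, dim=1)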