
Multi-Kernel Filtering for Nonstationary Noise: An Extension of Bilateral Filtering Using Image Context

Published by Feihong Liu
Publication date: 2019
Research field: Information Engineering
Paper language: English





Bilateral filtering (BF) is one of the most classical denoising filters; however, its manually initialized filtering kernel hampers its adaptivity across images with varying characteristics. To deal with such image variation (i.e., non-stationary noise), we propose the multi-kernel filter (MKF), which automatically adapts its filtering kernels to the characteristics of a specific image. The design of MKF is inspired by the adaptive mechanisms of human vision, which make full use of the information in a visual context. To simulate the visual context and its adaptive function, we construct an image context and model its impact on the filtering kernels. Specifically, we first design a hierarchical clustering algorithm that generates a hierarchy of large-to-small coherent image patches, organized as a cluster tree; this multi-scale image representation serves as the image context. Each leaf cluster is then used to generate one of the multiple range kernels, and its two predecessor clusters are used to fine-tune the adopted kernel so that it caters to image variation. Ultimately, the single spatially invariant kernel of BF becomes multiple spatially varying ones. We evaluate MKF on two public datasets, BSD300 and BrainWeb, to which integrally varying noise and spatially varying noise are added, respectively. Extensive experiments show that MKF outperforms state-of-the-art filters in terms of both mean absolute error and structural similarity.
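Since the abstract leaves the construction details to the paper, the following is only a rough Python sketch of the multi-kernel idea: pixels are grouped by a hierarchical clustering step, each leaf cluster receives its own range-kernel bandwidth, and the bilateral filter is then applied with a spatially varying range sigma. The clustering of raw intensities, the bandwidth heuristic, and all parameter values are illustrative assumptions, not the authors' MKF.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def multi_kernel_bilateral(img, radius=3, sigma_s=2.0, n_leaves=8):
    """Bilateral filter whose range-kernel width varies per pixel,
    driven by a hierarchical clustering of image intensities."""
    f = img.astype(np.float64)
    h, w = f.shape

    # 1) Build a cluster tree (stand-in for the patch-based hierarchy in the
    #    paper) from a subsample of intensities, then cut it into leaf clusters.
    samples = f.reshape(-1, 1)[::max(1, f.size // 2000)]
    labels = fcluster(linkage(samples, method="ward"),
                      t=n_leaves, criterion="maxclust")

    # 2) One range bandwidth per leaf cluster, proportional to the cluster's
    #    intensity spread (an assumed heuristic).
    leaf_ids = np.unique(labels)
    centers = np.array([samples[labels == k].mean() for k in leaf_ids])
    sigmas_r = np.array([samples[labels == k].std() + 1e-3 for k in leaf_ids])
    sigma_r_map = sigmas_r[np.abs(f[..., None] - centers).argmin(-1)]

    # 3) Bilateral filtering with the spatially varying range kernel.
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs ** 2 + ys ** 2) / (2 * sigma_s ** 2))
    pad = np.pad(f, radius, mode="reflect")
    out = np.zeros_like(f)
    norm = np.zeros_like(f)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            shifted = pad[radius + dy:radius + dy + h, radius + dx:radius + dx + w]
            wgt = spatial[dy + radius, dx + radius] * \
                  np.exp(-(shifted - f) ** 2 / (2 * sigma_r_map ** 2))
            out += wgt * shifted
            norm += wgt
    return out / norm
```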




Read also

It is well-known that spatial averaging can be realized (in the space or frequency domain) using algorithms whose complexity does not depend on the size or shape of the filter. These fast algorithms are generally referred to as constant-time or O(1) algorithms in the image processing literature. Along with the spatial filter, the edge-preserving bilateral filter [Tomasi1998] involves an additional range kernel, which is used to restrict the averaging to those neighborhood pixels whose intensities are similar or close to that of the pixel of interest. The range kernel operates by acting on the pixel intensities. This makes the averaging process non-linear and computationally intensive, especially when the spatial filter is large. In this paper, we show how the O(1) averaging algorithms can be leveraged to realize the bilateral filter in constant time by using trigonometric range kernels. This is done by generalizing the idea in [Porikli2008] of using polynomial range kernels. The class of trigonometric kernels turns out to be sufficiently rich, allowing for the approximation of the standard Gaussian bilateral filter. The attractive feature of our approach is that, for a fixed number of terms, the quality of approximation achieved using trigonometric kernels is much superior to that obtained in [Porikli2008] using polynomials.
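As a rough illustration of the shiftable-kernel idea (with my own parameter choices, not the paper's), the Gaussian range kernel can be replaced by a raised cosine whose binomial expansion into complex exponentials turns the bilateral filter into a fixed number of ordinary spatial Gaussian convolutions, each computable in constant time per pixel:

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.special import comb

def trig_bilateral(img, sigma_s=3.0, sigma_r=30.0, N=32):
    """Bilateral filter with a raised-cosine (trigonometric) range kernel:
    cos(b*t)**N ~ exp(-t**2 / (2*sigma_r**2)) for b = 1/(sigma_r*sqrt(N)),
    valid when N is large enough that |b*t| <= pi/2 over the intensity range."""
    f = img.astype(np.float64)
    b = 1.0 / (sigma_r * np.sqrt(N))
    num = np.zeros(f.shape, dtype=np.complex128)
    den = np.zeros(f.shape, dtype=np.complex128)
    for k in range(N + 1):
        c_k = (2 * k - N) * b            # frequency of the k-th exponential term
        w_k = comb(N, k) / 2.0 ** N      # binomial weight
        phase = np.exp(1j * c_k * f)
        # each term is a pair of ordinary Gaussian convolutions (O(1) per pixel)
        conv_f = (gaussian_filter((phase * f).real, sigma_s)
                  + 1j * gaussian_filter((phase * f).imag, sigma_s))
        conv_1 = (gaussian_filter(phase.real, sigma_s)
                  + 1j * gaussian_filter(phase.imag, sigma_s))
        num += w_k * np.conj(phase) * conv_f
        den += w_k * np.conj(phase) * conv_1
    return num.real / np.maximum(den.real, 1e-8)
```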
Image-based localization (IBL) aims to estimate the 6DOF camera pose for a given query image. The camera pose can be computed from 2D-3D matches between a query image and Structure-from-Motion (SfM) models. Despite recent advances in IBL, it remains difficult to simultaneously resolve the memory consumption and match ambiguity problems of large SfM models. In this work, we propose a cascaded parallel filtering method that leverages feature, visibility, and geometry information to filter wrong matches under a binary feature representation. The core idea is that we divide the challenging filtering task into two parallel tasks before deriving an auxiliary camera pose for final filtering. One task focuses on preserving potentially correct matches, while the other focuses on obtaining high-quality matches to facilitate subsequent, more powerful filtering. Moreover, our proposed method improves the localization accuracy by introducing a quality-aware spatial reconfiguration method and a principal-focal-length-enhanced pose estimation method. Experimental results on real-world datasets demonstrate that our method achieves very competitive localization performance in a memory-efficient manner.
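The cascade can be pictured with a loose sketch like the one below (Python with OpenCV). The two filter predicates, the score thresholds, and the reprojection test are placeholders standing in for the paper's feature/visibility/geometry filters and its enhanced pose estimation:

```python
import numpy as np
import cv2

def cascaded_parallel_filter(pts2d, pts3d, scores, K,
                             strict_thr=0.9, loose_thr=0.5, reproj_px=8.0):
    """Keep 2D-3D matches in two parallel passes, then filter with an
    auxiliary camera pose estimated from the high-quality pass.
    pts2d: (N, 2) float, pts3d: (N, 3) float, K: 3x3 intrinsics."""
    strict = scores >= strict_thr          # high-quality matches
    loose = scores >= loose_thr            # potentially correct matches
    ok, rvec, tvec, _ = cv2.solvePnPRansac(pts3d[strict], pts2d[strict], K, None)
    if not ok:
        return loose
    # final filtering: keep loose matches consistent with the auxiliary pose
    proj, _ = cv2.projectPoints(pts3d[loose], rvec, tvec, K, None)
    err = np.linalg.norm(proj.reshape(-1, 2) - pts2d[loose], axis=1)
    keep = np.zeros_like(loose)
    keep[np.flatnonzero(loose)[err < reproj_px]] = True
    return keep
```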
The role of sparse representations in the context of structured noise filtering is discussed. A strategy, especially conceived to address problems of an ill-posed nature, is presented. The proposed approach revises and extends the Oblique Matching Pursuit technique. It is shown that, by working with an orthogonal projection of the signal to be filtered, it is possible to apply orthogonal-matching-pursuit-like strategies in order to accomplish the required signal discrimination.
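A minimal numerical reading of that strategy (with invented names and a fixed sparsity level, since the abstract gives no specifics): project the signal onto the orthogonal complement of the structured-noise subspace and run plain orthogonal matching pursuit with the equally projected dictionary.

```python
import numpy as np

def complement_projector(noise_basis):
    """Projector onto the orthogonal complement of span(noise_basis)."""
    Q, _ = np.linalg.qr(noise_basis)
    return np.eye(noise_basis.shape[0]) - Q @ Q.T

def omp(y, D, n_atoms):
    """Plain orthogonal matching pursuit: select n_atoms columns of D for y."""
    residual, support, coef = y.copy(), [], None
    for _ in range(n_atoms):
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        A = D[:, support]
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        residual = y - A @ coef
    return support, coef

# usage sketch: P = complement_projector(noise_basis)
#               support, coef = omp(P @ signal, P @ dictionary, n_atoms=5)
```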
Image inpainting aims to restore the missing regions and make the recovered result identical to the originally complete image, which differs from the common generative task that emphasizes the naturalness of the generated images. Nevertheless, existing works usually regard it as a pure generation problem and employ cutting-edge generative techniques to address it. The generative networks fill the main missing parts with realistic content but usually distort the local structures. In this paper, we formulate image inpainting as a mix of two problems, i.e., predictive filtering and deep generation. Predictive filtering is good at preserving local structures and removing artifacts but falls short of completing large missing regions. The deep generative network can fill the numerous missing pixels based on its understanding of the whole scene but can hardly restore details identical to the original ones. To make use of their respective advantages, we propose the joint predictive filtering and generative network (JPGNet), which contains three branches: a predictive filtering & uncertainty network (PFUNet), a deep generative network, and an uncertainty-aware fusion network (UAFNet). The PFUNet adaptively predicts pixel-wise kernels for filtering-based inpainting according to the input image and outputs an uncertainty map. This map indicates which pixels should be processed by the filtering branch and which by the generative branch, and it is fed to UAFNet for a smart combination of the filtering and generative results. Note that our method, as a novel framework for the image inpainting problem, can benefit any existing generation-based method. We validate our method on three public datasets, i.e., Dunhuang, Places2, and CelebA, and demonstrate that it can enhance three state-of-the-art generative methods (i.e., StructFlow, EdgeConnect, and RFRNet) significantly with only a slight extra time cost.
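A toy PyTorch sketch of the two mechanisms named above may help fix the idea: per-pixel predictive filtering applies a small kernel predicted for every pixel, and the uncertainty map then blends the filtered and generated results pixel by pixel. The tensor layout, the softmax normalization, and the sigmoid blending rule are my assumptions, not the actual PFUNet/UAFNet definitions.

```python
import torch
import torch.nn.functional as F

def apply_predicted_kernels(img, kernels, k=3):
    """Filter each pixel with its own predicted k x k kernel.
    img: (B, C, H, W); kernels: (B, k*k, H, W), one kernel per pixel."""
    b, c, h, w = img.shape
    patches = F.unfold(img, k, padding=k // 2).view(b, c, k * k, h * w)
    weights = F.softmax(kernels, dim=1).view(b, 1, k * k, h * w)
    return (patches * weights).sum(dim=2).view(b, c, h, w)

def uncertainty_fusion(filtered, generated, uncertainty):
    """Blend per pixel: high uncertainty -> trust the generative branch more."""
    alpha = torch.sigmoid(uncertainty)       # (B, 1, H, W), values in [0, 1]
    return (1 - alpha) * filtered + alpha * generated
```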
We consider the case of highly noisy data coming from two different antennas, each data set containing a damped signal with the same frequency and decay factor but a different amplitude, phase, starting point, and noise. Formally, we treat the first data set as real numbers and the second one as purely imaginary, and we add them together. This complex data set is analyzed using Padé approximations applied to its Z-transform. Complex-conjugate poles are representative of the signal; other poles represent the noise, and this property makes it possible to identify the signal even in strong noise. The product of the residues of the complex-conjugate poles is related to the relative phase of the signal in the two channels and is purely imaginary when the signal amplitudes are equal. Examples are presented on the detection of a fabricated gravitational wave burst received by two antennas in the presence of either white or highly colored noise.
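A compact sketch of the pole analysis (the orders, tolerances, and the linear-prediction route to the Padé denominator are my choices for illustration): combine the two channels into one complex series, fit the denominator of a diagonal Padé approximant of its Z-transform, and keep only the poles whose complex conjugate is also (approximately) present, since those are attributed to the common signal.

```python
import numpy as np

def pade_denominator_roots(c, L):
    """Roots of the denominator of the [L-1/L] Pade approximant of the
    series sum_n c[n] * u**n (u = 1/z for a Z-transform); needs len(c) >= 2*L."""
    # The denominator coefficients b solve a linear system built from the
    # series coefficients (the classical Pade / linear-prediction equations).
    A = np.array([[c[L + i - j - 1] for j in range(L)] for i in range(L)])
    rhs = -np.array([c[L + i] for i in range(L)])
    b, *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return np.roots(np.concatenate((b[::-1], [1.0])))  # roots of 1 + b1*u + ... + bL*u**L

def conjugate_paired(poles, tol=1e-2):
    """Poles whose complex conjugate is also (approximately) among the poles."""
    return [p for i, p in enumerate(poles)
            if any(j != i and abs(p - np.conj(q)) < tol for j, q in enumerate(poles))]

# usage sketch: z = channel_1 + 1j * channel_2
#               signal_poles = conjugate_paired(pade_denominator_roots(z, L=8))
```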