
Sparse Representations for Structured Noise Filtering

Published by: Laura Rebollo-Neira
Publication date: 2007
Language: English





The role of sparse representations in the context of structured noise filtering is discussed. A strategy, conceived especially to address problems of an ill-posed nature, is presented. The proposed approach revises and extends the Oblique Matching Pursuit technique. It is shown that, by working with an orthogonal projection of the signal to be filtered, it is possible to apply orthogonal-matching-pursuit-like strategies in order to accomplish the required signal discrimination.
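
As a rough illustration of the idea sketched in the abstract, the following minimal Python sketch (not the authors' implementation; the function and variable names are illustrative) first projects the observation onto the orthogonal complement of the structured-noise subspace and then runs an orthogonal-matching-pursuit-style greedy selection on the projected dictionary atoms:

```python
import numpy as np

def oblique_omp_filter(f, D, W, k):
    """Sketch: suppress structured noise lying in span(W), then pick k atoms of D."""
    # Orthogonal projector onto the complement of the noise subspace span(W).
    Q, _ = np.linalg.qr(W)
    P = np.eye(len(f)) - Q @ Q.T
    y = P @ f             # projected observation: the structured noise is annihilated
    Dp = P @ D            # projected dictionary atoms
    idx = []
    coef = np.zeros(0)
    r = y.copy()
    for _ in range(k):
        corr = np.abs(Dp.T @ r)
        corr[idx] = -np.inf                      # never re-select an atom
        idx.append(int(np.argmax(corr)))
        A = Dp[:, idx]
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        r = y - A @ coef                         # residual in the projected domain
    # Coefficients found in the projected domain, applied to the original atoms,
    # synthesize the noise-free component of the signal.
    return D[:, idx] @ coef, idx
```

Because the projector annihilates the noise subspace, the coefficients recovered from the projected atoms also synthesize the clean signal in terms of the original atoms, which is, loosely, the oblique-projection viewpoint referred to above.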


Read also

The performance of laser-based active sensing has been severely limited by two types of noise: electrical noise, stemming from electronic elements, and optical noise, namely laser jamming from an eavesdropper and background light from the environment. Conventional methods to filter optical noise exploit differences between signal and noise in time, wavelength, and polarization. However, they may fail when the noise and the signal carry the same information in these degrees of freedom (DoFs). To overcome this drawback, we experimentally demonstrate a noise-filtering method that controls orbital angular momentum (OAM) to distinguish signal from noise. We provide a proof-of-principle experiment and discuss how the signal-to-noise ratio (SNR) depends on the azimuthal index of the OAM and on the detection aperture. Our results suggest that using OAM against noise is an efficient method, offering a new route to optical sensing immersed in high-level noise.
Mapping and localization, preferably from a small number of observations, are fundamental tasks in robotics. We address these tasks by combining spatial structure (differentiable mapping) and end-to-end learning in a novel neural network architecture: the Differentiable Mapping Network (DMN). The DMN constructs a spatially structured view-embedding map and uses it for subsequent visual localization with a particle filter. Since the DMN architecture is end-to-end differentiable, we can jointly learn the map representation and localization using gradient descent. We apply the DMN to sparse visual localization, where a robot needs to localize in a new environment with respect to a small number of images from known viewpoints. We evaluate the DMN using simulated environments and a challenging real-world Street View dataset. We find that the DMN learns effective map representations for visual localization. The benefit of spatial structure increases with larger environments, more viewpoints for mapping, and when training data is scarce. Project website: http://sites.google.com/view/differentiable-mapping
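
To make the localization step above concrete, here is a minimal, heavily simplified sketch of one particle-filter measurement update; score_fn and all other names are placeholders standing in for the DMN's learned components, not the actual architecture:

```python
import numpy as np

def particle_filter_update(particles, log_w, obs_emb, map_embs, map_poses, score_fn):
    """One measurement update of a particle filter over candidate poses.

    particles: (K, 3) array of (x, y, theta) pose hypotheses
    log_w:     (K,) log particle weights
    obs_emb:   embedding of the current observation
    map_embs, map_poses: view embeddings of the map and their known viewpoints
    score_fn:  learned compatibility score (a stand-in for the DMN's network)
    """
    scores = np.array([score_fn(obs_emb, map_embs, map_poses, p) for p in particles])
    log_w = log_w + scores
    log_w = log_w - np.log(np.exp(log_w).sum())   # renormalize in log space
    w = np.exp(log_w)
    pose = w @ particles   # weighted mean pose (angles need circular averaging in practice)
    return log_w, pose
```

Since the weighting, normalization, and weighted averaging are all differentiable, a loss on the pose estimate can be backpropagated into the embedding and scoring networks, which is what makes joint training of the map representation and the localizer possible.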
We study the problem of recursively recovering a time sequence of sparse vectors, St, from measurements Mt := St + Lt that are corrupted by structured noise Lt, which is dense and can have large magnitude. The structure we require is that Lt should lie in a low-dimensional subspace that is either fixed or changes slowly enough, and that the eigenvalues of its covariance matrix are clustered. We do not assume any model on the sequence of sparse vectors. Their support sets and their nonzero element values may be either independent or correlated over time (in many applications they are correlated). The only requirement is that there be some support change every so often. We introduce a novel solution approach called Recursive Projected Compressive Sensing with cluster-PCA (ReProCS-cPCA) that addresses some of the limitations of earlier work. Under mild assumptions, we show that, with high probability, ReProCS-cPCA can exactly recover the support set of St at all times, and that the reconstruction errors of both St and Lt are upper bounded by a small, time-invariant value.
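
The core "projected compressive sensing" step can be illustrated with a short, hypothetical sketch (the names and the sparse solver are placeholders; the cluster-PCA subspace update of ReProCS-cPCA is not shown):

```python
import numpy as np

def reprocs_project_and_recover(m_t, P_hat, sparse_solver):
    """One ReProCS-style step: project away the structured noise, then recover s_t.

    m_t:           measurement vector, m_t = s_t + l_t
    P_hat:         orthonormal basis (columns) for the current estimate of the
                   low-dimensional subspace containing l_t
    sparse_solver: any sparse-recovery routine (e.g. l1 minimization or OMP)
                   taking a sensing matrix and a measurement vector
    """
    Phi = np.eye(len(m_t)) - P_hat @ P_hat.T   # projector orthogonal to span(P_hat)
    y = Phi @ m_t                              # y ≈ Phi @ s_t, since Phi @ l_t ≈ 0
    s_hat = sparse_solver(Phi, y)              # compressive-sensing recovery step
    l_hat = m_t - s_hat                        # estimate of the dense corruption
    return s_hat, l_hat
```

The basis estimate P_hat is refreshed every so often from recent estimates of Lt, exploiting the assumptions that the subspace changes slowly and that the covariance eigenvalues are clustered.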
Deep latent-variable models learn representations of high-dimensional data in an unsupervised manner. A number of recent efforts have focused on learning representations that disentangle statistically independent axes of variation by introducing modifications to the standard objective function. These approaches generally assume a simple diagonal Gaussian prior and, as a result, are not able to reliably disentangle discrete factors of variation. We propose a two-level hierarchical objective to control the relative degree of statistical independence between blocks of variables and between individual variables within blocks. We derive this objective as a generalization of the evidence lower bound, which allows us to explicitly represent the trade-offs between mutual information between data and representation, KL divergence between representation and prior, and coverage of the support of the empirical data distribution. Experiments on a variety of datasets demonstrate that our objective can not only disentangle discrete variables, but that doing so also improves disentanglement of other variables and, importantly, generalization even to unseen combinations of factors.
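
For context, a standard single-level decomposition of the averaged evidence lower bound (the "ELBO surgery" identity) already separates a mutual-information term from a KL term between the aggregate posterior and the prior; the two-level hierarchical objective described above generalizes a decomposition of this kind and is not reproduced here:

\[
\frac{1}{N}\sum_{n=1}^{N}\mathrm{ELBO}(x_n)
= \mathbb{E}_{q(x,z)}\!\left[\log p(x\mid z)\right]
- I_q(x;z)
- \mathrm{KL}\!\left(q(z)\,\|\,p(z)\right),
\qquad
q(z)=\frac{1}{N}\sum_{n=1}^{N} q(z\mid x_n).
\]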
A parallel and nested version of a frequency-filtering preconditioner is proposed for linear systems corresponding to the diffusion equation on a structured grid. The proposed preconditioner is found to be robust with respect to jumps in the diffusion coefficients. The storage requirement for the preconditioner is O(N), where N is the number of rows of the matrix; hence, a fairly large problem with more than 42 million unknowns has been solved on a quad-core machine with 64 GB of RAM. The parallelism is achieved using twisted factorization and SIMD operations. The preconditioner achieves a speedup of 3.3 times on a quad-core processor clocked at 4.2 GHz, and, compared to a well-known algebraic multigrid method, it is significantly faster in both setup and solve times for diffusion equations with jumps.