
Sparse recovery of noisy data using the Lasso method

Posted by: Eitan Tadmor
Publication date: 2020
Research field: Information Engineering
Paper language: English





We present a detailed analysis of the unconstrained $\ell_1$-regularized Lasso method for sparse recovery of noisy data. The data is recovered from its compressed output, produced by a randomly generated class of sensing matrices satisfying a Restricted Isometry Property. We derive a new $\ell_1$-error estimate which highlights the dependence on a certain compressibility threshold: once the computed re-scaled residual crosses that threshold, the error is driven only by the (assumed small) noise and compressibility. Here we identify the re-scaled residual as the key quantity driving the error, and we derive a sharp lower bound on it of order equal to the square root of the size of the support of the computed solution.
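As a rough illustration of the setup in this abstract (a minimal sketch, not the paper's own code: the problem sizes, the regularization weight, and the `ista_lasso` helper are all assumptions), the unconstrained Lasso can be solved by iterative soft thresholding on measurements taken with a Gaussian sensing matrix, which satisfies a RIP with high probability:

```python
import numpy as np

def ista_lasso(A, y, lam, n_iter=500):
    """Solve the unconstrained Lasso, min 0.5*||Ax - y||^2 + lam*||x||_1,
    by iterative soft thresholding (ISTA)."""
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = A.T @ (A @ x - y)                  # gradient of the smooth part
        z = x - g / L                          # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x

rng = np.random.default_rng(0)
n, N, k = 60, 200, 5                           # measurements, dimension, sparsity
A = rng.standard_normal((n, N)) / np.sqrt(n)   # Gaussian sensing matrix (RIP w.h.p.)
x_true = np.zeros(N)
support = rng.choice(N, k, replace=False)
x_true[support] = rng.choice([-1.0, 1.0], k) * rng.uniform(1.0, 3.0, k)
y = A @ x_true + 0.01 * rng.standard_normal(n) # compressed, noisy measurements

x_hat = ista_lasso(A, y, lam=0.05)
print(sorted(np.flatnonzero(np.abs(x_hat) > 0.3)), sorted(support))
```

With noise much smaller than the nonzero entries, the support of the Lasso solution typically matches the true support, and the residual estimate only carries the Lasso's small shrinkage bias.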




Read also

95 - Wei Zhang, Taejoon Kim, 2021
The simultaneous orthogonal matching pursuit (SOMP) is a popular greedy approach for common support recovery of a row-sparse matrix. The support recovery guarantee of SOMP has been extensively studied under the noiseless scenario. Compared to the noiseless scenario, the performance analysis of noisy SOMP is still nascent, in which only the restricted isometry property (RIP)-based analysis has been studied. In this paper, we present a mutual incoherence property (MIP)-based study of the performance of noisy SOMP. Specifically, when the noise is bounded, we provide the condition under which exact support recovery is guaranteed in terms of the MIP. When the noise is unbounded, we instead derive a bound on the successful recovery probability (SRP) that depends on the specific distribution of the noise. We then focus on the common case of random Gaussian noise and show that the lower bound of the SRP follows a Tracy-Widom distribution. The analysis reveals the number of measurements, the noise level, the number of sparse vectors, and the value of the MIP constant required to guarantee a predefined recovery performance. Theoretically, we show that the MIP constant of the measurement matrix must increase in proportion to the noise standard deviation, and the number of sparse vectors must grow in proportion to the noise variance. Finally, we extensively validate the derived analysis through numerical simulations.
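The SOMP loop this abstract analyzes can be sketched in a few lines (a hypothetical `somp` helper under assumed problem sizes and bounded noise, not the paper's implementation): at each step, pick the column of the sampling matrix most correlated with the residual matrix, then refit on the selected support.

```python
import numpy as np

def somp(A, Y, k):
    """Simultaneous OMP: recover the common support of the columns of X
    from Y = A @ X (+ noise), each column k-sparse on the same support."""
    support = []
    R = Y.copy()
    for _ in range(k):
        # greedy rule: column of A most correlated with the residual matrix
        scores = np.linalg.norm(A.T @ R, axis=1)
        scores[support] = -np.inf                 # never pick an index twice
        support.append(int(np.argmax(scores)))
        # refit on the current support and update the residual
        A_S = A[:, support]
        R = Y - A_S @ np.linalg.lstsq(A_S, Y, rcond=None)[0]
    return sorted(support)

rng = np.random.default_rng(1)
n, N, T, k = 40, 120, 8, 4                        # measurements, dim, signals, sparsity
A = rng.standard_normal((n, N)) / np.sqrt(n)
true_support = sorted(rng.choice(N, k, replace=False).tolist())
X = np.zeros((N, T))
X[true_support, :] = rng.choice([-1.0, 1.0], (k, T)) * rng.uniform(1.0, 2.0, (k, T))
Y = A @ X + 0.01 * rng.standard_normal((n, T))    # bounded measurement noise

print(somp(A, Y, k), true_support)
```

The conditions in the abstract (MIP or RIP) are exactly what guarantees that this greedy selection never picks an index outside the true support.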
In this paper, we investigate in a unified way the structural properties of solutions to inverse problems. These solutions are regularized by the generic class of semi-norms defined as a decomposable norm composed with a linear operator, the so-called analysis-type decomposable prior. This encompasses several well-known analysis-type regularizations such as the discrete total variation (in any dimension), the analysis group-Lasso, and the nuclear norm. Our main results establish sufficient conditions under which uniqueness and stability to bounded noise of the regularized solution are guaranteed. Along the way, we also provide a strong sufficient uniqueness result that is of independent interest and goes beyond the case of decomposable norms.
In this paper, we put forth a new joint sparse recovery algorithm called signal space matching pursuit (SSMP). The key idea of the proposed SSMP algorithm is to sequentially investigate the support of jointly sparse vectors to minimize the subspace distance to the residual space. Our performance guarantee analysis indicates that SSMP accurately reconstructs any row $K$-sparse matrix of rank $r$ in the full row rank scenario if the sampling matrix $\mathbf{A}$ satisfies $\text{krank}(\mathbf{A}) \ge K+1$, which meets the fundamental minimum requirement on $\mathbf{A}$ to ensure exact recovery. We also show that SSMP guarantees exact reconstruction in at most $K-r+\lceil \frac{r}{L} \rceil$ iterations, provided that $\mathbf{A}$ satisfies the restricted isometry property (RIP) of order $L(K-r)+r+1$ with $$\delta_{L(K-r)+r+1} < \max \left\{ \frac{\sqrt{r}}{\sqrt{K+\frac{r}{4}}+\sqrt{\frac{r}{4}}}, \frac{\sqrt{L}}{\sqrt{K}+1.15 \sqrt{L}} \right\},$$ where $L$ is the number of indices chosen in each iteration. This implies that the requirement on the RIP constant becomes less restrictive as $r$ increases. Such behavior seems natural but has not been reported for most conventional methods. We further show that if $r=1$, then by running more than $K$ iterations, the performance guarantee of SSMP can be improved to $\delta_{\lfloor 7.8K \rfloor} \le 0.155$. In addition, we show that under a suitable RIP condition, the reconstruction error of SSMP is upper bounded by a constant multiple of the noise power, which demonstrates the stability of SSMP under measurement noise. Finally, from extensive numerical experiments, we show that SSMP outperforms conventional joint sparse recovery algorithms in both noiseless and noisy scenarios.
269 - Yong Fang, Bormin Huang, 2011
Recovery algorithms play a key role in compressive sampling (CS). Most current CS recovery algorithms are originally designed for one-dimensional (1D) signals, while many practical signals are two-dimensional (2D). By utilizing 2D separable sampling, the 2D signal recovery problem can be converted into a 1D signal recovery problem, so that ordinary 1D recovery algorithms, e.g. orthogonal matching pursuit (OMP), can be applied directly. However, even with 2D separable sampling, the memory usage and complexity at the decoder remain high. This paper develops a novel recovery algorithm called 2D-OMP, which is an extension of 1D-OMP. In 2D-OMP, each atom in the dictionary is a matrix. At each iteration, the decoder projects the sample matrix onto the 2D atoms to select the best-matched atom, and then renews the weights for all the already selected atoms via least squares. We show that 2D-OMP is in fact equivalent to 1D-OMP, but it reduces recovery complexity and memory usage significantly. What's more important, by utilizing the same methodology used in this paper, one can even obtain higher-dimensional OMP (say 3D-OMP, etc.) with ease.
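The 1D-OMP loop that 2D-OMP lifts to matrix-valued atoms (project the residual onto the atoms, pick the best match, refit by least squares) can be sketched as follows; the `omp` helper and the problem sizes are illustrative assumptions, not the paper's code:

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: at each iteration select the atom
    (column of A) best matched to the residual, then refit all selected
    atoms by least squares so the residual stays orthogonal to them."""
    support, r = [], y.copy()
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ r)))    # best-matched atom
        if j not in support:
            support.append(j)
        A_S = A[:, support]
        coef, *_ = np.linalg.lstsq(A_S, y, rcond=None)
        r = y - A_S @ coef                     # orthogonal residual update
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(2)
A = rng.standard_normal((30, 100)) / np.sqrt(30)
x = np.zeros(100)
x[[7, 42, 91]] = [2.0, -1.5, 3.0]              # 3-sparse signal
x_hat = omp(A, A @ x, 3)
```

In 2D-OMP the vector `y` becomes a sample matrix, each column of `A` becomes a matrix atom, and the correlation step becomes a projection onto those atoms, which is what saves memory when the sampling is separable.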
In the long-studied problem of combinatorial group testing, one is asked to detect a set of $k$ defective items out of a population of size $n$, using $m \ll n$ disjunctive measurements. In the non-adaptive setting, the most widely used combinatorial objects are disjunct and list-disjunct matrices, which define incidence matrices of test schemes. Disjunct matrices allow the identification of the exact set of defectives, whereas list-disjunct matrices identify a small superset of the defectives. Apart from the combinatorial guarantees, it is often of key interest to equip measurement designs with efficient decoding algorithms. The most efficient decoders should run in sublinear time in $n$, and ideally near-linear in the number of measurements $m$. In this work, we give several constructions with an optimal number of measurements and near-optimal decoding time for the most fundamental group testing tasks, as well as for central tasks in the compressed sensing and heavy hitters literature. For many of those tasks, the previous measurement-optimal constructions needed time either quadratic in the number of measurements or linear in the universe size. Most of our results are obtained via a clean and novel approach which avoids list-recoverable codes or related complex techniques which were present in almost every state-of-the-art work on efficiently decodable constructions of such objects.
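For context, the classical cover decoder for a disjunct test matrix (the $O(mn)$, linear-in-universe-size baseline that the fast decoders above improve on) declares an item defective iff it never appears in a negative test. A small sketch with an assumed random Bernoulli design:

```python
import numpy as np

def naive_decode(M, outcomes):
    """Cover decoder for a 0/1 test matrix M (m x n): item j is declared
    defective iff every test containing j came back positive.
    Runs in O(m*n) time -- the baseline sublinear decoders beat."""
    negative = ~outcomes                          # tests that came back 0
    hit_by_negative = M[negative].sum(axis=0) > 0 # item appears in a negative test
    return [j for j in range(M.shape[1]) if not hit_by_negative[j]]

rng = np.random.default_rng(3)
m, n, k = 80, 200, 3
M = rng.random((m, n)) < 0.15                     # random Bernoulli test design
defectives = sorted(rng.choice(n, k, replace=False).tolist())
outcomes = M[:, defectives].any(axis=1)           # disjunctive (OR) measurements

decoded = naive_decode(M, outcomes)
print(decoded, defectives)
```

The decoded set always contains every defective; it equals the defective set exactly when `M` is $k$-disjunct, and is a small superset in the list-disjunct regime.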