Evaluating the statistical dimension is a common tool for determining the asymptotic phase transition in compressed sensing problems with the Gaussian ensemble. Unfortunately, the exact evaluation of the statistical dimension is very difficult, and it has become standard to replace it with an upper bound. To ensure that this technique is suitable, [1] introduced a bound on the gap between the statistical dimension and its approximation. In this work, we first show that the error bound of [1] becomes prohibitively large in some low-dimensional models such as total variation and $\ell_1$ analysis minimization. Next, we develop a new error bound which significantly improves the estimation gap compared to [1]. In particular, unlike the bound in [1], which is not applicable to settings with overcomplete dictionaries, our bound exhibits a decaying behavior in such cases.
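For concreteness, the sketch below evaluates the classical upper bound on the statistical dimension of the $\ell_1$ descent cone at an $s$-sparse point in $\mathbb{R}^n$, namely $\inf_{\tau \ge 0} \mathbb{E}\,\mathrm{dist}^2(g, \tau \partial \|x\|_1)$ with $g \sim \mathcal{N}(0, I_n)$, using the closed form of the Gaussian tail integral. This illustrates the kind of approximation whose gap is at stake; it is not the specific bound of [1].

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize_scalar

def l1_statdim_upper_bound(n, s):
    """Classical upper bound on the statistical dimension of the l1
    descent cone at an s-sparse point: minimize over tau >= 0 the
    expected squared distance of a standard Gaussian vector to the
    scaled subdifferential of the l1 norm."""
    def expected_sq_dist(tau):
        # E[(|g| - tau)_+^2] = 2*((1 + tau^2)*Phi(-tau) - tau*phi(tau))
        tail = 2.0 * ((1.0 + tau**2) * norm.cdf(-tau) - tau * norm.pdf(tau))
        return s * (1.0 + tau**2) + (n - s) * tail
    res = minimize_scalar(expected_sq_dist, bounds=(0.0, 10.0), method="bounded")
    return res.fun

# Example: a 10-sparse vector in R^1000; the phase transition for Gaussian
# measurements occurs near this value (on the order of 2*s*log(n/s)).
print(l1_statdim_upper_bound(n=1000, s=10))
```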
In this work, we analyze the failing sets of the interval-passing algorithm (IPA) for compressed sensing. The IPA is an efficient iterative algorithm for reconstructing a $k$-sparse nonnegative $n$-dimensional real signal $x$ from a small number of linear measurements $y$. In particular, we show that the IPA fails to recover $x$ from $y$ if and only if it fails to recover a corresponding binary vector with the same support, and that only the positions of the nonzero values in the measurement matrix matter for successful recovery. Based on this observation, we introduce termatiko sets and show that the IPA fails to fully recover $x$ if and only if the support of $x$ contains a nonempty termatiko set, thus giving a complete (graph-theoretic) description of the failing sets of the IPA. Finally, we present an extensive numerical study showing that in many cases there exist termatiko sets of size strictly smaller than the stopping distance of the binary measurement matrix, sometimes even as low as half the stopping distance.
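A minimal sketch of the interval idea behind the IPA follows: propagate lower/upper bounds on each signal entry along the bipartite graph of a 0/1 measurement matrix. This node-based tightening schedule is a simplification for illustration, not the exact edge-based message passing of the IPA.

```python
import numpy as np

def interval_passing(A, y, n_iter=50):
    """Toy interval-propagation sketch in the spirit of the IPA.
    A is a 0/1 measurement matrix and y = A @ x for a nonnegative x.
    Recovery succeeds at the positions where lo == hi on return."""
    m, n = A.shape
    lo = np.zeros(n)                          # nonnegativity: x >= 0
    hi = np.array([y[A[:, v] == 1].min(initial=np.inf) for v in range(n)])
    for _ in range(n_iter):
        for c in range(m):
            nbrs = np.flatnonzero(A[c])
            for v in nbrs:
                others = nbrs[nbrs != v]
                # y_c = x_v + (sum over the other neighbors) brackets x_v
                lo[v] = max(lo[v], y[c] - hi[others].sum())
                hi[v] = min(hi[v], y[c] - lo[others].sum())
    return lo, hi

# Tiny example (m = 3 < n = 4): the bounds collapse to the true signal.
A = np.array([[1, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 1, 1]])
x = np.array([0.0, 2.0, 0.0, 1.0])
lo, hi = interval_passing(A, A @ x)
print(lo, hi)  # both equal x
```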
We present improved sampling complexity bounds for stable and robust sparse recovery in compressed sensing. Our unified analysis, based on $\ell_1$ minimization, encompasses the cases where (i) the measurements are block-structured samples, in order to reflect the structured acquisition that is often encountered in applications, and (ii) the signal has an arbitrary structured sparsity, with results depending on its support $S$. Within this framework, and under a random sign assumption, the number of measurements needed by $\ell_1$ minimization can be shown to be of the same order as the one required by an oracle least-squares estimator. Moreover, these bounds can be minimized by adapting the variable-density sampling to a given prior on the signal support and to the coherence of the measurements. We illustrate both numerically and analytically that our results can be successfully applied to recover Haar wavelet coefficients that are sparse in levels from random Fourier measurements in dimensions one and two, which is of particular interest in imaging problems. Finally, a preliminary numerical investigation shows the potential of this theory for devising adaptive sampling strategies in sparse polynomial approximation.
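The sketch below illustrates the pipeline in one dimension: draw Fourier rows from a variable density concentrated on low frequencies, then recover by $\ell_1$ minimization. Canonical sparsity stands in for Haar wavelets for brevity, the density $1/\max(1,|k|)$ is an illustrative choice rather than the optimized density of the paper, and cvxpy is assumed to be available.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n, m, s = 128, 40, 5

# Variable-density sampling: frequency k chosen with probability ~ 1/max(1, |k|),
# concentrating measurements where the coherence with sparse signals is highest.
freqs = np.arange(n)
dist = np.minimum(freqs, n - freqs)     # distance to DC on the circle
p = 1.0 / np.maximum(1, dist)
p /= p.sum()
rows = rng.choice(n, size=m, replace=False, p=p)

F = np.exp(-2j * np.pi * np.outer(freqs, freqs) / n) / np.sqrt(n)  # DFT matrix
A = F[rows]

x0 = np.zeros(n)
x0[rng.choice(n, size=s, replace=False)] = rng.standard_normal(s)
y = A @ x0

# Basis pursuit; the complex system is split into real/imaginary parts.
Ar = np.vstack([A.real, A.imag])
yr = np.concatenate([y.real, y.imag])
x = cp.Variable(n)
cp.Problem(cp.Minimize(cp.norm1(x)), [Ar @ x == yr]).solve()
print("recovery error:", np.linalg.norm(x.value - x0))
```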
This letter investigates the joint recovery of a frequency-sparse signal ensemble sharing a common frequency-sparse component from the collection of their compressed measurements. Unlike conventional approaches in compressed sensing, the frequencies follow an off-the-grid formulation and are continuously valued in $[0,1]$. As an extension of the atomic norm, a concatenated atomic norm minimization approach is proposed for the exact recovery of the signals, which is reformulated as a computationally tractable semidefinite program. The optimality of the proposed approach is characterized using a dual certificate. Numerical experiments are performed to illustrate the effectiveness of the proposed approach and its advantage over separate recovery.
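For reference, the atomic norm of a single length-$n$ vector $x$ with off-the-grid frequencies admits the standard semidefinite characterization below (up to normalization conventions); the concatenated norm in the letter couples several such programs through the shared component:

$$
\|x\|_{\mathcal{A}} \;=\; \inf_{u \in \mathbb{C}^{n},\, t \in \mathbb{R}} \left\{ \frac{1}{2n}\,\mathrm{tr}\big(T(u)\big) + \frac{t}{2} \;:\; \begin{bmatrix} T(u) & x \\ x^{H} & t \end{bmatrix} \succeq 0 \right\},
$$

where $T(u)$ denotes the Hermitian Toeplitz matrix whose first column is $u$.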
Compressed sensing (CS), or sparse signal reconstruction (SSR), is a signal processing technique that exploits the fact that acquired data can have a sparse representation in some basis. One popular technique to reconstruct or approximate the unknown sparse signal is iterative hard thresholding (IHT), which, however, performs very poorly under non-Gaussian noise conditions or in the face of outliers (gross errors). In this paper, we propose a robust IHT method based on ideas from $M$-estimation that estimates the sparse signal and the scale of the error distribution simultaneously. The method has a negligible performance loss compared to IHT under Gaussian noise, but superior performance under heavy-tailed non-Gaussian noise.
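A toy sketch of one way to combine IHT with $M$-estimation follows: a gradient step on a Huber loss with a jointly re-estimated scale, followed by hard thresholding. The Huber score and the MAD scale estimate are illustrative choices, not necessarily the paper's exact estimator.

```python
import numpy as np

def robust_iht(A, y, s, n_iter=200, c=1.345):
    """Toy robust IHT sketch: the residual is passed through a Huber score
    with a scale re-estimated at every iteration, so gross errors cannot
    dominate the gradient; hard thresholding keeps the s largest entries."""
    m, n = A.shape
    mu = 1.0 / np.linalg.norm(A, 2) ** 2       # step size from the spectral norm
    x = np.zeros(n)
    for _ in range(n_iter):
        r = y - A @ x
        sigma = 1.4826 * np.median(np.abs(r - np.median(r))) + 1e-12  # MAD scale
        psi = sigma * np.clip(r / sigma, -c, c)  # Huber score caps large residuals
        x = x + mu * (A.T @ psi)
        idx = np.argsort(np.abs(x))              # hard threshold: keep s largest
        x[idx[:n - s]] = 0.0
    return x

# Example: Gaussian measurements, with a few gross outliers added to y.
rng = np.random.default_rng(1)
m, n, s = 80, 256, 8
A = rng.standard_normal((m, n)) / np.sqrt(m)
x0 = np.zeros(n)
x0[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
y = A @ x0
y[rng.choice(m, 5, replace=False)] += 10.0       # outliers
print("error:", np.linalg.norm(robust_iht(A, y, s) - x0))
```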
In this paper, based on a sequence of successively more accurate approximations of the $\ell_0$ norm, we propose a new algorithm for the recovery of sparse vectors from underdetermined measurements. The approximations are realized with a certain class of concave functions that aggressively induce sparsity and whose closeness to the $\ell_0$ norm can be controlled. We prove that the sequence of approximations asymptotically coincides with the $\ell_1$ norm at the coarsest approximation accuracy and with the $\ell_0$ norm at the finest. When the measurements are noise-free, we propose an optimization scheme that leads to a sequence of weighted $\ell_1$ minimization programs, whereas, in the presence of noise, we propose two iterative thresholding methods that are computationally appealing. A convergence guarantee is provided for the iterative thresholding methods, and, for a particular function in the class of approximating functions, we derive the closed-form thresholding operator. We further present some theoretical analyses via the restricted isometry, null space, and spherical section properties. Our extensive numerical simulations indicate that the proposed algorithm closely follows the performance of the oracle estimator for a range of sparsity levels wider than those of the state-of-the-art algorithms.
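The continuation idea can be sketched with the exponential surrogate $f_\sigma(t) = 1 - e^{-t/\sigma}$, which behaves like (a scaled) $\ell_1$ penalty for large $\sigma$ and approaches the $\ell_0$ indicator as $\sigma \to 0$. This is an illustrative member of such a class, not necessarily the paper's function, and the majorization-minimization prox step below (linearize the concave penalty, then soft-threshold with per-coordinate weights) is one simple realization of iterative thresholding.

```python
import numpy as np

def concave_l0_recovery(A, y, lam=0.01, sigmas=(1.0, 0.5, 0.1, 0.02), n_iter=100):
    """Toy continuation sketch for the concave surrogate f_sigma(t) = 1 - exp(-t/sigma).
    Each inner iteration is a proximal-gradient step in which the concave penalty is
    majorized by its linearization at the current iterate, which turns the prox into
    reweighted soft-thresholding with weights f'_sigma(|x_i|)."""
    m, n = A.shape
    mu = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(n)
    for sigma in sigmas:                        # sigma -> 0: surrogate -> l0
        for _ in range(n_iter):
            z = x + mu * (A.T @ (y - A @ x))    # gradient step on the data term
            w = np.exp(-np.abs(x) / sigma) / sigma  # weights f'_sigma(|x_i|)
            x = np.sign(z) * np.maximum(np.abs(z) - mu * lam * w, 0.0)
    return x
```

As $\sigma$ decreases, the weights concentrate on the near-zero coordinates, echoing the weighted $\ell_1$ programs of the noise-free scheme.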