
Successive Concave Sparsity Approximation for Compressed Sensing

Published by: Mohammadreza Malek-Mohammadi
Publication date: 2015
Research field: Informatics Engineering
Paper language: English





In this paper, based on a successively accuracy-increasing approximation of the $\ell_0$ norm, we propose a new algorithm for recovery of sparse vectors from underdetermined measurements. The approximations are realized with a certain class of concave functions that aggressively induce sparsity and whose closeness to the $\ell_0$ norm can be controlled. We prove that the series of approximations asymptotically coincides with the $\ell_1$ and $\ell_0$ norms as the approximation accuracy changes from the worst fitting to the best fitting. When measurements are noise-free, an optimization scheme is proposed which leads to a number of weighted $\ell_1$ minimization programs, whereas, in the presence of noise, we propose two iterative thresholding methods that are computationally appealing. A convergence guarantee for the iterative thresholding method is provided, and, for a particular function in the class of approximating functions, we derive the closed-form thresholding operator. We further present some theoretical analyses via the restricted isometry, null space, and spherical section properties. Our extensive numerical simulations indicate that the proposed algorithm closely follows the performance of the oracle estimator for a range of sparsity levels wider than those of the state-of-the-art algorithms.
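As a rough illustration of the scheme described above, the sketch below pairs a concave penalty $f_\sigma(t) = t/(t+\sigma)$ (one hypothetical member of the class: it behaves like a scaled $\ell_1$ norm for large $\sigma$ and approaches the $\ell_0$ indicator as $\sigma \to 0$) with an ISTA-style weighted soft-thresholding inner loop. The penalty choice, the solver, and all parameter names are assumptions for illustration, not the paper's exact algorithm.

```python
import numpy as np

def scsa_ista(A, y, sigma0=1.0, decay=0.5, lam=0.01, n_outer=8, n_inner=200):
    """Successive concave approximation via iterative thresholding (sketch).

    Outer loop: tighten sigma, so the penalty f_sigma(t) = t / (t + sigma)
    moves from l1-like toward l0-like. Inner loop: ISTA on the weighted-l1
    surrogate, with weights given by the penalty's derivative at the
    current iterate.
    """
    x = np.zeros(A.shape[1])
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the data-fit gradient
    sigma = sigma0
    for _ in range(n_outer):
        w = sigma / (np.abs(x) + sigma) ** 2   # f'_sigma(|x_i|): reweighting
        for _ in range(n_inner):
            z = x - A.T @ (A @ x - y) / L      # gradient step on ||Ax - y||^2 / 2
            x = np.sign(z) * np.maximum(np.abs(z) - lam * w / L, 0.0)  # weighted soft threshold
        sigma *= decay                         # increase approximation accuracy
    return x
```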




Read also

In this paper, we study a support set reconstruction problem in which the signals of interest are jointly sparse with a common support set and sampled under joint sparsity model-2 (JSM-2) in the presence of noise. Using mathematical tools, we develop upper and lower bounds on the failure probability of support set reconstruction in terms of the sparsity, the ambient dimension, the minimum signal-to-noise ratio, the number of measurement vectors, and the number of measurements. These bounds can be used as a guideline for determining the system parameters in various applications of compressed sensing with noisy JSM-2. Based on the bounds, we develop necessary and sufficient conditions for reliable support set reconstruction. We interpret these conditions to give theoretical explanations of the benefits enabled by the joint sparsity structure in noisy JSM-2. We compare our sufficient condition with the existing result for the noisy multiple measurement vector (MMV) model. As a result, we show that noisy JSM-2 may require fewer measurements than noisy MMV for reliable support set reconstruction.
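For concreteness, the snippet below draws one instance of the noisy JSM-2 model the abstract analyzes: L signals sharing a single k-sparse support, each observed through its own Gaussian sensing matrix (in contrast to MMV, where one matrix is shared). Function and parameter names are illustrative, not the paper's.

```python
import numpy as np

def sample_jsm2(n, m, k, L, snr_db, seed=None):
    """Draw one instance of the noisy JSM-2 model (sketch).

    L signals share a common k-sparse support but have distinct
    amplitudes; each is observed through its own m x n Gaussian
    sensing matrix with additive noise at the given per-signal SNR.
    """
    rng = np.random.default_rng(seed)
    support = rng.choice(n, size=k, replace=False)    # common support set
    X = np.zeros((n, L))
    X[support, :] = rng.standard_normal((k, L))       # per-signal amplitudes
    Y = np.empty((m, L))
    for j in range(L):
        A = rng.standard_normal((m, n)) / np.sqrt(m)  # sensor-specific matrix
        clean = A @ X[:, j]
        noise_std = np.linalg.norm(clean) / np.sqrt(m) * 10 ** (-snr_db / 20)
        Y[:, j] = clean + noise_std * rng.standard_normal(m)
    return Y, X, support
```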
In this work, we consider compressed sensing reconstruction from $M$ measurements of $K$-sparse structured signals which do not possess a writable correlation model. Assuming that a generative statistical model, such as a Boltzmann machine, can be trained in an unsupervised manner on example signals, we demonstrate how this signal model can be used within a Bayesian framework of signal reconstruction. By deriving a message-passing inference for general distribution restricted Boltzmann machines, we are able to integrate these inferred signal models into approximate message passing for compressed sensing reconstruction. Finally, we show for the MNIST dataset that this approach can be very effective, even for $M < K$.
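The sketch below shows the generic approximate message passing (AMP) skeleton into which such a learned signal model would be plugged. The paper's RBM message-passing denoiser is replaced here by a simple soft-thresholding stand-in, so this is a minimal sketch of the framework only, not the paper's method.

```python
import numpy as np

def soft_denoiser(r, tau, lam=1.5):
    """Stand-in separable denoiser: soft threshold at lam * tau.
    Returns the estimate and the empirical mean of its derivative."""
    t = lam * tau
    x = np.sign(r) * np.maximum(np.abs(r) - t, 0.0)
    return x, np.mean(np.abs(r) > t)

def amp(A, y, denoise=soft_denoiser, n_iter=30):
    """Generic AMP loop (sketch). Assumes A has approximately unit-norm
    columns. A learned signal model enters only through `denoise`,
    which is where an RBM-based inference step would be substituted."""
    m, n = A.shape
    x, z = np.zeros(n), y.copy()
    for _ in range(n_iter):
        tau = np.linalg.norm(z) / np.sqrt(m)    # effective noise level
        x, d = denoise(x + A.T @ z, tau)        # posterior-style estimate
        z = y - A @ x + (n / m) * d * z         # residual with Onsager term
    return x
```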
Compressed sensing (CS) or sparse signal reconstruction (SSR) is a signal processing technique that exploits the fact that acquired data can have a sparse representation in some basis. One popular technique to reconstruct or approximate the unknown sparse signal is iterative hard thresholding (IHT), which, however, performs very poorly under non-Gaussian noise conditions or in the face of outliers (gross errors). In this paper, we propose a robust IHT method based on ideas from $M$-estimation that estimates the sparse signal and the scale of the error distribution simultaneously. The method has negligible performance loss compared to IHT under Gaussian noise, but superior performance under heavy-tailed non-Gaussian noise conditions.
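A minimal sketch of the idea, assuming a Huber-type influence function and a normalized median-absolute-deviation scale estimate (common M-estimation choices, not necessarily the paper's): each iteration re-estimates the residual scale, clips the residuals, takes a gradient step, and hard-thresholds to the k largest entries.

```python
import numpy as np

def robust_iht(A, y, k, n_iter=100, c=1.345):
    """Robust IHT sketch: re-estimate the residual scale each iteration
    (normalized MAD) and clip residuals with a Huber influence function
    before the gradient step and hard thresholding. The constant
    c = 1.345 is the usual Huber tuning value."""
    x = np.zeros(A.shape[1])
    mu = 1.0 / np.linalg.norm(A, 2) ** 2            # safe step size
    for _ in range(n_iter):
        r = y - A @ x
        scale = 1.4826 * np.median(np.abs(r - np.median(r))) + 1e-12
        psi = np.clip(r, -c * scale, c * scale)     # Huber-clipped residual
        z = x + mu * (A.T @ psi)
        z[np.argsort(np.abs(z))[:-k]] = 0.0         # keep k largest magnitudes
        x = z
    return x
```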
Zhipeng Xue, Junjie Ma, 2017
Turbo compressed sensing (Turbo-CS) is an efficient iterative algorithm for sparse signal recovery with partial orthogonal sensing matrices. In this paper, we extend the Turbo-CS algorithm to solve compressed sensing problems involving more general signal structure, including compressive image recovery and low-rank matrix recovery. A main difficulty for such an extension is that the original Turbo-CS algorithm requires prior knowledge of the signal distribution, which is usually unavailable in practice. To overcome this difficulty, we propose to redesign the Turbo-CS algorithm by employing a generic denoiser that does not depend on the prior distribution, hence the name denoising-based Turbo-CS (D-Turbo-CS). We then derive the extrinsic information for a generic denoiser by following the Turbo-CS principle. Based on that, we optimize the parametric extrinsic denoisers to minimize the output mean-square error (MSE). Explicit expressions are derived for the extrinsic SURE-LET denoiser used in compressive image denoising and for the singular value thresholding (SVT) denoiser used in low-rank matrix denoising. We find that the dynamics of D-Turbo-CS can be well described by a scalar recursion called MSE evolution, similar to the case for Turbo-CS. Numerical results demonstrate that D-Turbo-CS considerably outperforms the counterpart algorithms in both reconstruction quality and running time.
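The SVT denoiser mentioned above has a simple closed form: soft-threshold the singular values. A minimal sketch follows; the extrinsic-information and divergence bookkeeping of D-Turbo-CS is omitted.

```python
import numpy as np

def svt(Z, tau):
    """Singular value thresholding: soft-threshold the singular values
    of Z by tau. This is the proximal operator of the nuclear norm,
    the standard surrogate for low-rank structure."""
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt
```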
A signature result in compressed sensing is that Gaussian random sampling achieves stable and robust recovery of sparse vectors under optimal conditions on the number of measurements. However, in the context of image reconstruction, it has been extensively documented that sampling strategies based on Fourier measurements outperform this purportedly optimal approach. Motivated by this seeming paradox, we investigate the problem of optimal sampling for compressed sensing. Rigorously combining the theories of wavelet approximation and infinite-dimensional compressed sensing, our analysis leads to new error bounds in terms of the total number of measurements $m$ for the approximation of piecewise $\alpha$-Hölder functions. Our theoretical findings suggest that Fourier sampling outperforms random Gaussian sampling when the Hölder exponent $\alpha$ is large enough. Moreover, we establish a provably optimal sampling strategy. This work is an important first step towards the resolution of the claimed paradox, and provides a clear theoretical justification for the practical success of compressed sensing techniques in imaging problems.
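The two sensing operators being compared can be written down directly. The sketch below constructs a uniformly subsampled unitary DFT next to an i.i.d. Gaussian matrix; note that the paper's provably optimal strategy uses a structured, non-uniform row-selection scheme, which is not reproduced here.

```python
import numpy as np

def subsampled_dft(n, m, seed=None):
    """m x n Fourier sensing operator: m rows of the unitary DFT,
    chosen uniformly at random (a simplification of the paper's
    structured sampling scheme)."""
    rng = np.random.default_rng(seed)
    rows = rng.choice(n, size=m, replace=False)
    F = np.fft.fft(np.eye(n)) / np.sqrt(n)     # unitary DFT matrix
    return F[rows, :]

def gaussian_sensing(n, m, seed=None):
    """m x n i.i.d. Gaussian sensing operator with variance 1/m."""
    rng = np.random.default_rng(seed)
    return rng.standard_normal((m, n)) / np.sqrt(m)
```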