
Two algorithms for compressed sensing of sparse tensors

Posted by: Qun Li
Publication date: 2014
Research field: Informatics Engineering
Paper language: English

Compressed sensing (CS) exploits the sparsity of a signal in order to integrate acquisition and compression. CS theory enables exact reconstruction of a sparse signal from relatively few linear measurements via a suitable nonlinear minimization process. Conventional CS theory relies on vectorial data representation, which results in good compression ratios at the expense of increased computational complexity. In applications involving color images, video sequences, and multi-sensor networks, the data is intrinsically of high order, and thus more suitably represented in tensorial form. Standard applications of CS to higher-order data typically involve representation of the data as long vectors that are in turn measured using large sampling matrices, thus imposing a huge computational and memory burden. In this chapter, we introduce Generalized Tensor Compressed Sensing (GTCS)--a unified framework for compressed sensing of higher-order tensors which preserves the intrinsic structure of tensorial data with reduced computational complexity at reconstruction. We demonstrate that GTCS offers an efficient means for representation of multidimensional data by providing simultaneous acquisition and compression from all tensor modes. In addition, we propose two reconstruction procedures, a serial method (GTCS-S) and a parallelizable method (GTCS-P), both capable of recovering a tensor from noiseless or noisy observations. We then compare the performance of the proposed methods with Kronecker compressed sensing (KCS) and multi-way compressed sensing (MWCS). We demonstrate experimentally that GTCS outperforms KCS and MWCS in terms of both reconstruction accuracy (within a range of compression ratios) and processing speed. The major disadvantage of our methods (and of MWCS as well) is that the achieved compression ratios may be worse than those offered by KCS.
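To make the mode-wise acquisition concrete, the sketch below contrasts it with the equivalent vectorized (Kronecker) measurement for a 2-D sparse signal. It covers only the shared measurement model, not the GTCS-S/GTCS-P reconstruction procedures; the variable names (U1, U2, m1, m2) and the Gaussian sensing matrices are illustrative choices, not the chapter's exact setup.

```python
# A minimal sketch of mode-wise tensor acquisition for a 2-D sparse
# signal, contrasted with the equivalent vectorized Kronecker measurement.
# All names (U1, U2, m1, m2) and the Gaussian matrices are illustrative.
import numpy as np

rng = np.random.default_rng(0)

n1, n2 = 64, 64                          # signal dimensions
m1, m2 = 24, 24                          # measurements per mode

# Sparse 2-D signal with 20 non-zero entries.
X = np.zeros((n1, n2))
X.flat[rng.choice(n1 * n2, size=20, replace=False)] = rng.standard_normal(20)

# One small Gaussian sensing matrix per tensor mode.
U1 = rng.standard_normal((m1, n1)) / np.sqrt(m1)
U2 = rng.standard_normal((m2, n2)) / np.sqrt(m2)

# Mode-wise measurement: an m1 x m2 matrix acquired from both modes at once.
Y = U1 @ X @ U2.T

# The vectorized route measures vec(X) with the (m1*m2) x (n1*n2)
# Kronecker product kron(U2, U1) -- the same measurements, but the
# sampling matrix is vastly larger to store and apply.
y_vec = np.kron(U2, U1) @ X.reshape(-1, order="F")
assert np.allclose(Y.reshape(-1, order="F"), y_vec)
print("per-mode storage:", U1.size + U2.size,
      "vs Kronecker storage:", m1 * m2 * n1 * n2)
```

The check relies on the identity vec(U1 X U2^T) = (U2 ⊗ U1) vec(X): mode-wise sensing reproduces the Kronecker measurements while storing only two small per-mode matrices, which is the structural saving that motivates GTCS.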


Read also

Shirin Jalali, Xin Yuan (2018)
Snapshot compressed sensing (CS) refers to compressive imaging systems in which multiple frames are mapped into a single measurement frame. Each pixel in the acquired frame is a noisy linear mapping of the corresponding pixels in the frames that are combined. While the problem can be cast as a CS problem, the very special structure of the sensing matrix means that standard CS theory cannot be employed to study such systems. In this paper, a compression-based framework is employed for theoretical analysis of snapshot CS systems. It is shown that this framework leads to two novel, computationally efficient and theoretically analyzable compression-based recovery algorithms. The proposed methods are iterative and employ compression codes to define and impose the structure of the desired signal. Theoretical convergence guarantees are derived for both algorithms. In the simulations, it is shown that, for both noise-free and noisy measurements, combining the proposed algorithms with a customized video compression code, designed to exploit nonlocal structures of video frames, significantly improves on state-of-the-art performance.
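As a concrete illustration of the forward model just described, here is a minimal sketch of snapshot acquisition in which per-frame masks modulate the frames before they are summed into one measurement. The binary masks, dimensions, and noise level are illustrative assumptions, not the paper's exact system.

```python
# A minimal sketch of the snapshot CS forward model: B frames are
# modulated by per-frame masks and summed into one noisy measurement
# frame. Binary masks, sizes, and noise level are illustrative.
import numpy as np

rng = np.random.default_rng(1)

B, H, W = 8, 32, 32                         # frames, height, width
frames = rng.random((B, H, W))              # underlying video block
masks = rng.integers(0, 2, size=(B, H, W))  # per-frame modulation masks

# Each measurement pixel is a (noisy) linear mapping of the corresponding
# pixel across the B combined frames.
Y = (masks * frames).sum(axis=0) + 0.01 * rng.standard_normal((H, W))
print(Y.shape)  # one H x W snapshot encoding all B frames
```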
Xiaochen Zhao, Wei Dai (2014)
This paper studies the problem of power allocation in compressed sensing when different components of the unknown sparse signal have different probabilities of being non-zero. Given the prior information on the non-uniform sparsity and the total power budget, we are interested in how to optimally allocate the power across the columns of a Gaussian random measurement matrix so that the mean squared reconstruction error is minimized. Based on the state evolution technique originating in the work of Donoho, Maleki, and Montanari, we revise the so-called approximate message passing (AMP) algorithm for the reconstruction and quantify the MSE performance in the asymptotic regime. The closed form of the optimal power allocation is then obtained. The results show that in the presence of measurement noise, uniform power allocation, which results in the commonly used Gaussian random matrix with i.i.d. entries, is not optimal for non-uniformly sparse signals. Empirical results are presented to demonstrate the performance gain.
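The sketch below illustrates the setting: columns of a Gaussian measurement matrix are rescaled under a fixed total power budget according to each component's prior probability of being non-zero. The proportional rule used here is a placeholder for illustration only; the paper derives the actual closed-form optimal allocation from state evolution.

```python
# An illustrative sketch of non-uniform power allocation across the
# columns of a Gaussian sensing matrix under a total power budget. The
# proportional rule is a placeholder; the paper derives the closed-form
# optimal allocation via state evolution.
import numpy as np

rng = np.random.default_rng(2)

m, n = 40, 100
p = np.linspace(0.02, 0.2, n)                  # prior prob. each entry is non-zero

A = rng.standard_normal((m, n)) / np.sqrt(m)   # i.i.d. columns, unit power each

# Keep the same total budget as the uniform matrix (sum of column powers = n),
# but allot more power to columns that are more likely to be active.
power = n * p / p.sum()
A_scaled = A * np.sqrt(power)

print(np.sum(np.linalg.norm(A_scaled, axis=0) ** 2))  # ~= n, the budget
```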
Shirin Jalali (2018)
Quantized maximum a posteriori (Q-MAP) is a recently proposed Bayesian compressed sensing algorithm that, given the source distribution, recovers $X^n$ from its linear measurements $Y^m=AX^n$, where $A\in R^{m\times n}$ denotes the known measurement matrix. On the other hand, Lagrangian minimum entropy pursuit (L-MEP) is a universal compressed sensing algorithm that aims at recovering $X^n$ from its linear measurements $Y^m=AX^n$, without having access to the source distribution. Both Q-MAP and L-MEP provably achieve the minimum required sampling rates in noiseless cases where such fundamental limits are known. L-MEP is based on minimizing a cost function that consists of a linear combination of the conditional empirical entropy of a potential reconstruction vector and its corresponding measurement error. In this paper, using a first-order linear approximation of the conditional empirical entropy function, L-MEP is connected with Q-MAP. The established connection between L-MEP and Q-MAP leads to variants of Q-MAP which have the same asymptotic performance as Q-MAP in terms of their required sampling rates. Moreover, these variants suggest that Q-MAP is robust to small errors in estimating the source distribution. This robustness is theoretically proven and the effect of a non-vanishing estimation error on the required sampling rate is characterized.
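To make the L-MEP objective concrete, the sketch below evaluates a cost of that general form: the conditional empirical entropy of a quantized candidate vector plus a weighted measurement error. The uniform quantizer, the context order k, and the weight lam are illustrative assumptions rather than the paper's exact construction.

```python
# A sketch of an L-MEP-style objective: conditional empirical entropy of
# a quantized candidate plus weighted measurement error. The uniform
# quantizer, context order k, and weight lam are illustrative assumptions.
import numpy as np
from collections import Counter

def cond_empirical_entropy(symbols, k=1):
    """k-th order conditional empirical entropy of a symbol sequence (bits)."""
    n = len(symbols)
    ctx = Counter(tuple(symbols[i:i + k]) for i in range(n - k))
    joint = Counter(tuple(symbols[i:i + k + 1]) for i in range(n - k))
    return -sum((c / (n - k)) * np.log2(c / ctx[w[:k]])
                for w, c in joint.items())

def lmep_cost(x, A, y, lam=1.0, levels=8, k=1):
    # Uniformly quantize the candidate vector to a small alphabet.
    q = np.round((x - x.min()) / (np.ptp(x) + 1e-12) * (levels - 1)).astype(int)
    return cond_empirical_entropy(q, k) + lam * np.sum((y - A @ x) ** 2)

rng = np.random.default_rng(3)
n, m = 200, 80
x = np.zeros(n)
x[rng.choice(n, 10, replace=False)] = 1.0
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x
# The structured, measurement-consistent candidate scores far lower
# than an unstructured random one.
print(lmep_cost(x, A, y), lmep_cost(rng.standard_normal(n), A, y))
```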
Compressed sensing (CS), or sparse signal reconstruction (SSR), is a signal processing technique that exploits the fact that acquired data can have a sparse representation in some basis. One popular technique for reconstructing or approximating the unknown sparse signal is iterative hard thresholding (IHT), which, however, performs very poorly under non-Gaussian noise conditions or in the face of outliers (gross errors). In this paper, we propose a robust IHT method based on ideas from $M$-estimation that estimates the sparse signal and the scale of the error distribution simultaneously. The method has a negligible performance loss compared to IHT under Gaussian noise, but superior performance under heavy-tailed non-Gaussian noise conditions.
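A minimal sketch of such a robust IHT iteration is shown below: the residual passes through a Huber influence function, and the error scale is re-estimated at every iteration (here with the median absolute deviation). The Huber constant, the MAD scale rule, and the step size are illustrative choices, not necessarily the estimator proposed in the paper.

```python
# A sketch of robust IHT with simultaneous scale estimation, assuming a
# Huber influence function and a MAD-based scale update (illustrative
# choices; the paper's M-estimation scheme may differ in detail).
import numpy as np

def huber_psi(r, c=1.345):
    """Huber influence function: identity for small r, clipped for large r."""
    return np.clip(r, -c, c)

def robust_iht(y, A, s, n_iter=200, mu=1.0):
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        r = y - A @ x
        sigma = 1.4826 * np.median(np.abs(r)) + 1e-12        # robust scale (MAD)
        x = x + mu * (A.T @ (sigma * huber_psi(r / sigma)))  # robust gradient step
        # Hard thresholding: keep only the s largest-magnitude entries.
        x[np.argsort(np.abs(x))[:-s]] = 0.0
    return x

rng = np.random.default_rng(4)
m, n, s = 80, 200, 8
A = rng.standard_normal((m, n))
A /= np.linalg.norm(A, ord=2)            # unit spectral norm keeps the step stable
x0 = np.zeros(n)
x0[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
y = A @ x0 + 0.01 * rng.standard_normal(m)
y[::20] += 5.0                           # a few gross outliers
print(np.linalg.norm(robust_iht(y, A, s) - x0) / np.linalg.norm(x0))
```

Because the Huber function clips large residuals at a multiple of the current scale estimate, the outlier entries contribute a bounded amount to each gradient step, while inlier residuals pass through unchanged as in plain IHT.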
In this paper, based on a successively accuracy-increasing approximation of the $\ell_0$ norm, we propose a new algorithm for recovery of sparse vectors from underdetermined measurements. The approximations are realized with a certain class of concave functions that aggressively induce sparsity and whose closeness to the $\ell_0$ norm can be controlled. We prove that the series of approximations asymptotically coincides with the $\ell_1$ and $\ell_0$ norms as the approximation accuracy changes from the worst fitting to the best fitting. When measurements are noise-free, an optimization scheme is proposed which leads to a number of weighted $\ell_1$ minimization programs, whereas, in the presence of noise, we propose two iterative thresholding methods that are computationally appealing. A convergence guarantee for the iterative thresholding method is provided, and, for a particular function in the class of approximating functions, we derive the closed-form thresholding operator. We further present some theoretical analyses via the restricted isometry, null space, and spherical section properties. Our extensive numerical simulations indicate that the proposed algorithm closely follows the performance of the oracle estimator for a range of sparsity levels wider than those of the state-of-the-art algorithms.
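The sketch below implements one scheme in this spirit: iterative soft thresholding in which the threshold on each coordinate is weighted by the derivative of a concave $\ell_0$ surrogate g(t) = 1 - exp(-t/sigma), with sigma decreased in stages so the penalty moves from $\ell_1$-like toward $\ell_0$-like behavior. This particular surrogate, the lam value, and the continuation schedule are illustrative assumptions; the paper's function class and closed-form operator may differ.

```python
# An illustrative iterative thresholding scheme built on the concave
# l0 surrogate g(t) = 1 - exp(-t/sigma); sigma controls how closely the
# penalty approximates the l0 norm. lam and the sigma schedule are
# assumed values, and this surrogate is one example of such a class.
import numpy as np

def surrogate_thresholding(y, A, lam=0.05, n_iter=300, sigmas=(1.0, 0.5, 0.1)):
    x = np.zeros(A.shape[1])
    for sigma in sigmas:                 # decreasing sigma: l1-like -> l0-like
        for _ in range(n_iter):
            z = x + A.T @ (y - A @ x)    # gradient step on the data fit
            # Per-coordinate threshold weighted by g'(|x_i|) = exp(-|x_i|/sigma)/sigma,
            # so large coefficients are shrunk less as sigma decreases.
            t = lam * np.exp(-np.abs(x) / sigma) / sigma
            x = np.sign(z) * np.maximum(np.abs(z) - t, 0.0)
    return x

rng = np.random.default_rng(5)
m, n, s = 60, 150, 6
A = rng.standard_normal((m, n))
A /= np.linalg.norm(A, ord=2)            # unit spectral norm: step size 1 is safe
x0 = np.zeros(n)
x0[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
y = A @ x0 + 0.005 * rng.standard_normal(m)
print(np.linalg.norm(surrogate_thresholding(y, A) - x0) / np.linalg.norm(x0))
```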