
Snapshot compressed sensing: performance bounds and algorithms

Published by: Shirin Jalali
Publication date: 2018
Research field: Informatics Engineering
Paper language: English





Snapshot compressed sensing (CS) refers to compressive imaging systems in which multiple frames are mapped into a single measurement frame. Each pixel in the acquired frame is a noisy linear mapping of the corresponding pixels in the frames that are combined together. While the problem can be cast as a CS problem, due to the very special structure of the sensing matrix, standard CS theory cannot be employed to study such systems. In this paper, a compression-based framework is employed for theoretical analysis of snapshot CS systems. It is shown that this framework leads to two novel, computationally-efficient and theoretically-analyzable compression-based recovery algorithms. The proposed methods are iterative and employ compression codes to define and impose the structure of the desired signal. Theoretical convergence guarantees are derived for both algorithms. In the simulations, it is shown that, in the cases of both noise-free and noisy measurements, combining the proposed algorithms with a customized video compression code, designed to exploit nonlocal structures of video frames, significantly improves the state-of-the-art performance.
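To make the sensing model concrete, below is a minimal Python sketch of the snapshot measurement process described above. All names (snapshot_cs_measure, the per-frame modulation masks) are illustrative assumptions rather than the paper's code: each pixel of the single acquired frame is a noisy linear combination of the co-located pixels of the B input frames.

```python
import numpy as np

def snapshot_cs_measure(frames, masks, noise_std=0.0, rng=None):
    """Snapshot CS forward model (illustrative sketch, not the paper's code).

    frames : array of shape (B, H, W) -- the B video frames to be combined
    masks  : array of shape (B, H, W) -- known per-frame modulation masks
    Returns a single (H, W) measurement frame whose pixels are noisy linear
    combinations of the co-located pixels across the B frames.
    """
    rng = np.random.default_rng() if rng is None else rng
    y = np.sum(masks * frames, axis=0)          # pixelwise linear mapping
    if noise_std > 0:
        y = y + noise_std * rng.standard_normal(y.shape)
    return y

# Example: combine B = 8 random frames into one 64x64 measurement frame.
B, H, W = 8, 64, 64
frames = np.random.rand(B, H, W)
masks = (np.random.rand(B, H, W) < 0.5).astype(float)   # random binary masks
y = snapshot_cs_measure(frames, masks, noise_std=0.01)
print(y.shape)   # (64, 64): a single snapshot encodes all 8 frames
```

Because every pixel mixes only the B co-located pixels, the effective sensing matrix is extremely sparse and highly structured, which is why the abstract notes that standard CS theory does not directly apply.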




Read also

Shirin Jalali, 2018
Quantized maximum a posteriori (Q-MAP) is a recently-proposed Bayesian compressed sensing algorithm that, given the source distribution, recovers $X^n$ from its linear measurements $Y^m=AX^n$, where $A \in \mathbb{R}^{m \times n}$ denotes the known measurement matrix. On the other hand, Lagrangian minimum entropy pursuit (L-MEP) is a universal compressed sensing algorithm that aims at recovering $X^n$ from its linear measurements $Y^m=AX^n$, without having access to the source distribution. Both Q-MAP and L-MEP provably achieve the minimum required sampling rates, in noiseless cases where such fundamental limits are known. L-MEP is based on minimizing a cost function that consists of a linear combination of the conditional empirical entropy of a potential reconstruction vector and its corresponding measurement error. In this paper, using a first-order linear approximation of the conditional empirical entropy function, L-MEP is connected with Q-MAP. The established connection between L-MEP and Q-MAP leads to variants of Q-MAP which have the same asymptotic performance as Q-MAP in terms of their required sampling rates. Moreover, these variants suggest that Q-MAP is robust to small error in estimating the source distribution. This robustness is theoretically proven and the effect of a non-vanishing estimation error on the required sampling rate is characterized.
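As a rough guide to the cost structure described above (notation assumed here, not copied from the paper), the L-MEP objective can be sketched as follows, where $\hat{H}_k$ denotes a conditional empirical entropy of order $k$ and $\lambda$ is a hypothetical weighting parameter:

```latex
% Sketch of the L-MEP objective described in the abstract (notation assumed):
% a candidate reconstruction trades off conditional empirical entropy
% against measurement error, with weight \lambda > 0.
\[
  \hat{X}^n \;=\; \arg\min_{\tilde{x}^n}
     \Big[\, \hat{H}_k(\tilde{x}^n) \;+\; \lambda \,\big\lVert Y^m - A \tilde{x}^n \big\rVert_2^2 \Big].
\]
% Replacing \hat{H}_k by its first-order (linear) approximation around a
% reference point is, per the abstract, what links this universal cost to
% the Bayesian Q-MAP update.
```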
Compressed sensing (CS) exploits the sparsity of a signal in order to integrate acquisition and compression. CS theory enables exact reconstruction of a sparse signal from relatively few linear measurements via a suitable nonlinear minimization process. Conventional CS theory relies on vectorial data representation, which results in good compression ratios at the expense of increased computational complexity. In applications involving color images, video sequences, and multi-sensor networks, the data is intrinsically of high-order, and thus more suitably represented in tensorial form. Standard applications of CS to higher-order data typically involve representation of the data as long vectors that are in turn measured using large sampling matrices, thus imposing a huge computational and memory burden. In this chapter, we introduce Generalized Tensor Compressed Sensing (GTCS)--a unified framework for compressed sensing of higher-order tensors which preserves the intrinsic structure of tensorial data with reduced computational complexity at reconstruction. We demonstrate that GTCS offers an efficient means for representation of multidimensional data by providing simultaneous acquisition and compression from all tensor modes. In addition, we propound two reconstruction procedures, a serial method (GTCS-S) and a parallelizable method (GTCS-P), both capable of recovering a tensor based on noiseless and noisy observations. We then compare the performance of the proposed methods with Kronecker compressed sensing (KCS) and multi-way compressed sensing (MWCS). We demonstrate experimentally that GTCS outperforms KCS and MWCS in terms of both reconstruction accuracy (within a range of compression ratios) and processing speed. The major disadvantage of our methods (and of MWCS as well), is that the achieved compression ratios may be worse than those offered by KCS.
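As a rough illustration of why mode-wise (Kronecker-structured) measurement avoids the huge vectorized sensing matrix, here is a hypothetical Python sketch; the function and variable names are assumptions and the measurement matrices are plain Gaussian stand-ins, not the chapter's construction.

```python
import numpy as np

def mode_n_product(T, U, mode):
    """Multiply tensor T by matrix U along the given mode."""
    T = np.moveaxis(T, mode, 0)
    shape = T.shape
    out = U @ T.reshape(shape[0], -1)
    return np.moveaxis(out.reshape((U.shape[0],) + shape[1:]), 0, mode)

# Measuring a 3-way tensor mode-by-mode with small matrices U1, U2, U3 instead
# of vectorizing it and applying one (m1*m2*m3) x (n1*n2*n3) matrix -- the
# memory/compute saving that motivates GTCS-style schemes.
n = (32, 32, 16)          # tensor dimensions
m = (12, 12, 8)           # per-mode measurement sizes
X = np.random.rand(*n)    # toy 3-way data tensor
U = [np.random.randn(m[i], n[i]) / np.sqrt(m[i]) for i in range(3)]

Y = X
for i in range(3):
    Y = mode_n_product(Y, U[i], i)   # separable, mode-wise measurements
print(Y.shape)            # (12, 12, 8): same as Kronecker sensing of vec(X)
```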
Zhipeng Xue, Junjie Ma, 2017
Turbo compressed sensing (Turbo-CS) is an efficient iterative algorithm for sparse signal recovery with partial orthogonal sensing matrices. In this paper, we extend the Turbo-CS algorithm to solve compressed sensing problems involving more general signal structure, including compressive image recovery and low-rank matrix recovery. A main difficulty for such an extension is that the original Turbo-CS algorithm requires prior knowledge of the signal distribution that is usually unavailable in practice. To overcome this difficulty, we propose to redesign the Turbo-CS algorithm by employing a generic denoiser that does not depend on the prior distribution and hence the name denoising-based Turbo-CS (D-Turbo-CS). We then derive the extrinsic information for a generic denoiser by following the Turbo-CS principle. Based on that, we optimize the parametric extrinsic denoisers to minimize the output mean-square error (MSE). Explicit expressions are derived for the extrinsic SURE-LET denoiser used in compressive image denoising and also for the singular value thresholding (SVT) denoiser used in low-rank matrix denoising. We find that the dynamics of D-Turbo-CS can be well described by a scalar recursion called MSE evolution, similar to the case for Turbo-CS. Numerical results demonstrate that D-Turbo-CS considerably outperforms the counterpart algorithms in both reconstruction quality and running time.
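For the low-rank case, the SVT denoiser mentioned above is the standard singular value thresholding operation; the following Python sketch is a generic implementation for illustration (the threshold value and names are assumptions, and the paper's SURE-based parameter tuning is not shown).

```python
import numpy as np

def svt_denoise(M, tau):
    """Singular value thresholding (SVT): a standard low-rank denoiser of the
    kind the abstract plugs into D-Turbo-CS (illustrative sketch only).

    Soft-thresholds the singular values of the noisy matrix M by tau.
    """
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s_thresh = np.maximum(s - tau, 0.0)          # shrink singular values
    return (U * s_thresh) @ Vt                   # same as U @ diag(s_thresh) @ Vt

# Example: denoise a noisy rank-3 matrix.
rng = np.random.default_rng(0)
L = rng.standard_normal((60, 3)) @ rng.standard_normal((3, 40))   # rank-3 signal
noisy = L + 0.5 * rng.standard_normal((60, 40))
denoised = svt_denoise(noisy, tau=5.0)
print(np.linalg.matrix_rank(denoised))   # much smaller than min(60, 40)
```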
This letter investigates the joint recovery of a frequency-sparse signal ensemble sharing a common frequency-sparse component from the collection of their compressed measurements. Unlike conventional approaches in compressed sensing, the frequencies follow an off-the-grid formulation and are continuously valued in $[0,1]$. As an extension of the atomic norm, the concatenated atomic norm minimization approach is proposed to handle the exact recovery of signals, which is reformulated as a computationally tractable semidefinite program. The optimality of the proposed approach is characterized using a dual certificate. Numerical experiments are performed to illustrate the effectiveness of the proposed approach and its advantage over separate recovery.
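For background, the standard single-signal atomic norm from off-the-grid compressed sensing and its usual semidefinite characterization are sketched below; this is generic material, not the concatenated variant proposed in the letter, whose exact form is not reproduced here.

```latex
% Standard (single-signal) atomic norm for a spectrally sparse x in C^n,
% with atoms a(f) = [1, e^{i2\pi f}, ..., e^{i2\pi(n-1)f}]^T and f in [0,1):
\[
  \lVert x \rVert_{\mathcal{A}}
    = \inf\Big\{ \sum_k |c_k| \;:\; x = \sum_k c_k\, a(f_k) \Big\}
    = \min_{u,\,t}\Big\{ \tfrac{1}{2n}\operatorname{tr}\!\big(\mathrm{Toep}(u)\big) + \tfrac{t}{2}
      \;:\; \begin{bmatrix} \mathrm{Toep}(u) & x \\ x^{H} & t \end{bmatrix} \succeq 0 \Big\}.
\]
% The letter's concatenated atomic norm couples such terms across the signal
% ensemble so that the common frequency-sparse component is recovered jointly.
```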
Compressed sensing (CS) or sparse signal reconstruction (SSR) is a signal processing technique that exploits the fact that acquired data can have a sparse representation in some basis. One popular technique to reconstruct or approximate the unknown sparse signal is the iterative hard thresholding (IHT) which however performs very poorly under non-Gaussian noise conditions or in the face of outliers (gross errors). In this paper, we propose a robust IHT method based on ideas from $M$-estimation that estimates the sparse signal and the scale of the error distribution simultaneously. The method has a negligible performance loss compared to IHT under Gaussian noise, but superior performance under heavy-tailed non-Gaussian noise conditions.
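For reference, the baseline being robustified is plain iterative hard thresholding; the sketch below shows that generic baseline only (hypothetical names, Gaussian toy data). The paper's method would replace the least-squares residual with an M-estimation loss and jointly estimate the noise scale, which is not reproduced here.

```python
import numpy as np

def iht(A, y, k, n_iters=100, step=None):
    """Plain iterative hard thresholding (IHT): textbook sketch, not the paper's code."""
    m, n = A.shape
    step = 1.0 / np.linalg.norm(A, 2) ** 2 if step is None else step
    x = np.zeros(n)
    for _ in range(n_iters):
        x = x + step * A.T @ (y - A @ x)       # gradient step on 0.5*||y - Ax||^2
        idx = np.argsort(np.abs(x))[:-k]       # indices of all but the k largest entries
        x[idx] = 0.0                           # hard thresholding: keep k largest
    return x

# Toy example: recover a 10-sparse vector from 80 Gaussian measurements.
rng = np.random.default_rng(1)
n, m, k = 200, 80, 10
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x_true                                 # noiseless measurements
x_hat = iht(A, y, k)
print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```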