
Compressed sensing in the presence of speckle noise

Posted by Shirin Jalali
Publication date: 2021
Research field: Information engineering
Language: English





The problem of recovering a structured signal from its linear measurements in the presence of speckle noise is studied. This problem appears in many imaging systems, such as synthetic aperture radar and optical coherence tomography. Current acquisition technology oversamples signals and converts the problem into a denoising problem with multiplicative noise. This paper, however, explores the possibility of reducing the number of measurements below the ambient dimension of the signal. The complications that arise in the study of multiplicative noise have so far impeded the theoretical analysis of such problems. This paper presents the first theoretical result on the recovery of signals from undersampled measurements corrupted by speckle noise. It is shown that if the signal class is structured, in the sense that its signals can be compressed efficiently, then accurate estimates of the signal can be obtained from fewer measurements than the ambient dimension. We demonstrate the effectiveness of the proposed methods through simulation results.
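As a rough illustration of the measurement model described above, here is a minimal sketch that simulates undersampled linear measurements of a sparse, non-negative signal corrupted by multiplicative (speckle-like) noise. The specific model y = A(w * x), the unit-mean exponential speckle distribution, and all dimensions are illustrative assumptions, not the paper's exact setup or recovery algorithm.

import numpy as np

rng = np.random.default_rng(0)
n, m, k = 256, 128, 10                    # ambient dimension, measurements (m < n), sparsity

# Structured (here: sparse) non-negative signal, as in intensity imaging.
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.uniform(1.0, 2.0, size=k)

# Multiplicative speckle noise; a unit-mean exponential is a common model for
# fully developed speckle in intensity images (an assumption, not the paper's choice).
w = rng.exponential(scale=1.0, size=n)

# Undersampled linear measurements of the speckle-corrupted signal: y = A (w * x).
A = rng.normal(size=(m, n)) / np.sqrt(m)
y = A @ (w * x)
print(y.shape)                            # (128,): fewer measurements than the ambient dimension n = 256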




Read also

This letter investigates the joint recovery of a frequency-sparse signal ensemble sharing a common frequency-sparse component from the collection of their compressed measurements. Unlike conventional compressed sensing approaches, the frequencies follow an off-the-grid formulation and are continuously valued in $[0, 1]$. As an extension of the atomic norm, a concatenated atomic norm minimization approach is proposed for the exact recovery of the signals, which is reformulated as a computationally tractable semidefinite program. The optimality of the proposed approach is characterized using a dual certificate. Numerical experiments illustrate the effectiveness of the proposed approach and its advantage over separate recovery.
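To make the semidefinite-programming reformulation concrete, the sketch below solves a plain (single-signal) atomic norm minimization for off-the-grid frequency recovery from partial samples, in the standard Tang-et-al. style rather than the concatenated formulation proposed in the letter; the dimensions, frequencies, and observed-sample pattern are made up for illustration.

import numpy as np
import cvxpy as cp

rng = np.random.default_rng(1)
n = 24                                           # signal length
x_true = sum(a * np.exp(2j * np.pi * f * np.arange(n))
             for a, f in [(1.0, 0.113), (0.8, 0.387), (0.6, 0.702)])
obs = np.sort(rng.choice(n, 14, replace=False))  # indices of observed samples

# Atomic norm minimization as an SDP: minimize (u_0 + t)/2 subject to
# [[T(u), x], [x^H, t]] >= 0 and x matching the observed samples,
# where T(u) is Hermitian Toeplitz with first column u.
x = cp.Variable(n, complex=True)
Z = cp.Variable((n + 1, n + 1), hermitian=True)  # the block matrix above
constraints = [Z >> 0, Z[:n, n] == x, x[obs] == x_true[obs]]
for i in range(1, n):
    for j in range(1, n):
        # Force the top-left n x n block to be Toeplitz (constant along diagonals).
        constraints.append(Z[i, j] == Z[i - 1, j - 1])

prob = cp.Problem(cp.Minimize(0.5 * cp.real(Z[0, 0] + Z[n, n])), constraints)
prob.solve(solver=cp.SCS)
print(np.linalg.norm(x.value - x_true) / np.linalg.norm(x_true))  # relative recovery error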
Zhipeng Xue, Junjie Ma, 2017
Turbo compressed sensing (Turbo-CS) is an efficient iterative algorithm for sparse signal recovery with partial orthogonal sensing matrices. In this paper, we extend the Turbo-CS algorithm to solve compressed sensing problems involving more general signal structure, including compressive image recovery and low-rank matrix recovery. A main difficulty in such an extension is that the original Turbo-CS algorithm requires prior knowledge of the signal distribution, which is usually unavailable in practice. To overcome this difficulty, we propose to redesign the Turbo-CS algorithm by employing a generic denoiser that does not depend on the prior distribution, hence the name denoising-based Turbo-CS (D-Turbo-CS). We then derive the extrinsic information for a generic denoiser by following the Turbo-CS principle. Based on that, we optimize the parametric extrinsic denoisers to minimize the output mean-square error (MSE). Explicit expressions are derived for the extrinsic SURE-LET denoiser used in compressive image denoising and for the singular value thresholding (SVT) denoiser used in low-rank matrix denoising. We find that the dynamics of D-Turbo-CS can be well described by a scalar recursion called MSE evolution, similar to the case of Turbo-CS. Numerical results demonstrate that D-Turbo-CS considerably outperforms the counterpart algorithms in both reconstruction quality and running time.
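As a small, self-contained example of the kind of generic denoiser mentioned above, the sketch below implements plain singular value thresholding (SVT) for low-rank matrix denoising; the matrix sizes, noise level, and threshold heuristic are illustrative assumptions, and the extrinsic/SURE-optimized versions used in D-Turbo-CS are not reproduced here.

import numpy as np

def svt_denoise(Y, tau):
    """Singular value thresholding: soft-threshold the singular values of Y by tau."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

# Toy example: denoise a noisy rank-3 matrix.
rng = np.random.default_rng(0)
L = rng.normal(size=(60, 3)) @ rng.normal(size=(3, 40))   # rank-3 ground truth
Y = L + 0.5 * rng.normal(size=L.shape)                     # additive Gaussian noise, sigma = 0.5
tau = 0.5 * (np.sqrt(60) + np.sqrt(40))                    # common heuristic: sigma * (sqrt(m) + sqrt(n))
L_hat = svt_denoise(Y, tau)
print(np.linalg.norm(L_hat - L) / np.linalg.norm(L),       # error after denoising ...
      np.linalg.norm(Y - L) / np.linalg.norm(L))           # ... versus error of the noisy input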
Evaluating the statistical dimension is a common tool for determining the asymptotic phase transition in compressed sensing problems with Gaussian ensembles. Unfortunately, the exact evaluation of the statistical dimension is very difficult, and it has become standard to replace it with an upper bound. To ensure that this technique is suitable, [1] introduced an upper bound on the gap between the statistical dimension and its approximation. In this work, we first show that the error bound in [1] becomes unacceptably large in some low-dimensional models, such as total variation and $\ell_1$ analysis minimization. Next, we develop a new error bound that significantly improves the estimation gap compared to [1]. In particular, unlike the bound in [1], which is not applicable to settings with overcomplete dictionaries, our bound exhibits a decaying behavior in such cases.
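For reference, the statistical dimension that both bounds approximate is the standard quantity from conic integral geometry (the definition below is general background, not a formula taken from the abstract above):
\[
\delta(C) \;=\; \mathbb{E}\,\bigl\|\Pi_C(g)\bigr\|_2^2, \qquad g \sim \mathcal{N}(0, I_n),
\]
where $\Pi_C$ denotes Euclidean projection onto the convex cone $C$, typically the descent cone of the regularizer (e.g., $\ell_1$, analysis $\ell_1$, or total variation) at the true signal; the phase transition of the associated recovery program occurs when the number of Gaussian measurements crosses $\delta(C)$.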
Compressed sensing (CS) exploits the sparsity of a signal in order to integrate acquisition and compression. CS theory enables exact reconstruction of a sparse signal from relatively few linear measurements via a suitable nonlinear minimization process. Conventional CS theory relies on vectorial data representation, which results in good compression ratios at the expense of increased computational complexity. In applications involving color images, video sequences, and multi-sensor networks, the data is intrinsically of high order and thus more suitably represented in tensorial form. Standard applications of CS to higher-order data typically involve representation of the data as long vectors that are in turn measured using large sampling matrices, thus imposing a huge computational and memory burden. In this chapter, we introduce Generalized Tensor Compressed Sensing (GTCS), a unified framework for compressed sensing of higher-order tensors which preserves the intrinsic structure of tensorial data with reduced computational complexity at reconstruction. We demonstrate that GTCS offers an efficient means of representing multidimensional data by providing simultaneous acquisition and compression from all tensor modes. In addition, we propose two reconstruction procedures, a serial method (GTCS-S) and a parallelizable method (GTCS-P), both capable of recovering a tensor from noiseless and noisy observations. We then compare the performance of the proposed methods with Kronecker compressed sensing (KCS) and multi-way compressed sensing (MWCS). We demonstrate experimentally that GTCS outperforms KCS and MWCS in terms of both reconstruction accuracy (within a range of compression ratios) and processing speed. The major disadvantage of our methods (and of MWCS as well) is that the achieved compression ratios may be worse than those offered by KCS.
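As a minimal illustration of mode-wise (Kronecker-structured) sensing of a higher-order tensor, the sketch below measures a small 3-way tensor with a separate sensing matrix per mode instead of applying one large matrix to the vectorized data; the dimensions and matrices are made-up examples, and no GTCS reconstruction step is shown.

import numpy as np

rng = np.random.default_rng(0)

# A small 3-way tensor measured mode by mode with separate Gaussian sensing matrices.
X = rng.normal(size=(20, 20, 20))
A1, A2, A3 = (rng.normal(size=(10, 20)) for _ in range(3))

# Mode-n products Y = X x1 A1 x2 A2 x3 A3, implemented with einsum.
Y = np.einsum('ia,jb,kc,abc->ijk', A1, A2, A3, X)   # shape (10, 10, 10)

# Vectorized, this equals a Kronecker-structured matrix applied to vec(X)
# (up to an index-ordering convention), but the large Kronecker matrix is never formed.
print(Y.shape)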
Xiaochen Zhao, Wei Dai, 2014
This paper studies the problem of power allocation in compressed sensing when different components of the unknown sparse signal have different probabilities of being non-zero. Given the prior information on the non-uniform sparsity and the total power budget, we are interested in how to optimally allocate the power across the columns of a Gaussian random measurement matrix so that the mean squared reconstruction error is minimized. Based on the state evolution technique originating from the work of Donoho, Maleki, and Montanari, we revise the so-called approximate message passing (AMP) algorithm for the reconstruction and quantify the MSE performance in the asymptotic regime. The closed form of the optimal power allocation is then obtained. The results show that in the presence of measurement noise, uniform power allocation, which results in the commonly used Gaussian random matrix with i.i.d. entries, is not optimal for non-uniformly sparse signals. Empirical results are presented to demonstrate the performance gain.
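For context, here is a minimal sketch of the baseline AMP recursion with soft thresholding and the Onsager correction, applied to an i.i.d. Gaussian matrix (i.e., uniform power allocation); the threshold rule, dimensions, and noise level are illustrative assumptions, and the paper's non-uniform column scaling and optimized allocation are not implemented.

import numpy as np

def soft(x, tau):
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def amp(y, A, n_iter=30, theta=1.5):
    """Basic AMP: soft-thresholding denoiser plus the Onsager correction term.
    The paper's non-uniform power allocation would rescale the columns of A first."""
    m, n = A.shape
    x, z = np.zeros(n), y.copy()
    for _ in range(n_iter):
        sigma = np.linalg.norm(z) / np.sqrt(m)        # effective noise-level estimate
        x_new = soft(x + A.T @ z, theta * sigma)      # thresholded pseudo-data
        b = np.count_nonzero(x_new) / m               # Onsager coefficient for soft thresholding
        z = y - A @ x_new + b * z                     # corrected residual
        x = x_new
    return x

# Toy example: k-sparse signal, i.i.d. Gaussian matrix, light measurement noise.
rng = np.random.default_rng(0)
n, m, k = 500, 250, 25
x0 = np.zeros(n)
x0[rng.choice(n, k, replace=False)] = rng.normal(size=k)
A = rng.normal(size=(m, n)) / np.sqrt(m)
y = A @ x0 + 0.01 * rng.normal(size=m)
x_hat = amp(y, A)
print(np.linalg.norm(x_hat - x0) / np.linalg.norm(x0))   # relative reconstruction error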