Quantized compressive sensing (QCS) deals with representing compressive signal measurements at finite precision, a mandatory step in any practical sensor design. To characterize the signal reconstruction quality in this framework, most existing theoretical analyses rely heavily on the quantization of sub-Gaussian random projections (e.g., Gaussian or Bernoulli). We show here that a simple uniform scalar quantizer is compatible with a large class of random sensing matrices known to satisfy, with high probability, the restricted isometry property (RIP). Critically, this compatibility arises from adding a uniform random vector, or dithering, to the linear signal observations before quantization. In this setting, we prove the existence of (at least) one signal reconstruction method, the projected back projection (PBP), whose reconstruction error decays as the number of quantized measurements increases. This holds with high probability for the estimation of sparse signals and low-rank matrices. We validate numerically the predicted error decay as the number of measurements increases.
We consider the total variation (TV) minimization problem used for compressive sensing and solve it using the generalized alternating projection (GAP) algorithm. Extensive results demonstrate the high performance of the proposed algorithm on compressive
We address the connection between the multiple-description (MD) problem and Delta-Sigma quantization. The inherent redundancy due to oversampling in Delta-Sigma quantization, and the simple linear-additive noise model resulting from dithered lattice
Distributed compressive sensing (DCS) improves the signal recovery performance of multi-signal ensembles by exploiting both intra- and inter-signal correlation and sparsity structure. However, existing DCS approaches were proposed for a very limited ensemble
In most compressive sensing problems, the l1 norm is used during the signal reconstruction process. In this article, the use of an entropy functional is proposed to approximate the l1 norm. A modified version of the entropy functional is continuous, different
We consider the question of estimating a real low-complexity signal (such as a sparse vector or a low-rank matrix) from the phase of complex random measurements. We show that in this phase-only compressive sensing (PO-CS) scenario, we can perfectly r
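The phase-only measurement model above can be illustrated in a few lines. This sketch only forms the measurements and checks their key property, invariance to positive rescaling of the signal (which is why recovery can hold only up to a positive scale factor); the dimensions and sparse test signal are assumptions, and the reconstruction algorithm itself is not shown.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 64, 128                            # illustrative dimensions

# complex Gaussian sensing matrix
A = (rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))) / np.sqrt(2 * m)

# sparse real test signal
x = np.zeros(n)
x[[3, 17, 42]] = [1.0, -0.5, 2.0]

# phase-only measurements: the magnitudes of A @ x are discarded
phases = np.angle(A @ x)

# positive rescaling of x leaves every measured phase unchanged
same_phases = np.angle(A @ (3.0 * x))
```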