Xampling generalizes compressed sensing (CS) to reduced-rate sampling of analog signals. A unified framework is introduced for low-rate sampling and processing of signals lying in a union of subspaces. Xampling consists of two main blocks: analog compression that narrows down the input bandwidth prior to sampling with commercial devices, followed by a nonlinear algorithm that detects the input subspace prior to conventional signal processing. A variety of analog CS applications are reviewed within the unified Xampling framework, including a general filter-bank scheme for sparse shift-invariant spaces, periodic nonuniform sampling and modulated wideband conversion for multiband communications with unknown carrier frequencies, acquisition techniques for finite rate of innovation signals with applications to medical and radar imaging, and random demodulation of sparse harmonic tones. A hardware-oriented viewpoint is advocated throughout, addressing practical constraints and exemplifying hardware realizations where relevant. This material will appear as a chapter in the book Compressed Sensing: Theory and Applications, edited by Yonina Eldar and Gitta Kutyniok.
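As a concrete illustration of one acquisition scheme mentioned above, the sketch below writes random demodulation of sparse harmonic tones as a discrete measurement model: the signal is mixed with a pseudo-random ±1 chipping sequence and integrated over low-rate sampling intervals, yielding a standard CS system acting on the sparse spectrum. This is an illustrative toy model, not the chapter's hardware design; the dimensions and the chipping sequence are assumptions chosen for the example.

```python
# Minimal sketch (illustrative, not the chapter's hardware): random demodulation
# of sparse harmonic tones written as a discrete measurement model y = Phi x = (Phi F) s.
import numpy as np

rng = np.random.default_rng(0)
N, M, K = 512, 64, 5               # Nyquist grid length, low-rate samples, active tones

# K-sparse spectrum s and the corresponding time-domain signal x = F s.
s = np.zeros(N, dtype=complex)
s[rng.choice(N, size=K, replace=False)] = rng.standard_normal(K) + 1j * rng.standard_normal(K)
n = np.arange(N)
F = np.exp(2j * np.pi * np.outer(n, n) / N)    # DFT synthesis matrix
x = F @ s

# Analog compression: mix with a pseudo-random +/-1 chipping sequence, then
# integrate-and-dump over blocks of length N // M (low-rate sampling).
chips = rng.choice([-1.0, 1.0], size=N)
L = N // M
y = (chips * x).reshape(M, L).sum(axis=1)

# Effective CS matrix seen by the digital recovery algorithm: A = Phi F.
Phi = np.zeros((M, N))
for m in range(M):
    Phi[m, m * L:(m + 1) * L] = chips[m * L:(m + 1) * L]
A = Phi @ F
assert np.allclose(A @ s, y)       # same measurements, now in standard CS form
```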
Motivated by applications in unsourced random access, this paper develops a novel scheme for the problem of compressed sensing of binary signals. In this problem, the goal is to design a sensing matrix $A$ and a recovery algorithm such that the sparse binary vector $\mathbf{x}$ can be recovered reliably from the measurements $\mathbf{y} = A\mathbf{x} + \sigma\mathbf{z}$, where $\mathbf{z}$ is additive white Gaussian noise. We propose to design $A$ as the parity-check matrix of a low-density parity-check (LDPC) code, and to recover $\mathbf{x}$ from the measurements $\mathbf{y}$ using a Markov chain Monte Carlo algorithm, which runs relatively fast due to the sparse structure of $A$. The performance of our scheme is comparable to that of state-of-the-art schemes that use dense sensing matrices, while enjoying the advantages of a sparse sensing matrix.
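A minimal sketch of this kind of scheme is given below, assuming a fixed-column-weight {0,1} sensing matrix as a stand-in for an LDPC parity-check matrix and a single-site Gibbs sampler as the Markov chain Monte Carlo recovery; the sampler details, prior, and dimensions are illustrative assumptions rather than the paper's exact construction.

```python
# Toy sketch: binary CS with a sparse LDPC-like sensing matrix and Gibbs sampling.
import numpy as np

rng = np.random.default_rng(1)
n, m, k, sigma = 200, 80, 10, 0.1
col_weight = 3                                   # nonzeros per column, LDPC-style

# Sparse {0,1} sensing matrix with fixed column weight.
A = np.zeros((m, n))
for j in range(n):
    A[rng.choice(m, size=col_weight, replace=False), j] = 1.0

x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = 1.0
y = A @ x_true + sigma * rng.standard_normal(m)

# Single-site Gibbs sampling over x in {0,1}^n with a Bernoulli(k/n) prior
# and a Gaussian likelihood; only the few rows touching column i change per flip.
p = k / n
log_prior_ratio = np.log(p / (1 - p))            # log P(x_i=1) - log P(x_i=0)
x = np.zeros(n)
r = y - A @ x                                    # running residual
for sweep in range(200):
    for i in rng.permutation(n):
        a_i = A[:, i]
        r0 = r + a_i * x[i]                      # residual with x_i set to 0
        # Energy difference between x_i = 1 and x_i = 0.
        dE = (np.sum((r0 - a_i) ** 2) - np.sum(r0 ** 2)) / (2 * sigma ** 2)
        p1 = 1.0 / (1.0 + np.exp(np.clip(dE - log_prior_ratio, -500, 500)))
        x[i] = float(rng.random() < p1)
        r = r0 - a_i * x[i]

print("bit errors:", int(np.sum(x != x_true)))
```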
Compressed sensing is a paradigm within signal processing that provides the means for recovering structured signals from linear measurements in a highly efficient manner. Originally devised for the recovery of sparse signals, it has become clear that a similar methodology carries over to a wealth of other classes of structured signals. In this work, we provide an overview of the theory of compressed sensing for a particularly rich family of such signals, namely hierarchically structured signals. Examples of such signals are blocked vectors with only a few non-vanishing sparse blocks. We present recovery algorithms based on efficient hierarchical hard-thresholding. The algorithms are guaranteed to converge stably and robustly to the correct solution provided the measurement map acts approximately isometrically when restricted to the signal class. We then provide a series of results establishing that this required condition holds for large classes of measurement ensembles. Building upon this machinery, we sketch practical applications of this framework in machine-type and quantum communication.
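The sketch below illustrates the hierarchical thresholding idea on (s, sigma)-sparse blocked vectors: the thresholding operator keeps the sigma largest entries within each block and then the s blocks of largest norm, combined here with a hard-thresholding-pursuit-style iteration. This is a simplified illustrative variant; the dimensions, Gaussian measurement ensemble, and iteration count are assumptions and not the exact algorithms analyzed in the text.

```python
# Sketch of a hierarchical hard-thresholding-pursuit-style iteration (illustrative).
import numpy as np

def hierarchical_threshold(v, num_blocks, s, sigma):
    """Project onto (s, sigma)-sparse vectors: sigma nonzeros in at most s blocks."""
    blocks = v.reshape(num_blocks, -1).copy()
    for b in blocks:                                 # keep sigma largest per block
        b[np.argsort(np.abs(b))[:-sigma]] = 0.0
    norms = np.linalg.norm(blocks, axis=1)
    blocks[np.argsort(norms)[:-s]] = 0.0             # keep s blocks of largest norm
    return blocks.ravel()

rng = np.random.default_rng(2)
num_blocks, block_len, s, sigma = 20, 25, 3, 4
N, m = num_blocks * block_len, 150

# Ground truth: s active blocks, each sigma-sparse.
x_true = np.zeros(N)
for b in rng.choice(num_blocks, size=s, replace=False):
    idx = b * block_len + rng.choice(block_len, size=sigma, replace=False)
    x_true[idx] = rng.standard_normal(sigma)

A = rng.standard_normal((m, N)) / np.sqrt(m)          # Gaussian measurement ensemble
y = A @ x_true

# Detect a hierarchically sparse support from the gradient step, then least-squares refit.
x = np.zeros(N)
for _ in range(20):
    proxy = x + A.T @ (y - A @ x)
    support = hierarchical_threshold(proxy, num_blocks, s, sigma) != 0
    x = np.zeros(N)
    x[support] = np.linalg.lstsq(A[:, support], y, rcond=None)[0]

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```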
We provide a scheme for exploring the reconstruction limit of compressed sensing by minimizing a general cost function under random measurement constraints for generic correlated signal sources. Our scheme is based on the statistical-mechanical replica method for dealing with random systems. As a simple but non-trivial example, we apply the scheme to a sparse autoregressive model, in which the first differences of the correlated time-series input are sparse, and evaluate the critical compression rate for perfect reconstruction. The results are in good agreement with numerical experiments on signal reconstruction.
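The replica computation itself is analytic, but a minimal numerical counterpart of the sparse autoregressive example can be sketched as follows: a signal whose first differences are sparse is observed through random Gaussian measurements and reconstructed by l1-minimizing the differences, cast here as a linear program after a change of variables; the dimensions and solver choice are illustrative assumptions.

```python
# Companion numerical sketch: l1 minimization of the first differences.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(3)
N, m, k = 200, 100, 10                       # signal length, measurements, jumps

# Piecewise-constant signal: sparse increments u, x = cumulative sum of u.
u_true = np.zeros(N)
u_true[rng.choice(N, size=k, replace=False)] = rng.standard_normal(k)
x_true = np.cumsum(u_true)

A = rng.standard_normal((m, N)) / np.sqrt(m)
y = A @ x_true                               # noiseless random measurements

# With C the lower-triangular cumulative-sum matrix, y = (A C) u, and we solve
# min ||u||_1  s.t.  (A C) u = y  via the standard LP split u = u_plus - u_minus.
B = A @ np.tril(np.ones((N, N)))
c = np.ones(2 * N)
res = linprog(c, A_eq=np.hstack([B, -B]), b_eq=y, bounds=(0, None), method="highs")
u_hat = res.x[:N] - res.x[N:]
x_hat = np.cumsum(u_hat)

print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```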
This paper studies the problem of power allocation in compressed sensing when different components of the unknown sparse signal have different probabilities of being non-zero. Given the prior information on the non-uniform sparsity and the total power budget, we are interested in how to optimally allocate the power across the columns of a Gaussian random measurement matrix so that the mean squared error (MSE) of reconstruction is minimized. Based on the state evolution technique originating from the work of Donoho, Maleki, and Montanari, we revise the approximate message passing (AMP) algorithm for reconstruction and quantify its MSE performance in the asymptotic regime. A closed-form expression for the optimal power allocation is then obtained. The results show that in the presence of measurement noise, uniform power allocation, which yields the commonly used Gaussian random matrix with i.i.d. entries, is not optimal for non-uniformly sparse signals. Empirical results are presented to demonstrate the performance gain.
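For illustration, the sketch below runs a standard soft-threshold AMP iteration (with its Onsager correction) on measurements taken through a column-power-weighted Gaussian matrix; the power profile, the non-uniform activity probabilities, and the threshold tuning are placeholder assumptions and do not reproduce the paper's revised AMP or its optimal allocation.

```python
# Illustrative sketch: soft-threshold AMP with a column-power-weighted Gaussian matrix.
import numpy as np

rng = np.random.default_rng(4)
n, m, sigma = 1000, 400, 0.05

# Non-uniform sparsity: the first half of the coordinates is more likely active.
p_active = np.where(np.arange(n) < n // 2, 0.15, 0.05)
x_true = rng.standard_normal(n) * (rng.random(n) < p_active)

# Column power allocation (placeholder: uniform, i.e. the usual i.i.d. matrix).
power = np.ones(n)
A = rng.standard_normal((m, n)) / np.sqrt(m) * np.sqrt(power)
y = A @ x_true + sigma * rng.standard_normal(m)

def soft(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

# AMP iteration with soft-threshold denoiser and Onsager correction term.
x, z = np.zeros(n), y.copy()
alpha = 1.5                                      # threshold tuning parameter
for _ in range(30):
    tau = alpha * np.linalg.norm(z) / np.sqrt(m) # effective noise level estimate
    v = x + A.T @ z                              # pseudo-data
    x_new = soft(v, tau)
    onsager = z * np.count_nonzero(x_new) / m    # (n/m) * average of the denoiser derivative
    z = y - A @ x_new + onsager
    x = x_new

print("MSE:", np.mean((x - x_true) ** 2))
```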
Compressed sensing (CS) shows that a signal having a sparse or compressible representation can be recovered from a small set of linear measurements. In classical CS theory, the sampling matrix and the representation matrix are assumed to be known exactly in advance. In practice, however, uncertainties arise from sampling distortion, the finite grid of the dictionary's parameter space, and so on. In this paper, we adopt a generalized sparse signal model that simultaneously accounts for uncertainties in the sampling and representation matrices. Based on this signal model, a new optimization model for robust sparse signal reconstruction is proposed, which can be derived through a stochastic robust approximation analysis. Both convex relaxation and greedy algorithms are used to solve the optimization problem. For the convex relaxation method, a sufficient condition for recovery is given; for the greedy algorithm, robustness is achieved by pre-processing the sensing matrix and the measurements. In numerical experiments, results on both simulated data and real-life ECG data show that the proposed method outperforms existing methods.
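To make the stochastic robust approximation step concrete, the sketch below shows one standard way such an objective arises: if the unknown perturbation E of the available matrix is zero-mean with E[E^T E] = mu I, then the expected residual equals ||y - A_hat x||^2 + mu ||x||^2, giving an elastic-net-type problem solvable by proximal gradient descent. The perturbation model, parameter values, and solver are illustrative assumptions, not the paper's exact formulation.

```python
# Illustrative sketch: sparse recovery under matrix uncertainty via a stochastic
# robust objective ||y - A_hat x||^2 + mu ||x||^2 + lam ||x||_1 (proximal gradient).
import numpy as np

rng = np.random.default_rng(5)
n, m, k, delta, noise = 256, 100, 8, 0.05, 0.01

x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)

A_true = rng.standard_normal((m, n)) / np.sqrt(m)
E = delta * rng.standard_normal((m, n)) / np.sqrt(m)   # per-entry variance delta^2 / m
A_hat = A_true - E                                     # matrix actually known to the solver
y = A_true @ x_true + noise * rng.standard_normal(m)

# Since E has zero mean and E[E^T E] = (m * delta^2 / m) I = delta^2 I,
# the expected residual adds a ridge term with weight mu = delta^2.
mu, lam = delta ** 2, 0.01
L = np.linalg.norm(A_hat, 2) ** 2 + mu                 # Lipschitz constant of the smooth part

x = np.zeros(n)
for _ in range(1000):
    grad = A_hat.T @ (A_hat @ x - y) + mu * x          # gradient of the smooth part
    v = x - grad / L
    x = np.sign(v) * np.maximum(np.abs(v) - lam / L, 0.0)  # soft-threshold (l1 prox)

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```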