This work focuses on the reconstruction of sparse signals from their 1-bit measurements. The context is that of 1-bit compressive sensing, where the measurements amount to quantizing (dithered) random projections. Our main contribution shows that, beyond binarizing the measurements, we can also reconstruct the signal using a binarized version of the sensing matrix. This binary representation of both the measurements and the sensing matrix can dramatically simplify the hardware architecture on embedded systems, enabling cheaper and more power-efficient alternatives. Within this framework, given a sensing matrix respecting the restricted isometry property (RIP), we prove that for any sparse signal the quantized projected back-projection (QPBP) algorithm achieves a reconstruction error decaying like $O(m^{-1/2})$ as the number of measurements $m$ increases. Simulations highlight the practicality of the developed scheme for different sensing scenarios, including random partial Fourier sensing.
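As a minimal numerical sketch of the scheme described above, the following Python snippet reconstructs a sparse signal from dithered 1-bit measurements using only the binarized sensing matrix at the decoder; the Gaussian sensing model, the uniform dither range, the known sparsity level, and the names qpbp and hard_threshold are illustrative assumptions, not the paper's reference implementation.

import numpy as np

def hard_threshold(z, k):
    # Keep the k largest-magnitude entries of z and zero out the rest.
    out = np.zeros_like(z)
    keep = np.argsort(np.abs(z))[-k:]
    out[keep] = z[keep]
    return out

def qpbp(y, A_bin, k, scale):
    # One-shot projected back-projection of the 1-bit measurements y
    # through the binarized sensing matrix A_bin = sign(A).
    return hard_threshold(scale * (A_bin.T @ y), k)

rng = np.random.default_rng(0)
n, m, k = 128, 5000, 5
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
x /= np.linalg.norm(x)                            # unit-norm k-sparse signal
A = rng.standard_normal((m, n))                   # Gaussian sensing matrix
tau = rng.uniform(-3.0, 3.0, size=m)              # uniform dither
y = np.sign(A @ x + tau)                          # dithered 1-bit measurements
x_hat = qpbp(y, np.sign(A), k, scale=1.0 / m)     # decode with the binarized matrix
x_hat /= np.linalg.norm(x_hat)
print("direction error:", np.linalg.norm(x_hat - x))

Repeating this experiment over a range of m should show the error shrinking roughly like $m^{-1/2}$, consistent with the rate stated above.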
The 1-bit compressed sensing framework enables the recovery of a sparse vector x from the sign information of each entry of its linear transformation. Discarding the amplitude information can significantly reduce the amount of data, which is highly beneficial in practical applications. In this paper, we present a Bayesian approach to signal reconstruction for 1-bit compressed sensing and analyze its typical performance using statistical mechanics. Utilizing the replica method, we show that the Bayesian approach enables better reconstruction than the L1-norm minimization approach, asymptotically saturating the performance obtained when the positions of the non-zero entries of the signal are known. We also test a message-passing algorithm for signal reconstruction based on belief propagation. The results of numerical experiments are consistent with those of the theoretical analysis.
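As a hedged sketch (not the paper's code) of one core ingredient of such a Bayesian reconstruction, the snippet below implements the posterior-mean denoiser for a Bernoulli-Gaussian prior, which a belief-propagation or AMP-style recursion would apply coordinate-wise to its Gaussian pseudo-observations; the sparsity rho and variance sigma_x2 are assumed prior parameters.

import numpy as np

def gauss(r, var):
    # Zero-mean Gaussian density with variance var, evaluated at r.
    return np.exp(-r ** 2 / (2.0 * var)) / np.sqrt(2.0 * np.pi * var)

def bg_posterior_mean(r, s2, rho=0.1, sigma_x2=1.0):
    # E[x | r] when x ~ rho * N(0, sigma_x2) + (1 - rho) * delta_0 and the
    # pseudo-observation is r = x + N(0, s2), as produced by message passing.
    on = rho * gauss(r, sigma_x2 + s2)          # evidence that x is active
    off = (1.0 - rho) * gauss(r, s2)            # evidence that x is zero
    shrink = sigma_x2 / (sigma_x2 + s2)         # Wiener shrinkage for the active case
    return (on * shrink * r) / (on + off)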
Is it possible to obliviously construct a set of hyperplanes H such that you can approximate a unit vector x when you are given only the side on which the vector lies with respect to every h in H? In the sparse recovery literature, where x is approximately k-sparse, this problem is called one-bit compressed sensing and has received a fair amount of attention over the last decade. In this paper we obtain the first scheme that achieves an almost optimal number of measurements and sublinear decoding time for one-bit compressed sensing in the non-uniform case. For a large range of parameters, we improve the state of the art in both the number of measurements and the decoding time.
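The toy Python snippet below only illustrates this problem setup, assuming Gaussian hyperplane normals and the classical averaging decoder x_hat proportional to H^T y; the measurement-optimal, sublinear-time decoder obtained in the paper is substantially more involved and is not reproduced here.

import numpy as np

rng = np.random.default_rng(1)
n, m, k = 256, 20000, 4
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
x /= np.linalg.norm(x)            # the unknown k-sparse unit vector
H = rng.standard_normal((m, n))   # rows are the (obliviously chosen) hyperplane normals
y = np.sign(H @ x)                # the side of each hyperplane on which x lies
x_hat = H.T @ y                   # simple linear decoder (linear time, not sublinear)
x_hat /= np.linalg.norm(x_hat)
print("approximation error:", np.linalg.norm(x_hat - x))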
One-bit radar, which performs signal sampling and quantization with a one-bit ADC, is a promising technology for many civilian applications due to its low cost and low power consumption. In this paper, problems encountered by one-bit LFMCW radar are studied and a two-stage target detection method, termed the dimension-reduced generalized approximate message passing (DR-GAMP) approach, is proposed. Firstly, the spectrum of one-bit quantized signals in a scenario with multiple targets is analyzed. The analysis indicates that high-order harmonics may result in false alarms (FAs) and cannot be neglected. Secondly, based on the spectrum analysis, the DR-GAMP approach is proposed to carry out target detection. Specifically, linear preprocessing methods and target predetection are first applied to reduce the dimension, and the GAMP algorithm is then utilized to suppress high-order harmonics and recover the true targets. Finally, numerical simulations are conducted to evaluate the performance of one-bit LFMCW radar under typical parameters. It is shown that, compared to conventional radar applying linear processing methods, one-bit LFMCW radar has a performance gain of about $1.3$ dB when the input signal-to-noise ratios (SNRs) of the targets are low. In the presence of a strong target, it has a performance loss of about $1.0$ dB.
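The hedged Python illustration below reproduces the qualitative point of the first-stage spectrum analysis: hard-limiting (one-bit quantizing) a single beat-frequency tone creates odd harmonics that a naive detector could declare as additional targets; the sampling rate, record length, and tone frequency are arbitrary toy values, not the paper's simulation parameters.

import numpy as np

fs, N, f0 = 1.0e6, 4000, 50.0e3                 # toy sampling rate, record length, beat tone
t = np.arange(N) / fs
y = np.sign(np.cos(2 * np.pi * f0 * t + 0.3))   # one-bit quantization of one target's beat signal
S = np.abs(np.fft.rfft(y))
f = np.fft.rfftfreq(N, 1.0 / fs)
strongest = np.sort(f[np.argsort(S)[-3:]])      # three strongest spectral lines
print(strongest)                                # ~[50e3, 150e3, 250e3]: fundamental plus odd harmonics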
1-bit compressive sensing aims to recover sparse signals from quantized 1-bit measurements. Designing efficient approaches that can handle noisy 1-bit measurements is important in a variety of applications. In this paper we use approximate message passing (AMP) to achieve this goal due to its high computational efficiency and state-of-the-art performance. In AMP the signal of interest is assumed to follow some prior distribution, and its posterior distribution can be computed and used to recover the signal. In practice, the parameters of the prior distribution are often unknown and need to be estimated. Previous works tried to find the parameters that maximize either the measurement likelihood or the Bethe free entropy, which becomes increasingly difficult for complicated probability models. Here we propose to treat the parameters as unknown variables and compute their posteriors via AMP as well, so that the parameters and the signal can be recovered jointly. This leads to a much simpler way to perform parameter estimation than previous methods and enables us to work with noisy 1-bit measurements. We further extend the proposed approach to a general quantization noise model that outputs multi-bit measurements. Experimental results show that the proposed approach generally performs much better than other state-of-the-art methods in the zero-noise and moderate-noise regimes, and outperforms them in most cases in the high-noise regime.
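A hedged sketch of the quantized measurement model referred to above follows: y = Q(Ax + w) with additive Gaussian noise w and a B-bit uniform quantizer, together with the per-measurement likelihood p(y | z) = Phi((u_y - z)/sigma) - Phi((l_y - z)/sigma) that an AMP-style algorithm would use; the midrise quantizer and all function names are assumptions made for illustration.

import numpy as np
from scipy.stats import norm

def quantize(z, delta, bits):
    # B-bit midrise uniform quantizer with step delta; returns the cell index
    # of each entry of z (0 or 1 in the 1-bit case), saturating at the edges.
    levels = 2 ** bits
    idx = np.clip(np.floor(z / delta) + levels // 2, 0, levels - 1)
    return idx.astype(int)

def cell_bounds(idx, delta, bits):
    # Lower/upper edges of each quantizer cell; the outermost cells are half-open.
    levels = 2 ** bits
    lo = (idx - levels // 2) * delta
    hi = lo + delta
    lo = np.where(idx == 0, -np.inf, lo)
    hi = np.where(idx == levels - 1, np.inf, hi)
    return lo, hi

def likelihood(idx, z, sigma, delta, bits):
    # p(y = idx | noiseless projection z) under additive Gaussian noise of std sigma.
    lo, hi = cell_bounds(idx, delta, bits)
    return norm.cdf((hi - z) / sigma) - norm.cdf((lo - z) / sigma)

rng = np.random.default_rng(3)
z = rng.standard_normal(5)                                   # noiseless projections Ax
y = quantize(z + 0.1 * rng.standard_normal(5), 0.5, bits=1)  # noisy 1-bit measurements
print(likelihood(y, z, sigma=0.1, delta=0.5, bits=1))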
We consider the problem of sparse signal reconstruction from noisy one-bit compressed measurements when the receiver has access to side-information (SI). We assume that the compressed measurements are corrupted by additive white Gaussian noise before quantization and by sign-flip errors after quantization. A generalized approximate message passing (GAMP)-based method for signal reconstruction from noisy one-bit compressed measurements is proposed, which is then extended to the case where the receiver has access to a signal that aids reconstruction, i.e., side-information. Two scenarios of SI are considered: (a) SI consisting of support information only, and (b) SI consisting of both support and amplitude information. Correspondingly, the SI is either a noisy estimate of the support of the signal or a noisy version of the signal itself. We develop reconstruction algorithms from one-bit measurements using the noisy SI available at the receiver. A Bernoulli distribution and a Laplacian distribution are used to model the noise that, when applied to the support and to the signal, yields the SI in cases (a) and (b), respectively. The Expectation-Maximization algorithm is used to estimate the noise parameters from the noisy one-bit compressed measurements and the SI. We show that one-bit compressed measurement-based signal reconstruction is quite sensitive to noise, and that the reconstruction performance can be significantly improved by exploiting the side-information available at the receiver.
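As a hedged illustration (not the paper's code) of the two SI models described above, the snippet below generates support-only SI by Bernoulli flips of the true support and support-and-amplitude SI by adding Laplacian noise to the signal; the parameters b and p_flip stand for the noise parameters that the Expectation-Maximization step would estimate, and all names are illustrative.

import numpy as np

rng = np.random.default_rng(2)
n, k = 200, 10
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)   # k-sparse signal

# (a) support-only SI: the true support pattern with Bernoulli(p_flip) bit flips.
p_flip = 0.05
support = (x != 0).astype(int)
si_support = np.where(rng.random(n) < p_flip, 1 - support, support)

# (b) support-and-amplitude SI: a noisy copy of the signal with Laplacian noise of scale b.
b = 0.05
si_signal = x + rng.laplace(scale=b, size=n)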