Is it possible to obliviously construct a set of hyperplanes H such that you can approximate a unit vector x when you are given the side on which the vector lies with respect to every h in H? In the sparse recovery literature, where x is approximately k-sparse, this problem is called one-bit compressed sensing and has received a fair amount of attention over the last decade. In this paper we obtain the first scheme that achieves an almost optimal number of measurements and sublinear decoding time for one-bit compressed sensing in the non-uniform case. For a large range of parameters, we improve the state of the art in both the number of measurements and the decoding time.
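To make the measurement model concrete, here is a minimal sketch in Python/NumPy: a Gaussian hyperplane ensemble, one bit per hyperplane, and a naive correlate-and-normalize decoder. The Gaussian ensemble and the decoder are illustrative assumptions only, not the sublinear-time scheme of the paper.

    import numpy as np

    rng = np.random.default_rng(0)
    n, k, m = 1000, 10, 400          # ambient dimension, sparsity, number of hyperplanes

    # k-sparse unit vector x
    x = np.zeros(n)
    support = rng.choice(n, k, replace=False)
    x[support] = rng.standard_normal(k)
    x /= np.linalg.norm(x)

    # Oblivious measurements: H is drawn without seeing x; only the side of each hyperplane is kept.
    H = rng.standard_normal((m, n))
    y = np.sign(H @ x)               # one bit per hyperplane

    # Naive (linear-time) decoder: correlate the signs back and renormalize.
    x_hat = H.T @ y
    x_hat /= np.linalg.norm(x_hat)
    print("l2 error:", np.linalg.norm(x - x_hat))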
This paper concerns the problem of 1-bit compressed sensing, where the goal is to estimate a sparse signal from a few of its binary measurements. We study a non-convex sparsity-constrained program and present a novel and concise analysis that moves away from the widely used notion of Gaussian width. We show that with high probability a simple algorithm is guaranteed to produce an accurate approximation to the normalized signal of interest under the $\ell_2$-metric. On top of that, we establish an ensemble of new results that address norm estimation, support recovery, and model misspecification. On the computational side, it is shown that the non-convex program can be solved via one-step hard thresholding, which is highly efficient in terms of both time complexity and memory footprint. On the statistical side, it is shown that our estimator enjoys a near-optimal error rate under standard conditions. The theoretical results are substantiated by numerical experiments.
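A minimal sketch of a one-step hard-thresholding estimator of the kind described above (Python/NumPy). The Gaussian measurement matrix and the correlate-threshold-normalize construction are assumptions made for illustration and need not match the exact program analyzed in the paper.

    import numpy as np

    def one_step_hard_threshold(A, y, k):
        """Correlate the signs with the measurement matrix, keep the k largest
        entries in magnitude, and renormalize. A single pass over the data."""
        z = A.T @ y / len(y)
        idx = np.argpartition(np.abs(z), -k)[-k:]
        x_hat = np.zeros(A.shape[1])
        x_hat[idx] = z[idx]
        return x_hat / np.linalg.norm(x_hat)

    rng = np.random.default_rng(1)
    n, k, m = 2000, 20, 800
    x = np.zeros(n)
    x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
    x /= np.linalg.norm(x)
    A = rng.standard_normal((m, n))
    y = np.sign(A @ x)
    print("l2 error:", np.linalg.norm(x - one_step_hard_threshold(A, y, k)))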
This note studies the worst-case recovery error of low-rank and bisparse matrices as a function of the number of one-bit measurements used to acquire them. First, by way of the concept of consistency width, precise estimates are given on how fast the recovery error can in theory decay. Next, an idealized recovery method is proved to reach the fourth-root of the optimal decay rate for Gaussian sensing schemes. This idealized method being impractical, an implementable recovery algorithm is finally proposed in the context of factorized Gaussian sensing schemes. It is shown to provide a recovery error decaying as the sixth-root of the optimal rate.
We consider the problem of sparse signal reconstruction from noisy one-bit compressed measurements when the receiver has access to side information (SI). We assume that the compressed measurements are corrupted by additive white Gaussian noise before quantization and by sign-flip errors after quantization. A generalized approximate message passing-based method for signal reconstruction from noisy one-bit compressed measurements is proposed, which is then extended to the case where the receiver has access to a signal that aids reconstruction, i.e., side information. Two scenarios of side information are considered: a) SI consisting of support information only, and b) SI consisting of both support and amplitude information; that is, the SI is either a noisy estimate of the support of the signal or a noisy version of the signal itself. We develop reconstruction algorithms from one-bit measurements using the noisy SI available at the receiver. A Bernoulli distribution and a Laplacian distribution are used to model the noise applied to the support and to the signal, yielding the SI for cases a) and b), respectively. The Expectation-Maximization algorithm is used to estimate the noise parameters from the noisy one-bit compressed measurements and the SI. We show that one-bit compressed measurement-based signal reconstruction is quite sensitive to noise, and that the reconstruction performance can be significantly improved by exploiting the side information available at the receiver.
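The following sketch (Python/NumPy) only illustrates the measurement and side-information model described above: AWGN before the quantizer, Bernoulli sign flips after it, Bernoulli errors on the support for case a), and Laplacian noise on the signal for case b). All noise levels are placeholder values, and no reconstruction (GAMP/EM) is shown.

    import numpy as np

    rng = np.random.default_rng(2)
    n, k, m = 500, 15, 300
    sigma, flip_p, lam, support_err_p = 0.1, 0.05, 0.2, 0.1   # illustrative values

    # k-sparse signal
    x = np.zeros(n)
    supp = rng.choice(n, k, replace=False)
    x[supp] = rng.standard_normal(k)

    # One-bit measurements: AWGN before the quantizer, random sign flips after it.
    A = rng.standard_normal((m, n)) / np.sqrt(m)
    y = np.sign(A @ x + sigma * rng.standard_normal(m))
    flips = rng.random(m) < flip_p
    y[flips] *= -1

    # Case a) support-only SI: the true support corrupted by Bernoulli errors.
    si_support = (x != 0) ^ (rng.random(n) < support_err_p)

    # Case b) support-and-amplitude SI: the signal corrupted by additive Laplacian noise.
    si_signal = x + rng.laplace(scale=lam, size=n)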
We propose a new one-bit feedback scheme in which the scheduling decision is based on the maximum expected weighted rate. We show the concavity of the problem in the $2$-user case and provide the optimal solution, which achieves the maximum weighted rate of the users. For the general asymmetric M-user case, we provide a heuristic method to achieve the maximum expected weighted rate. We show that the sum rate of our proposed scheme is very close to that of the full channel state information case, which serves as an upper bound on the achievable performance.
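A heavily hedged sketch of the kind of decision rule described above (Python/NumPy): each user reports one bit indicating whether its channel gain exceeds a threshold, and the scheduler picks the user with the largest Monte-Carlo estimate of the expected weighted rate given that bit. The Rayleigh-fading (exponentially distributed gain) model, the thresholds, and the weights are all assumptions for illustration, not the scheme of the paper.

    import numpy as np

    rng = np.random.default_rng(3)
    M = 4
    w   = np.array([1.0, 0.8, 0.6, 0.4])       # user weights (illustrative)
    tau = np.ones(M)                            # per-user feedback thresholds (illustrative)

    # Assumed Rayleigh fading: channel power gains are Exp(1).
    samples = rng.exponential(size=(100_000, M))

    def expected_weighted_rate(bits):
        """Monte-Carlo estimate of w_i * E[log2(1 + g_i) | one-bit feedback b_i]."""
        ewr = np.empty(M)
        for i in range(M):
            cond = samples[:, i] >= tau[i] if bits[i] else samples[:, i] < tau[i]
            ewr[i] = w[i] * np.log2(1.0 + samples[cond, i]).mean()
        return ewr

    g = rng.exponential(size=M)                 # current gains, unknown to the scheduler
    bits = g >= tau                             # one bit of feedback per user
    scheduled_user = int(np.argmax(expected_weighted_rate(bits)))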
The 1-bit compressed sensing framework enables the recovery of a sparse vector x from the sign information of each entry of its linear transformation. Discarding the amplitude information can significantly reduce the amount of data, which is highly beneficial in practical applications. In this paper, we present a Bayesian approach to signal reconstruction for 1-bit compressed sensing and analyze its typical performance using statistical mechanics. Utilizing the replica method, we show that the Bayesian approach enables better reconstruction than the L1-norm minimization approach, asymptotically saturating the performance obtained when the positions of the non-zero entries of the signal are known. We also test a message passing algorithm for signal reconstruction based on belief propagation. The results of numerical experiments are consistent with those of the theoretical analysis.
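A plausible formalization of this Bayesian setup, assuming a Bernoulli-Gaussian (spike-and-slab) prior with sparsity $\rho$ and a noiseless sign channel (both assumptions made here only for illustration), is the posterior

$P(\mathbf{x} \mid \mathbf{y}, A) \propto \prod_{\mu=1}^{M} \Theta\big(y_\mu (A\mathbf{x})_\mu\big) \prod_{i=1}^{N} \big[(1-\rho)\,\delta(x_i) + \rho\,\mathcal{N}(x_i; 0, 1)\big],$

where $\Theta$ is the unit step function. The reconstruction is then the direction of the posterior mean, $\hat{\mathbf{x}} \propto \mathbb{E}[\mathbf{x} \mid \mathbf{y}, A]$, which a belief-propagation-based message passing algorithm approximates by iterating messages on the corresponding factor graph.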