
Bayesian signal reconstruction for 1-bit compressed sensing

Posted by Yingying Xu
Publication date: 2014
Paper language: English





The 1-bit compressed sensing framework enables the recovery of a sparse vector x from the sign information of each entry of its linear transformation. Discarding the amplitude information can significantly reduce the amount of data, which is highly beneficial in practical applications. In this paper, we present a Bayesian approach to signal reconstruction for 1-bit compressed sensing, and analyze its typical performance using statistical mechanics. Utilizing the replica method, we show that the Bayesian approach enables better reconstruction than the L1-norm minimization approach, asymptotically saturating the performance obtained when the positions of the signal's non-zero entries are known. We also test a message-passing algorithm for signal reconstruction based on belief propagation. The results of numerical experiments are consistent with those of the theoretical analysis.
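To make the measurement model concrete, the sketch below generates sign-only observations y = sign(Φx) of a sparse vector and applies a naive back-projection-plus-thresholding baseline. This is only an illustration of the 1-bit setting; it is not the Bayesian or belief-propagation reconstruction studied in the paper, and the dimensions and sparsity level are arbitrary choices.

```python
import numpy as np

# Minimal sketch of the 1-bit compressed sensing measurement model.
# NOT the paper's Bayesian / belief-propagation reconstruction; it only
# illustrates sign-only measurements and a naive baseline for orientation.

rng = np.random.default_rng(0)
n, m, k = 256, 512, 16            # signal length, measurements, sparsity (illustrative)

# k-sparse signal with Gaussian non-zero entries; amplitude is lost anyway,
# so normalize to compare directions.
x = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x[support] = rng.standard_normal(k)
x /= np.linalg.norm(x)

# i.i.d. Gaussian sensing matrix and 1-bit (sign) measurements
Phi = rng.standard_normal((m, n)) / np.sqrt(m)
y = np.sign(Phi @ x)

# Naive baseline: back-project the signs, keep the k largest entries, renormalize.
bp = Phi.T @ y
x_hat = np.zeros(n)
top_k = np.argsort(np.abs(bp))[-k:]
x_hat[top_k] = bp[top_k]
x_hat /= np.linalg.norm(x_hat)

print("correlation with true signal:", float(x @ x_hat))
```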




Read also

This work focuses on the reconstruction of sparse signals from their 1-bit measurements. The context is that of 1-bit compressive sensing, where the measurements amount to quantizing (dithered) random projections. Our main contribution shows that, in addition to the measurements, the sensing matrix itself can be binarized while still allowing reconstruction of the signal. This binary representation of both the measurements and the sensing matrix can dramatically simplify the hardware architecture on embedded systems, enabling cheaper and more power-efficient alternatives. Within this framework, given a sensing matrix respecting the restricted isometry property (RIP), we prove that for any sparse signal the quantized projected back-projection (QPBP) algorithm achieves a reconstruction error decaying like O(m^{-1/2}) as the number of measurements m increases. Simulations highlight the practicality of the developed scheme for different sensing scenarios, including random partial Fourier sensing.
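The following sketch mimics the setting described above: both the measurements and the sensing matrix are binary, and reconstruction back-projects the quantized, dithered projections and hard-thresholds onto k-sparse vectors. The dither range and rescaling constant are heuristic assumptions, so this should be read as a schematic of a projected back-projection step, not the paper's exact QPBP algorithm.

```python
import numpy as np

# Schematic of projected back-projection with binary measurements AND a
# binarized (+/-1) sensing matrix. Dither range, scaling, and dimensions
# are assumptions for illustration only.

rng = np.random.default_rng(1)
n, m, k = 128, 2000, 8

x = np.zeros(n)
x[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)

B = np.sign(rng.standard_normal((m, n)))      # binarized sensing matrix
lam = 3.0 * np.sqrt(k)                        # assumed dither range covering typical |B_i @ x|
dither = rng.uniform(-lam, lam, m)
y = np.sign(B @ x + dither)                   # 1-bit quantized, dithered projections

# Back-project, then project onto k-sparse vectors (hard thresholding).
z = (lam / m) * B.T @ y                       # heuristic rescaling
x_hat = np.zeros(n)
idx = np.argsort(np.abs(z))[-k:]
x_hat[idx] = z[idx]

print("relative error:", float(np.linalg.norm(x - x_hat) / np.linalg.norm(x)))
```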
Vasileios Nakos, 2017
Is it possible to obliviously construct a set of hyperplanes H such that one can approximate a unit vector x when given only the side on which the vector lies with respect to every h in H? In the sparse recovery literature, where x is approximately k-sparse, this problem is called one-bit compressed sensing and has received a fair amount of attention over the last decade. In this paper we obtain the first scheme that achieves almost optimal measurements and sublinear decoding time for one-bit compressed sensing in the non-uniform case. For a large range of parameters, we improve the state of the art in both the number of measurements and the decoding time.
We introduce a recursive algorithm for performing compressed sensing on streaming data. The approach consists of (a) recursive encoding, where we sample the input stream via overlapping windowing and use the previous measurement in obtaining the next one, and (b) recursive decoding, where the signal estimate from the previous window is used to achieve faster convergence of the iterative optimization scheme applied to decode the new one. To remove estimation bias, a two-step estimation procedure is proposed, comprising support-set detection and signal-amplitude estimation. Estimation accuracy is enhanced by a non-linear voting method and by averaging estimates over multiple windows. We analyze the computational complexity and estimation error, and show that the normalized error variance asymptotically goes to zero for sublinear sparsity. Our simulation results show a speed-up of an order of magnitude over traditional CS, while obtaining significantly lower reconstruction error under mild conditions on the signal magnitudes and the noise level.
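As a rough illustration of the recursive decoding idea, the sketch below warm-starts a plain iterative soft-thresholding (ISTA) solver on each overlapping window with the shifted estimate from the previous window. The window length, hop size, penalty, and solver are illustrative assumptions and do not reproduce the paper's encoding scheme, voting method, or bias-removal step.

```python
import numpy as np

def ista(A, y, x0, lam=0.05, iters=200):
    """Iterative soft-thresholding for min 0.5*||Ax - y||^2 + lam*||x||_1."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2      # 1 / Lipschitz constant of the gradient
    x = x0.copy()
    for _ in range(iters):
        v = x - step * (A.T @ (A @ x - y))
        x = np.sign(v) * np.maximum(np.abs(v) - step * lam, 0.0)
    return x

rng = np.random.default_rng(2)
n, m, k, hop = 200, 80, 10, 50                  # window length, measurements/window, sparsity, hop

stream = np.zeros(3 * n)                        # sparse "streaming" signal
stream[rng.choice(stream.size, size=3 * k, replace=False)] = rng.standard_normal(3 * k)

A = rng.standard_normal((m, n)) / np.sqrt(m)
x_prev = np.zeros(n)
for start in range(0, stream.size - n + 1, hop):
    window = stream[start:start + n]
    y = A @ window
    # Warm start: shift the previous estimate by the hop, pad the new samples with zeros.
    x0 = np.concatenate([x_prev[hop:], np.zeros(hop)])
    x_prev = ista(A, y, x0)
    err = np.linalg.norm(x_prev - window) / (np.linalg.norm(window) + 1e-12)
    print(f"window starting at {start}: relative error {err:.3f}")
```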
In the problem of adaptive compressed sensing, one wants to estimate an approximately $k$-sparse vector $x \in \mathbb{R}^n$ from $m$ linear measurements $A_1 x, A_2 x, \ldots, A_m x$, where $A_i$ can be chosen based on the outcomes $A_1 x, \ldots, A_{i-1} x$ of previous measurements. The goal is to output a vector $\hat{x}$ for which $$\|x - \hat{x}\|_p \le C \cdot \min_{k\text{-sparse } x'} \|x - x'\|_q,$$ with probability at least $2/3$, where $C > 0$ is an approximation factor. Indyk, Price and Woodruff (FOCS'11) gave an algorithm for $p = q = 2$ and $C = 1 + \epsilon$ with $O((k/\epsilon) \log\log(n/k))$ measurements and $O(\log^*(k) \log\log n)$ rounds of adaptivity. We first improve their bounds, obtaining a scheme with $O(k \cdot \log\log(n/k) + (k/\epsilon) \cdot \log\log(1/\epsilon))$ measurements and $O(\log^*(k) \log\log n)$ rounds, as well as a scheme with $O((k/\epsilon) \cdot \log\log(n \log(n/k)))$ measurements and an optimal $O(\log\log n)$ rounds. We then provide novel adaptive compressed sensing schemes with improved bounds for $(p, p)$ for every $0 < p < 2$. We show that the improvement from $O(k \log(n/k))$ measurements to $O(k \log\log(n/k))$ measurements in the adaptive setting can persist with a better $\epsilon$-dependence for other values of $p$ and $q$. For example, when $(p, q) = (1, 1)$, we obtain $O(\frac{k}{\sqrt{\epsilon}} \cdot \log\log n \cdot \log^3(\frac{1}{\epsilon}))$ measurements.
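To illustrate what "rounds of adaptivity" means operationally, the toy example below recovers an exactly 1-sparse, noiseless vector by choosing each measurement vector as the indicator of a sub-interval selected from the outcomes of earlier measurements (a bisection). It only demonstrates the adaptive measurement interface; it implements none of the schemes or bounds discussed in this abstract.

```python
import numpy as np

# Toy adaptive measurement loop: each measurement vector a depends on the
# outcomes of previous measurements. Recovers a single non-zero coordinate
# by bisection in about log2(n) measurements. Illustration only.

rng = np.random.default_rng(3)
n = 1 << 10
x = np.zeros(n)
x[rng.integers(n)] = 1.0                 # exactly 1-sparse, noiseless

lo, hi = 0, n
measurements = 0
while hi - lo > 1:
    mid = (lo + hi) // 2
    a = np.zeros(n)
    a[lo:mid] = 1.0                      # A_i chosen using previous outcomes (lo, hi)
    measurements += 1
    if abs(a @ x) > 0:                   # this round's outcome decides the next A_i
        hi = mid
    else:
        lo = mid

print("recovered index:", lo,
      "true index:", int(np.flatnonzero(x)[0]),
      "measurements:", measurements)
```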
Signal processing techniques have been developed that use different strategies to bypass the Nyquist sampling theorem in order to recover more information than a traditional discrete Fourier transform. Here we examine three such methods: filter diagonalization, compressed sensing, and super-resolution. We apply them to a broad range of signal forms commonly found in science and engineering in order to discover when and how each method can be used most profitably. We find that filter diagonalization provides the best results for Lorentzian signals, while compressed sensing and super-resolution perform better for arbitrary signals.