
Instance-Optimal Compressed Sensing via Posterior Sampling

Added by Ajil Jalal
Publication date: 2021
Language: English





We characterize the measurement complexity of compressed sensing of signals drawn from a known prior distribution, even when the support of the prior is the entire space (rather than, say, sparse vectors). We show, for Gaussian measurements and \emph{any} prior distribution on the signal, that the posterior sampling estimator achieves near-optimal recovery guarantees. Moreover, this result is robust to model mismatch, as long as the distribution estimate (e.g., from an invertible generative model) is close to the true distribution in Wasserstein distance. We implement the posterior sampling estimator for deep generative priors using Langevin dynamics, and empirically find that it produces accurate estimates with more diversity than MAP.
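As a rough illustration of the estimator described above, the sketch below runs unadjusted Langevin dynamics on the log-posterior for a linear Gaussian measurement model y = Ax + noise. The helper `prior_score` (returning the gradient of the prior's log-density) is a hypothetical stand-in for whatever score a generative prior provides, and the step size, iteration count, and noise level are placeholder values; treat this as a minimal sketch, not the authors' implementation.

```python
import numpy as np

def langevin_posterior_sample(y, A, prior_score, noise_std=0.1,
                              step=1e-4, n_steps=5000, rng=None):
    """Draw an approximate sample from p(x | y) for y = A x + N(0, noise_std^2 I)
    using unadjusted Langevin dynamics on the log-posterior.

    prior_score(x) is assumed to return grad_x log p(x) of the (generative) prior.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = rng.standard_normal(A.shape[1])                   # arbitrary starting point
    for _ in range(n_steps):
        # Log-posterior gradient = likelihood term + prior score
        grad = A.T @ (y - A @ x) / noise_std**2 + prior_score(x)
        x = x + step * grad + np.sqrt(2 * step) * rng.standard_normal(x.shape)
    return x

# Toy usage with a standard Gaussian prior, whose score is simply -x:
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 100)) / np.sqrt(50)          # Gaussian measurement matrix
x_true = rng.standard_normal(100)
y = A @ x_true + 0.1 * rng.standard_normal(50)
x_hat = langevin_posterior_sample(y, A, prior_score=lambda x: -x, rng=rng)
```

Because the output is a sample rather than a single optimum, repeated runs give different reconstructions, which is the source of the diversity mentioned above.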




Read More

We introduce a recursive algorithm for performing compressed sensing on streaming data. The approach consists of a) recursive encoding, where we sample the input stream via overlapping windowing and make use of the previous measurement in obtaining the next one, and b) recursive decoding, where the signal estimate from the previous window is utilized in order to achieve faster convergence in an iterative optimization scheme applied to decode the new one. To remove estimation bias, a two-step estimation procedure is proposed comprising support set detection and signal amplitude estimation. Estimation accuracy is enhanced by a non-linear voting method and averaging estimates over multiple windows. We analyze the computational complexity and estimation error, and show that the normalized error variance asymptotically goes to zero for sublinear sparsity. Our simulation results show a speed-up of an order of magnitude over traditional CS, while obtaining significantly lower reconstruction error under mild conditions on the signal magnitudes and the noise level.
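The warm-start idea in the recursive decoding step can be pictured with the toy sketch below: an ISTA decoder that reuses the previous window's estimate as the starting point for the next solve. This is not the paper's full procedure (which adds support detection, a voting step, and averaging over windows), and the window length, hop size, and regularization weight are arbitrary illustrative choices.

```python
import numpy as np

def ista(A, y, lam=0.05, x0=None, n_iters=200):
    """Minimize 0.5*||Ax - y||^2 + lam*||x||_1 with plain ISTA."""
    x = np.zeros(A.shape[1]) if x0 is None else x0.copy()
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the gradient
    for _ in range(n_iters):
        z = x - A.T @ (A @ x - y) / L          # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft thresholding
    return x

def decode_stream(A, signal, window=100, hop=20):
    """Decode overlapping windows of a streaming signal, warm-starting each
    solve with the previous window's estimate for faster convergence."""
    x_prev, estimates = None, []
    for start in range(0, len(signal) - window + 1, hop):
        y = A @ signal[start:start + window]   # measurements for this window
        x_prev = ista(A, y, x0=x_prev)         # warm start from the previous window
        estimates.append(x_prev)
    return estimates
```

Because consecutive windows overlap heavily, the previous estimate is already close to the new optimum, which is why the warm-started solves need far fewer iterations.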
In the problem of adaptive compressed sensing, one wants to estimate an approximately $k$-sparse vector $x \in \mathbb{R}^n$ from $m$ linear measurements $A_1 x, A_2 x, \ldots, A_m x$, where $A_i$ can be chosen based on the outcomes $A_1 x, \ldots, A_{i-1} x$ of previous measurements. The goal is to output a vector $\hat{x}$ for which $$\|x-\hat{x}\|_p \le C \cdot \min_{k\text{-sparse } x'} \|x-x'\|_q,$$ with probability at least $2/3$, where $C > 0$ is an approximation factor. Indyk, Price and Woodruff (FOCS 2011) gave an algorithm for $p=q=2$ and $C = 1+\epsilon$ with $O((k/\epsilon) \log\log (n/k))$ measurements and $O(\log^*(k) \log\log (n))$ rounds of adaptivity. We first improve their bounds, obtaining a scheme with $O(k \cdot \log\log (n/k) + (k/\epsilon) \cdot \log\log(1/\epsilon))$ measurements and $O(\log^*(k) \log\log (n))$ rounds, as well as a scheme with $O((k/\epsilon) \cdot \log\log (n \log (n/k)))$ measurements and an optimal $O(\log\log (n))$ rounds. We then provide novel adaptive compressed sensing schemes with improved bounds for $(p,p)$ for every $0 < p < 2$. We show that the improvement from $O(k \log(n/k))$ measurements to $O(k \log\log (n/k))$ measurements in the adaptive setting can persist with a better $\epsilon$-dependence for other values of $p$ and $q$. For example, when $(p,q) = (1,1)$, we obtain $O(\frac{k}{\sqrt{\epsilon}} \cdot \log\log n \cdot \log^3 (\frac{1}{\epsilon}))$ measurements.
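The defining feature of the adaptive setting is simply that each measurement vector may depend on earlier outcomes. A minimal, self-contained illustration of that interface (not of any of the schemes above) is the classical bisection strategy for an exactly 1-sparse vector, which pins down the support with about log2(n) adaptively chosen measurements:

```python
import numpy as np

def adaptive_locate_1sparse(x):
    """Locate the single non-zero entry of an exactly 1-sparse vector x using
    adaptive linear measurements: each A_i is the indicator of a half of the
    current candidate range, chosen from the outcomes of earlier measurements."""
    lo, hi = 0, len(x)
    n_measurements = 0
    while hi - lo > 1:
        mid = (lo + hi) // 2
        a = np.zeros(len(x))
        a[lo:mid] = 1.0                  # measurement designed from earlier outcomes
        n_measurements += 1
        if abs(a @ x) > 1e-12:           # the non-zero entry lies in the left half
            hi = mid
        else:                            # otherwise it lies in the right half
            lo = mid
    return lo, n_measurements
```

The schemes above extend this kind of outcome-dependent measurement design to approximately $k$-sparse signals with the stated measurement and round complexities.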
The 1-bit compressed sensing framework enables the recovery of a sparse vector x from the sign information of each entry of its linear transformation. Discarding the amplitude information can significantly reduce the amount of data, which is highly beneficial in practical applications. In this paper, we present a Bayesian approach to signal reconstruction for 1-bit compressed sensing, and analyze its typical performance using statistical mechanics. Utilizing the replica method, we show that the Bayesian approach enables better reconstruction than the L1-norm minimization approach, asymptotically saturating the performance obtained when the positions of the non-zero entries of the signal are known. We also test a message-passing algorithm for signal reconstruction based on belief propagation. The results of numerical experiments are consistent with those of the theoretical analysis.
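For a concrete sense of recovery from sign-only measurements, the snippet below uses binary iterative hard thresholding (BIHT), a standard greedy baseline, as a stand-in for the Bayesian/belief-propagation reconstruction analyzed above. The step size and iteration count are arbitrary, and only the direction of the signal can be recovered, since 1-bit data discards amplitude.

```python
import numpy as np

def biht(y_sign, A, k, n_iters=100, tau=None):
    """Binary iterative hard thresholding: estimate a k-sparse direction from
    one-bit measurements y_sign = sign(A x). Illustrative baseline only."""
    m, n = A.shape
    tau = 1.0 / m if tau is None else tau
    x = np.zeros(n)
    for _ in range(n_iters):
        # Gradient-like step enforcing sign consistency with the measurements
        x = x + tau * A.T @ (y_sign - np.sign(A @ x))
        # Hard threshold: keep only the k largest-magnitude entries
        keep = np.argsort(np.abs(x))[-k:]
        mask = np.zeros(n, dtype=bool)
        mask[keep] = True
        x[~mask] = 0.0
    norm = np.linalg.norm(x)
    return x / norm if norm > 0 else x   # amplitude is not identifiable from signs
```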
Long-range correlated errors can severely impact the performance of NISQ (noisy intermediate-scale quantum) devices, and fault-tolerant quantum computation. Characterizing these errors is important for improving the performance of these devices, via calibration and error correction, and to ensure correct interpretation of the results. We propose a compressed sensing method for detecting two-qubit correlated dephasing errors, assuming only that the correlations are sparse (i.e., at most s pairs of qubits have correlated errors, where s << n(n-1)/2, and n is the total number of qubits). In particular, our method can detect long-range correlations between any two qubits in the system (i.e., the correlations are not restricted to be geometrically local). Our method is highly scalable: it requires as few as m = O(s log n) measurement settings, and efficient classical postprocessing based on convex optimization. In addition, when m = O(s log^4(n)), our method is highly robust to noise, and has sample complexity O(max(n,s)^2 log^4(n)), which can be compared to conventional methods that have sample complexity O(n^3). Thus, our method is advantageous when the correlations are sufficiently sparse, that is, when s < O(n^(3/2) / log^2(n)). Our method also performs well in numerical simulations on small system sizes, and has some resistance to state-preparation-and-measurement (SPAM) errors. The key ingredient in our method is a new type of compressed sensing measurement, which works by preparing entangled Greenberger-Horne-Zeilinger states (GHZ states) on random subsets of qubits, and measuring their decay rates with high precision.
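The classical post-processing step can be pictured as an l1-regularized least-squares fit of a sparse vector of pairwise dephasing rates. The sketch below assumes, purely for illustration, that the decay rate measured for a GHZ state on a qubit subset S is (up to noise) the sum of the correlation strengths of the pairs contained in S; the subset size, regularization weight, and use of cvxpy are choices made here, not details taken from the paper.

```python
import itertools
import numpy as np
import cvxpy as cp

n_qubits, n_settings, subset_size = 8, 40, 4
pairs = list(itertools.combinations(range(n_qubits), 2))
rng = np.random.default_rng(0)

# Sparse ground truth: only a few qubit pairs have non-zero correlated dephasing
c_true = np.zeros(len(pairs))
c_true[rng.choice(len(pairs), size=3, replace=False)] = rng.uniform(0.1, 0.5, 3)

# Each measurement setting uses a random qubit subset; assumed linear model:
# the observed decay rate sums the rates of all pairs inside that subset.
Phi = np.zeros((n_settings, len(pairs)))
for i in range(n_settings):
    subset = set(rng.choice(n_qubits, size=subset_size, replace=False))
    for j, (a, b) in enumerate(pairs):
        Phi[i, j] = 1.0 if a in subset and b in subset else 0.0
decay = Phi @ c_true + 0.01 * rng.standard_normal(n_settings)

# Convex recovery: least squares with an l1 penalty to promote sparsity
c = cp.Variable(len(pairs))
problem = cp.Problem(cp.Minimize(cp.sum_squares(Phi @ c - decay) + 0.05 * cp.norm1(c)))
problem.solve()
c_hat = c.value
```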
Sungjin Ahn, 2012
In this paper we address the following question: can we approximately sample from a Bayesian posterior distribution if we are only allowed to touch a small mini-batch of data items for every sample we generate? An algorithm based on the Langevin equation with stochastic gradients (SGLD) was previously proposed to solve this, but its mixing rate was slow. By leveraging the Bayesian Central Limit Theorem, we extend the SGLD algorithm so that at high mixing rates it will sample from a normal approximation of the posterior, while for slow mixing rates it will mimic the behavior of SGLD with a pre-conditioner matrix. As a bonus, the proposed algorithm is reminiscent of Fisher scoring (with stochastic gradients) and as such is an efficient optimizer during burn-in.
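The baseline this work extends, plain SGLD, fits in a few lines: each step takes a mini-batch estimate of the log-posterior gradient and adds Gaussian noise matched to the step size. The Fisher-scoring/preconditioning extension described above is not reproduced here, and the function names and hyperparameters below are illustrative assumptions.

```python
import numpy as np

def sgld(grad_log_prior, grad_log_lik, data, theta0, step_size=1e-4,
         batch_size=32, n_steps=10_000, rng=None):
    """Plain stochastic gradient Langevin dynamics over a dataset `data`
    (an array of data items). grad_log_lik(theta, item) is assumed to return
    the gradient of the log-likelihood of a single item."""
    rng = np.random.default_rng() if rng is None else rng
    N = len(data)
    theta = np.array(theta0, dtype=float)
    samples = []
    for _ in range(n_steps):
        batch = data[rng.choice(N, size=batch_size, replace=False)]
        # Unbiased mini-batch estimate of the full-data log-posterior gradient
        grad = grad_log_prior(theta) + (N / batch_size) * sum(
            grad_log_lik(theta, item) for item in batch)
        noise = rng.standard_normal(theta.shape) * np.sqrt(step_size)
        theta = theta + 0.5 * step_size * grad + noise
        samples.append(theta.copy())
    return np.array(samples)
```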
