
Algebraic compressed sensing

Posted by Nick Vannieuwenhoven
Publication date: 2021
Research field: Informatics engineering
Paper language: English





We introduce the broad subclass of algebraic compressed sensing problems, where structured signals are modeled either explicitly or implicitly via polynomials. This includes, for instance, low-rank matrix and tensor recovery. We employ powerful techniques from algebraic geometry to study well-posedness of sufficiently general compressed sensing problems, including existence, local recoverability, global uniqueness, and local smoothness. Our main results are summarized in thirteen questions and answers in algebraic compressed sensing. Most of our answers concerning the minimum number of required measurements for existence, recoverability, and uniqueness of algebraic compressed sensing problems are optimal and depend only on the dimension of the model.
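As an illustration of the idea (not taken from the paper), low-rank matrix recovery is an algebraic compressed sensing problem in which the model is given explicitly by polynomials: a rank-1 matrix X = u vᵀ is a bilinear parameterization, and its dimension (2n − 1 for an n × n matrix) governs how many generic linear measurements suffice. The sketch below, with illustrative sizes and a standard spectral-initialization-plus-alternating-least-squares solver, recovers a rank-1 matrix from random measurements:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6    # recover an n x n rank-1 matrix
m = 30   # number of linear measurements (the rank-1 model has 2n - 1 = 11 dimensions)

# Ground truth: X* = u v^T, an explicit bilinear (polynomial) model.
u_true, v_true = rng.standard_normal(n), rng.standard_normal(n)
X_true = np.outer(u_true, v_true)

# Generic Gaussian linear measurements y_i = <A_i, X*>.
A = rng.standard_normal((m, n, n))
y = np.einsum('mij,ij->m', A, X_true)

# Spectral initialization followed by alternating least squares:
# each subproblem is linear in one factor when the other is fixed.
X0 = np.einsum('m,mij->ij', y, A) / m
U, S, Vt = np.linalg.svd(X0)
u, v = U[:, 0] * np.sqrt(S[0]), Vt[0] * np.sqrt(S[0])
for _ in range(100):
    u, *_ = np.linalg.lstsq(np.einsum('mij,j->mi', A, v), y, rcond=None)
    v, *_ = np.linalg.lstsq(np.einsum('mij,i->mj', A, u), y, rcond=None)

err = np.linalg.norm(np.outer(u, v) - X_true) / np.linalg.norm(X_true)
print(f"relative recovery error: {err:.2e}")
```

Here m = 30 comfortably exceeds the model dimension 11, consistent with the abstract's point that the required number of measurements depends on the dimension of the model.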




Read also

For certain sensing matrices, the Approximate Message Passing (AMP) algorithm efficiently reconstructs undersampled signals. However, in Magnetic Resonance Imaging (MRI), where Fourier coefficients of a natural image are sampled with variable density, AMP encounters convergence problems. In response, we present an algorithm based on Orthogonal AMP constructed specifically for variable-density partial Fourier sensing matrices. For the first time in this setting, a state evolution has been observed. A practical advantage of state evolution is that Stein's Unbiased Risk Estimate (SURE) can be effectively implemented, yielding an algorithm with no free parameters. We empirically evaluate the effectiveness of the parameter-free algorithm on simulated data and find that it converges over 5x faster, and to a lower mean-squared-error solution, than Fast Iterative Shrinkage-Thresholding (FISTA).
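For context, the FISTA baseline mentioned above solves the standard ℓ1-regularized least-squares formulation of compressed sensing via proximal gradient steps with momentum. A minimal sketch with illustrative problem sizes and regularization weight (a Gaussian sensing matrix, not the paper's variable-density Fourier setup):

```python
import numpy as np

def soft(x, t):
    """Soft-thresholding, the proximal operator of the l1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def fista(A, y, lam, n_iter=300):
    """FISTA for min_x 0.5*||Ax - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the smooth part
    x = np.zeros(A.shape[1])
    z, t = x.copy(), 1.0
    for _ in range(n_iter):
        x_new = soft(z - A.T @ (A @ z - y) / L, lam / L)
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
        z = x_new + (t - 1) / t_new * (x_new - x)   # momentum step
        x, t = x_new, t_new
    return x

rng = np.random.default_rng(1)
n, m, k = 200, 80, 5
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = 3.0 + rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x_true

x_hat = fista(A, y, lam=0.01)
rel_err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
print(f"relative error: {rel_err:.3f}")
```

Note that FISTA has the free parameter lam, which must be tuned; the SURE-based algorithm described in the abstract removes exactly this kind of tuning.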
We introduce a recursive algorithm for performing compressed sensing on streaming data. The approach consists of (a) recursive encoding, where we sample the input stream via overlapping windowing and use the previous measurement in obtaining the next one, and (b) recursive decoding, where the signal estimate from the previous window is utilized to achieve faster convergence in an iterative optimization scheme applied to decode the new one. To remove estimation bias, a two-step estimation procedure is proposed, comprising support-set detection and signal-amplitude estimation. Estimation accuracy is enhanced by a non-linear voting method and by averaging estimates over multiple windows. We analyze the computational complexity and estimation error, and show that the normalized error variance asymptotically goes to zero for sublinear sparsity. Our simulation results show a speedup of an order of magnitude over traditional CS, while obtaining significantly lower reconstruction error under mild conditions on the signal magnitudes and the noise level.
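The warm-start idea in recursive decoding can be illustrated with a toy experiment: decode one window with an iterative solver, then initialize the next, overlapping window's solve from the shifted previous estimate and compare iteration counts against a cold start. All sizes, and the plain ISTA solver used here, are illustrative stand-ins for the paper's scheme:

```python
import numpy as np

def soft(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, y, lam, x0, tol=1e-6, max_iter=20000):
    """ISTA for min_x 0.5*||Ax - y||^2 + lam*||x||_1; returns (estimate, iterations)."""
    L = np.linalg.norm(A, 2) ** 2
    x = x0.copy()
    for it in range(1, max_iter + 1):
        x_new = soft(x - A.T @ (A @ x - y) / L, lam / L)
        if np.linalg.norm(x_new - x) < tol:
            return x_new, it
        x = x_new
    return x, max_iter

rng = np.random.default_rng(2)
n, m, k, shift = 100, 50, 4, 5
A = rng.standard_normal((m, n)) / np.sqrt(m)

# A sparse stream; consecutive windows of length n overlap by n - shift samples.
stream = np.zeros(n + shift)
support = rng.choice(np.arange(shift, n - shift), k, replace=False)
stream[support] = 3.0
w0, w1 = stream[:n], stream[shift:shift + n]

x0, _ = ista(A, A @ w0, 0.01, np.zeros(n))
_, it_warm = ista(A, A @ w1, 0.01, np.roll(x0, -shift))  # warm start: shifted previous estimate
_, it_cold = ista(A, A @ w1, 0.01, np.zeros(n))
print(it_warm, it_cold)
```

Because consecutive windows share most of their samples, the shifted previous estimate starts close to the new window's solution, so the warm-started solve needs fewer iterations.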
In the problem of adaptive compressed sensing, one wants to estimate an approximately $k$-sparse vector $x \in \mathbb{R}^n$ from $m$ linear measurements $A_1 x, A_2 x, \ldots, A_m x$, where $A_i$ can be chosen based on the outcomes $A_1 x, \ldots, A_{i-1} x$ of previous measurements. The goal is to output a vector $\hat{x}$ for which $$\|x - \hat{x}\|_p \le C \cdot \min_{k\text{-sparse } x'} \|x - x'\|_q,$$ with probability at least $2/3$, where $C > 0$ is an approximation factor. Indyk, Price and Woodruff (FOCS '11) gave an algorithm for $p = q = 2$ with $C = 1 + \epsilon$ using $O((k/\epsilon) \log\log(n/k))$ measurements and $O(\log^*(k) \log\log(n))$ rounds of adaptivity. We first improve their bounds, obtaining a scheme with $O(k \cdot \log\log(n/k) + (k/\epsilon) \cdot \log\log(1/\epsilon))$ measurements and $O(\log^*(k) \log\log(n))$ rounds, as well as a scheme with $O((k/\epsilon) \cdot \log\log(n \log(n/k)))$ measurements and an optimal $O(\log\log(n))$ rounds. We then provide novel adaptive compressed sensing schemes with improved bounds for $(p,p)$ for every $0 < p < 2$. We show that the improvement from $O(k \log(n/k))$ measurements to $O(k \log\log(n/k))$ measurements in the adaptive setting can persist with a better $\epsilon$-dependence for other values of $p$ and $q$. For example, when $(p,q) = (1,1)$, we obtain $O(\frac{k}{\sqrt{\epsilon}} \cdot \log\log n \log^3(\frac{1}{\epsilon}))$ measurements.
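The power of adaptivity can be seen in a toy special case (far simpler than the schemes in the abstract): for an exactly 1-sparse vector, choosing each measurement vector based on the outcomes of previous ones locates the support by bisection in log2(n) measurements, whereas a non-adaptive scheme must commit to all measurement vectors up front. The signal and its support index below are illustrative:

```python
import numpy as np

n = 1024
x = np.zeros(n)
x[317] = 5.0          # exactly 1-sparse signal with unknown support

lo, hi, measurements = 0, n, 0
while hi - lo > 1:
    mid = (lo + hi) // 2
    a = np.zeros(n)
    a[lo:mid] = 1.0               # measurement vector chosen from past outcomes
    measurements += 1
    if abs(a @ x) > 0:            # nonzero <a, x> => the support lies in [lo, mid)
        hi = mid
    else:
        lo = mid

print(lo, measurements)  # → 317 10
```

Each measurement halves the candidate interval, so n = 1024 needs exactly log2(1024) = 10 adaptive measurements; this fully sequential scheme uses many rounds, and the abstract's results are precisely about trading measurements against rounds of adaptivity.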
The goal of compressed sensing is to estimate a high dimensional vector from an underdetermined system of noisy linear equations. In analogy to classical compressed sensing, here we assume a generative model as a prior, that is, we assume the vector is represented by a deep generative model $G: \mathbb{R}^k \rightarrow \mathbb{R}^n$. Classical recovery approaches such as empirical risk minimization (ERM) are guaranteed to succeed when the measurement matrix is sub-Gaussian. However, when the measurement matrix and measurements are heavy-tailed or have outliers, recovery may fail dramatically. In this paper we propose an algorithm inspired by the Median-of-Means (MOM). Our algorithm guarantees recovery for heavy-tailed data, even in the presence of outliers. Theoretically, our results show our novel MOM-based algorithm enjoys the same sample complexity guarantees as ERM under sub-Gaussian assumptions. Our experiments validate both aspects of our claims: other algorithms are indeed fragile and fail under heavy-tailed and/or corrupted data, while our approach exhibits the predicted robustness.
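The Median-of-Means primitive behind such algorithms is simple to state: partition the samples into blocks, average within each block, and take the median of the block means; a few gross outliers can corrupt only a few block means, which the median then ignores. A minimal sketch for mean estimation (block count and data are illustrative, not the paper's recovery algorithm):

```python
import numpy as np

def median_of_means(samples, n_blocks=10):
    """Split the samples into blocks, average each block, return the median of the block means."""
    blocks = np.array_split(samples, n_blocks)
    return np.median([b.mean() for b in blocks])

rng = np.random.default_rng(3)
data = 2.0 + rng.standard_normal(1000)   # true mean is 2
data[:5] = 1e6                            # a few gross outliers

naive = data.mean()                       # ruined by the outliers
robust = median_of_means(data)            # outliers corrupt at most one block mean here
print(naive, robust)
```

The naive mean is dragged thousands of units away from 2 by the outliers, while the median-of-means estimate stays close, which is the robustness property the paper lifts to the compressed sensing objective.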
We characterize the measurement complexity of compressed sensing of signals drawn from a known prior distribution, even when the support of the prior is the entire space (rather than, say, sparse vectors). We show, for Gaussian measurements and any prior distribution on the signal, that the posterior sampling estimator achieves near-optimal recovery guarantees. Moreover, this result is robust to model mismatch, as long as the distribution estimate (e.g., from an invertible generative model) is close to the true distribution in Wasserstein distance. We implement the posterior sampling estimator for deep generative priors using Langevin dynamics, and empirically find that it produces accurate estimates with more diversity than MAP.
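In the conjugate Gaussian special case the posterior is available in closed form, which makes it a convenient sanity check for a Langevin sampler. The following sketch (standard-normal prior, illustrative sizes and step size; not the paper's deep-generative setup) runs unadjusted Langevin dynamics on the posterior and compares the empirical posterior mean against the exact one:

```python
import numpy as np

rng = np.random.default_rng(4)
n, m, sigma = 6, 4, 0.5
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = rng.standard_normal(n)
y = A @ x_true + sigma * rng.standard_normal(m)

# With prior x ~ N(0, I) and y = A x + N(0, sigma^2 I):
# grad log p(x | y) = A^T (y - A x) / sigma^2 - x.
def grad_log_post(x):
    return A.T @ (y - A @ x) / sigma**2 - x

# Unadjusted Langevin dynamics; average post-burn-in iterates as posterior samples.
eta, n_steps, burn_in = 5e-3, 60000, 10000
x = np.zeros(n)
running, count = np.zeros(n), 0
for t in range(n_steps):
    x = x + eta * grad_log_post(x) + np.sqrt(2 * eta) * rng.standard_normal(n)
    if t >= burn_in:
        running += x
        count += 1
mc_mean = running / count

# Closed-form posterior mean for this conjugate model (sanity check).
exact_mean = np.linalg.solve(A.T @ A / sigma**2 + np.eye(n), A.T @ y / sigma**2)
err = np.linalg.norm(mc_mean - exact_mean)
print(f"posterior-mean error: {err:.3f}")
```

Note that with m < n the system is underdetermined, yet the posterior is well defined because the prior covers the whole space; averaging Langevin samples approximates its mean, while individual samples retain the diversity the abstract contrasts with MAP.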