We introduce the broad subclass of algebraic compressed sensing problems, where structured signals are modeled either explicitly or implicitly via polynomials. This includes, for instance, low-rank matrix and tensor recovery. We employ powerful techniques from algebraic geometry to study well-posedness of sufficiently general compressed sensing problems, including existence, local recoverability, global uniqueness, and local smoothness. Our main results are summarized in thirteen questions and answers in algebraic compressed sensing. Most of our answers concerning the minimum number of required measurements for existence, recoverability, and uniqueness of algebraic compressed sensing problems are optimal and depend only on the dimension of the model.
For certain sensing matrices, the Approximate Message Passing (AMP) algorithm efficiently reconstructs undersampled signals. However, in Magnetic Resonance Imaging (MRI), where Fourier coefficients of a natural image are sampled with variable density
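To make the AMP setup concrete, here is a minimal sketch of the generic AMP iteration for sparse recovery (soft-thresholding denoiser plus Onsager correction). The threshold schedule and the i.i.d. Gaussian sensing matrix in the usage example are illustrative assumptions and not the variable-density MRI setting discussed in the abstract.

```python
import numpy as np

def soft_threshold(v, tau):
    """Soft-thresholding: the proximal operator of the l1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def amp(A, y, iters=30):
    """Generic AMP iteration for sparse recovery from y = A x + noise.

    The threshold tau is set to the empirical std of the residual, a common
    heuristic; it is illustrative, not a tuned rule from any specific paper.
    """
    m, n = A.shape
    x = np.zeros(n)
    z = y.copy()
    for _ in range(iters):
        tau = np.sqrt(np.mean(z ** 2))          # heuristic threshold
        pseudo = x + A.T @ z                    # pseudo-data passed to the denoiser
        x_new = soft_threshold(pseudo, tau)
        # Onsager correction: (n/m) * average derivative of the denoiser * residual
        onsager = (n / m) * np.mean(np.abs(pseudo) > tau) * z
        z = y - A @ x_new + onsager
        x = x_new
    return x

# Tiny usage example with an i.i.d. Gaussian sensing matrix (illustrative only).
rng = np.random.default_rng(0)
n, m, k = 400, 160, 15
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)
A = rng.normal(size=(m, n)) / np.sqrt(m)
y = A @ x_true + 0.01 * rng.normal(size=m)
x_hat = amp(A, y)
```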
We introduce a recursive algorithm for performing compressed sensing on streaming data. The approach consists of a) recursive encoding, where we sample the input stream via overlapping windowing and make use of the previous measurement in obtaining t
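As a rough illustration of the overlapping-window idea, the sketch below encodes each window of the stream with the same matrix A and warm-starts an ISTA decoder with the previous window's shifted estimate. The function names, the stride, and the warm-start rule are assumptions for illustration only, not the paper's recursive encoding and decoding scheme.

```python
import numpy as np

def ista(A, y, lam=0.05, iters=200, x0=None):
    """Plain ISTA solver for the LASSO decode of one window.

    x0 allows warm-starting from the previous window's (shifted) estimate.
    """
    n = A.shape[1]
    x = np.zeros(n) if x0 is None else x0.copy()
    step = 1.0 / np.linalg.norm(A, 2) ** 2      # 1 / spectral norm squared
    for _ in range(iters):
        v = x - step * (A.T @ (A @ x - y))      # gradient step
        x = np.sign(v) * np.maximum(np.abs(v) - step * lam, 0.0)  # shrinkage
    return x

def sliding_window_decode(stream, A, window, stride):
    """Sense each overlapping window with the same matrix A and warm-start the
    decoder with the previous estimate shifted by `stride` (illustrative only)."""
    estimates, x_prev = [], None
    for start in range(0, len(stream) - window + 1, stride):
        y_t = A @ stream[start:start + window]          # encoding of window t
        x0 = None
        if x_prev is not None:
            x0 = np.concatenate([x_prev[stride:], np.zeros(stride)])
        x_prev = ista(A, y_t, x0=x0)
        estimates.append(x_prev)
    return estimates
```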
In the problem of adaptive compressed sensing, one wants to estimate an approximately $k$-sparse vector $x \in \mathbb{R}^n$ from $m$ linear measurements $A_1 x, A_2 x, \ldots, A_m x$, where $A_i$ can be chosen based on the outcomes $A_1 x, \ldots, A_{i-1} x
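A toy two-stage scheme makes the adaptivity concrete: the first half of the measurement budget is spent non-adaptively to guess a support, and the second half is concentrated on that support. This is only an illustrative sketch of how later measurements can depend on earlier outcomes, not the algorithm analyzed in the abstract.

```python
import numpy as np

def adaptive_two_stage(x, m, k, seed=0):
    """Toy two-stage adaptive measurement scheme (illustrative only)."""
    rng = np.random.default_rng(seed)
    n = len(x)
    m1, m2 = m // 2, m - m // 2

    # Stage 1: non-adaptive Gaussian measurements; use A1^T y1 as a crude support proxy.
    A1 = rng.normal(size=(m1, n)) / np.sqrt(m1)
    y1 = A1 @ x
    proxy = A1.T @ y1
    support = np.argsort(np.abs(proxy))[-2 * k:]

    # Stage 2: measurements chosen *based on* stage-1 outcomes, restricted to the support.
    A2 = np.zeros((m2, n))
    A2[:, support] = rng.normal(size=(m2, len(support))) / np.sqrt(m2)
    y2 = A2 @ x

    # Least-squares refit on the estimated support.
    sol, *_ = np.linalg.lstsq(A2[:, support], y2, rcond=None)
    x_hat = np.zeros(n)
    x_hat[support] = sol
    return x_hat
```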
The goal of compressed sensing is to estimate a high dimensional vector from an underdetermined system of noisy linear equations. In analogy to classical compressed sensing, here we assume a generative model as a prior, that is, we assume the vector
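Below is a minimal numerical sketch of recovery under a generative prior, using a linear generator as a stand-in for a trained deep generative model: the signal is constrained to the range of $G$, and recovery searches over the low-dimensional latent variable, so the number of measurements scales with the latent dimension rather than the ambient one. All dimensions and names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, m = 1000, 20, 60            # ambient dim, latent dim, measurements (m << n)

# Toy *linear* generative model x = G z (stand-in for a trained deep generator).
G = rng.normal(size=(n, d))
z_true = rng.normal(size=d)
x_true = G @ z_true

# Underdetermined noisy linear measurements y = A x + w.
A = rng.normal(size=(m, n)) / np.sqrt(m)
y = A @ x_true + 0.01 * rng.normal(size=m)

# Recover by searching over the latent space: min_z ||A G z - y||^2.
# With a linear generator this is ordinary least squares in d unknowns,
# so on the order of d measurements suffice even though m << n.
z_hat, *_ = np.linalg.lstsq(A @ G, y, rcond=None)
x_hat = G @ z_hat
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```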
We characterize the measurement complexity of compressed sensing of signals drawn from a known prior distribution, even when the support of the prior is the entire space (rather than, say, sparse vectors). We show for Gaussian measurements and \emph{a