Compressed sensing (CS) concerns the recovery of a structured signal from its under-determined linear measurements. Starting from sparsity, recovery methods have steadily moved towards more complex structures. Emerging machine learning tools, such as generative functions based on neural networks, are able to learn general complex structures from training data. This makes them potentially powerful tools for designing CS algorithms. Consider a desired class of signals $\mathcal{Q}$, $\mathcal{Q}\subset\mathbb{R}^n$, and a corresponding generative function $g:\mathcal{U}^k\rightarrow \mathbb{R}^n$, $\mathcal{U}\subset \mathbb{R}$, such that $\sup_{\mathbf{x}\in \mathcal{Q}}\min_{\mathbf{u}\in\mathcal{U}^k}\frac{1}{\sqrt{n}}\|g(\mathbf{u})-\mathbf{x}\|\leq \delta$. A recovery method based on $g$ seeks the $g(\mathbf{u})$ with minimum measurement error. In this paper, the performance of such a recovery method is studied, under both noisy and noiseless measurements. In the noiseless case, roughly speaking, it is proven that, as $k$ and $n$ grow without bound and $\delta$ converges to zero, if the number of measurements ($m$) is larger than the input dimension of the generative model ($k$), then asymptotically, almost lossless recovery is possible. Furthermore, the performance of an efficient iterative algorithm based on projected gradient descent is studied. In this case, an auto-encoder is used to define and enforce the source structure at the projection step. The auto-encoder is defined by encoder and decoder (generative) functions $f:\mathbb{R}^n\to\mathcal{U}^k$ and $g:\mathcal{U}^k\to\mathbb{R}^n$, respectively. We theoretically prove that, roughly, given $m>40k\log\frac{1}{\delta}$ measurements, such an algorithm converges to the vicinity of the desired result, even in the presence of additive white Gaussian noise. Numerical results exploring the effectiveness of the proposed method are presented.
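To make the projected-gradient-descent recovery concrete, here is a minimal sketch, assuming a measurement matrix `A`, measurements `y`, and pre-trained encoder/decoder callables `f` and `g`; all names are hypothetical and not taken from the paper. Each iteration takes a gradient step on the measurement error and then projects onto the range of the generative model by passing through the auto-encoder, mirroring the projection step described above.

```python
import numpy as np

def pgd_autoencoder_recovery(A, y, f, g, step_size=1.0, n_iters=100):
    """Sketch of projected gradient descent for compressed sensing,
    where an auto-encoder (encoder f: R^n -> U^k, decoder g: U^k -> R^n)
    enforces the source structure at the projection step.
    f and g are assumed to be pre-trained callables (hypothetical)."""
    n = A.shape[1]
    x = np.zeros(n)
    for _ in range(n_iters):
        # gradient step on the measurement error ||A x - y||^2
        s = x + step_size * A.T @ (y - A @ x)
        # projection step: map s onto the range of the generative model
        x = g(f(s))
    return x
```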
Partial differential equations are central to describing many physical phenomena. In many applications these phenomena are observed through a sensor network, with the aim of inferring their underlying properties. Leveraging from certain results in sa
We propose the Wasserstein Auto-Encoder (WAE)---a new algorithm for building a generative model of the data distribution. WAE minimizes a penalized form of the Wasserstein distance between the model distribution and the target distribution, which leads to a different regularizer than the one used by the Variational Auto-Encoder (VAE).
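For reference, the penalized objective minimized by WAE can be written as follows (notation follows the WAE paper, with reconstruction cost $c$, decoder $G$, aggregated posterior $Q_Z$, prior $P_Z$, and regularization coefficient $\lambda>0$):
\[
  D_{\mathrm{WAE}}(P_X, P_G) = \inf_{Q(Z\mid X)} \mathbb{E}_{P_X}\,\mathbb{E}_{Q(Z\mid X)}\bigl[c\bigl(X, G(Z)\bigr)\bigr] + \lambda\,\mathcal{D}_Z(Q_Z, P_Z),
\]
where $\mathcal{D}_Z$ is an arbitrary divergence between distributions over the latent space.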
It has been conjectured that the Fisher divergence is more robust to model uncertainty than the conventional Kullback-Leibler (KL) divergence. This motivates the design of a new class of robust generative auto-encoders (AE) referred to as Fisher auto-encoders.
In this work we introduce a novel stochastic algorithm dubbed SNIPS, which draws samples from the posterior distribution of any linear inverse problem, where the observation is assumed to be contaminated by additive white Gaussian noise. Our solution incorporates ideas from Langevin dynamics and Newton's method, and exploits a pre-trained minimum mean squared error (MMSE) Gaussian denoiser.
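As context for how denoiser-driven posterior sampling of this kind typically works, here is a generic (unadjusted) Langevin sketch; this is not the SNIPS algorithm itself, and `score_fn` is an assumed stand-in for a prior-score estimate obtained from a pre-trained denoiser.

```python
import numpy as np

def langevin_posterior_sample(A, y, score_fn, sigma,
                              step=1e-4, n_steps=1000, rng=None):
    """Generic Langevin dynamics targeting the posterior p(x | y) for
    y = A x + w, w ~ N(0, sigma^2 I). Not the SNIPS algorithm itself.
    score_fn(x) is assumed to return an estimate of the prior score
    grad_x log p(x), e.g. derived from a pre-trained MMSE denoiser."""
    rng = np.random.default_rng() if rng is None else rng
    x = rng.standard_normal(A.shape[1])
    for _ in range(n_steps):
        # posterior score = likelihood score + prior score
        grad = A.T @ (y - A @ x) / sigma**2 + score_fn(x)
        # Langevin update: drift along the score plus Gaussian noise
        x = x + step * grad + np.sqrt(2.0 * step) * rng.standard_normal(x.shape)
    return x
```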
Recently, it has been shown that incoherence is an unrealistic assumption for compressed sensing when applied to many inverse problems. Instead, the key property that permits efficient recovery in such problems is so-called local incoherence. Similar