
Solving inverse problems via auto-encoders

Posted by Shirin Jalali
Publication date: 2019
Research field: Informatics Engineering
Paper language: English





Compressed sensing (CS) is about recovering a structured signal from its under-determined linear measurements. Starting from sparsity, recovery methods have steadily moved towards more complex structures. Emerging machine learning tools such as generative functions that are based on neural networks are able to learn general complex structures from training data. This makes them potentially powerful tools for designing CS algorithms. Consider a desired class of signals ${\cal Q}$, ${\cal Q}\subset\mathbb{R}^n$, and a corresponding generative function $g:{\cal U}^k\rightarrow \mathbb{R}^n$, ${\cal U}\subset \mathbb{R}$, such that $\sup_{{\bf x}\in {\cal Q}}\min_{{\bf u}\in{\cal U}^k}{1\over \sqrt{n}}\|g({\bf u})-{\bf x}\|\leq \delta$. A recovery method based on $g$ seeks $g({\bf u})$ with minimum measurement error. In this paper, the performance of such a recovery method is studied, under both noisy and noiseless measurements. In the noiseless case, roughly speaking, it is proven that, as $k$ and $n$ grow without bound and $\delta$ converges to zero, if the number of measurements ($m$) is larger than the input dimension of the generative model ($k$), then asymptotically, almost lossless recovery is possible. Furthermore, the performance of an efficient iterative algorithm based on projected gradient descent is studied. In this case, an auto-encoder is used to define and enforce the source structure at the projection step. The auto-encoder is defined by encoder and decoder (generative) functions $f:\mathbb{R}^n\to{\cal U}^k$ and $g:{\cal U}^k\to\mathbb{R}^n$, respectively. We theoretically prove that, roughly, given $m>40k\log{1\over \delta}$ measurements, such an algorithm converges to the vicinity of the desired result, even in the presence of additive white Gaussian noise. Numerical results exploring the effectiveness of the proposed method are presented.
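
To make the projected gradient descent step concrete, below is a minimal sketch under strong simplifying assumptions: the decoder $g$ is a fixed random linear map and the encoder $f$ its pseudo-inverse, so the projection $g(f(\cdot))$ is exactly the orthogonal projection onto the model's range. The dimensions, step size, and noise level are all illustrative choices, not the paper's settings.

```python
import numpy as np

# Toy stand-in for a trained auto-encoder: linear decoder g and its
# least-squares inverse as the encoder f (an assumption; the paper
# uses learned neural networks).
rng = np.random.default_rng(0)
n, k, m = 256, 16, 80            # ambient dim, latent dim, measurements

G = rng.standard_normal((n, k))
G_pinv = np.linalg.pinv(G)
def g(u): return G @ u           # decoder g: U^k -> R^n
def f(x): return G_pinv @ x      # encoder f: R^n -> U^k

# A signal in the range of g, observed through noisy linear measurements.
x_true = g(rng.standard_normal(k))
A = rng.standard_normal((m, n)) / np.sqrt(m)   # sensing matrix
y = A @ x_true + 0.01 * rng.standard_normal(m)

# Projected gradient descent: gradient step on ||y - Ax||^2, then
# enforce the source structure via the auto-encoder, x <- g(f(x)).
x = np.zeros(n)
eta = 0.5                        # step size (untuned assumption)
for _ in range(200):
    x = x - eta * A.T @ (A @ x - y)
    x = g(f(x))

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```

Note that m = 80 measurements suffice here even though n = 256, echoing the abstract's point that the required number of measurements scales with the latent dimension $k$ rather than the ambient dimension $n$.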


Read also

Partial differential equations are central to describing many physical phenomena. In many applications these phenomena are observed through a sensor network, with the aim of inferring their underlying properties. Leveraging certain results from sampling and approximation theory, we present a new framework for solving a class of inverse source problems for physical fields governed by linear partial differential equations. Specifically, we demonstrate that the unknown field sources can be recovered from a sequence of so-called generalised measurements by using multidimensional frequency estimation techniques. Next we show that, for physics-driven fields, this sequence of generalised measurements can be estimated by computing a linear weighted-sum of the sensor measurements, whereby the exact weights (of the sums) correspond to those that reproduce multidimensional exponentials, when used to linearly combine translates of a particular prototype function related to the Green's function of our underlying field. Explicit formulae are then derived for the sequence of weights that map sensor samples to the exact sequence of generalised measurements when the Green's function satisfies the generalised Strang-Fix condition. Otherwise, the same mapping yields a close approximation of the generalised measurements. Based on this new framework we develop practical, noise-robust sensor network strategies for solving the inverse source problem, and then present numerical simulation results to verify their performance.
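
A hedged one-dimensional sketch of the frequency-estimation step referenced above: once the generalised measurements form a sum of complex exponentials, point-source locations and amplitudes can be read off with Prony's annihilating-filter method. The measurements are synthesised directly here rather than mapped from sensor samples, and the source count, locations, and amplitudes are arbitrary assumptions.

```python
import numpy as np

K = 2                                  # number of point sources (assumed)
t = np.array([0.21, 0.58])             # true locations in [0, 1)
a = np.array([1.0, 0.7])               # true amplitudes
l = np.arange(2 * K + 1)
# Generalised measurements: a sum of K complex exponentials.
s = (a * np.exp(-2j * np.pi * np.outer(l, t))).sum(axis=1)

# Annihilating filter h (h[0] = 1): sum_k h[k] s[l-k] = 0 for l >= K.
M = np.column_stack([s[K - k:len(s) - k] for k in range(1, K + 1)])
h = np.linalg.lstsq(M, -s[K:], rcond=None)[0]

# The filter's roots are exp(-2j*pi*t_j); read off the locations.
roots = np.roots(np.concatenate(([1.0], h)))
t_hat = np.sort(np.mod(-np.angle(roots) / (2 * np.pi), 1.0))

# Amplitudes from a Vandermonde least-squares fit.
V = np.exp(-2j * np.pi * np.outer(l, t_hat))
a_hat = np.linalg.lstsq(V, s, rcond=None)[0].real
print(t_hat, a_hat)                    # recovers t and a
```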
We propose the Wasserstein Auto-Encoder (WAE), a new algorithm for building a generative model of the data distribution. WAE minimizes a penalized form of the Wasserstein distance between the model distribution and the target distribution, which leads to a different regularizer than the one used by the Variational Auto-Encoder (VAE). This regularizer encourages the encoded training distribution to match the prior. We compare our algorithm with several other techniques and show that it is a generalization of adversarial auto-encoders (AAE). Our experiments show that WAE shares many of the properties of VAEs (stable training, encoder-decoder architecture, nice latent manifold structure) while generating samples of better quality, as measured by the FID score.
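
As one concrete reading of the penalized objective, here is a sketch of the MMD-based variant of the WAE loss: a reconstruction term plus a kernel estimate of the mismatch between the encoded distribution and the prior. The inverse-multiquadric kernel and the weight lam = 10 are common choices but assumptions here, not necessarily the paper's exact configuration.

```python
import torch

def imq_kernel(a, b, c=1.0):
    # Inverse multiquadric kernel k(u, v) = c / (c + ||u - v||^2),
    # a common WAE-MMD choice (an assumption).
    return c / (c + torch.cdist(a, b) ** 2)

def wae_mmd_loss(x, x_rec, z, z_prior, lam=10.0):
    # Penalized objective: squared-error reconstruction plus
    # lam * MMD^2 between encoded codes z ~ q(z) and prior draws
    # z_prior ~ p(z). lam = 10 is an illustrative setting.
    rec = ((x - x_rec) ** 2).sum(dim=1).mean()
    n = z.shape[0]
    k_zz = imq_kernel(z, z)
    k_pp = imq_kernel(z_prior, z_prior)
    k_zp = imq_kernel(z, z_prior)
    # Unbiased MMD^2 estimate: drop diagonal (self-similarity) terms.
    mmd = ((k_zz.sum() - k_zz.diagonal().sum()) / (n * (n - 1))
           + (k_pp.sum() - k_pp.diagonal().sum()) / (n * (n - 1))
           - 2 * k_zp.mean())
    return rec + lam * mmd

# Toy usage with random tensors standing in for an encoder-decoder pair.
x = torch.randn(128, 784)
loss = wae_mmd_loss(x, x_rec=x + 0.1 * torch.randn_like(x),
                    z=torch.randn(128, 8), z_prior=torch.randn(128, 8))
```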
It has been conjectured that the Fisher divergence is more robust to model uncertainty than the conventional Kullback-Leibler (KL) divergence. This motivates the design of a new class of robust generative auto-encoders (AE) referred to as Fisher auto-encoders. Our approach is to design Fisher AEs by minimizing the Fisher divergence between the intractable joint distribution of observed data and latent variables and the postulated/modeled joint distribution. In contrast to KL-based variational AEs (VAEs), the Fisher AE can exactly quantify the distance between the true and the model-based posterior distributions. Qualitative and quantitative results are provided on both MNIST and celebA datasets demonstrating the competitive performance of Fisher AEs in terms of robustness compared to other AEs such as VAEs and Wasserstein AEs.
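
The Fisher divergence driving this design is concrete enough to verify by hand. The sketch below compares a Monte Carlo estimate of F(p||q) = E_p[(d/dx log p - d/dx log q)^2] between two one-dimensional Gaussians against the closed form obtained by expanding the (linear) score difference; the particular means and variances are arbitrary assumptions, and this is an illustration of the divergence itself, not of the paper's auto-encoder.

```python
import numpy as np

rng = np.random.default_rng(3)
mu1, s1 = 0.0, 1.0               # p = N(mu1, s1^2)
mu2, s2 = 0.5, 1.5               # q = N(mu2, s2^2)

# Scores of 1-D Gaussians: d/dx log N(mu, s^2) = -(x - mu) / s^2.
score_p = lambda x: -(x - mu1) / s1**2
score_q = lambda x: -(x - mu2) / s2**2

# Monte Carlo estimate of the Fisher divergence under p.
x = rng.normal(mu1, s1, size=200_000)
mc = np.mean((score_p(x) - score_q(x)) ** 2)

# Score difference is c*x + d, so E_p[(c*x + d)^2] expands exactly.
c = 1 / s2**2 - 1 / s1**2
d = mu1 / s1**2 - mu2 / s2**2
exact = c**2 * (s1**2 + mu1**2) + 2 * c * d * mu1 + d**2
print(mc, exact)                 # the two estimates should agree
```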
In this work we introduce a novel stochastic algorithm dubbed SNIPS, which draws samples from the posterior distribution of any linear inverse problem, where the observation is assumed to be contaminated by additive white Gaussian noise. Our solution incorporates ideas from Langevin dynamics and Newton's method, and exploits a pre-trained minimum mean squared error (MMSE) Gaussian denoiser. The proposed approach relies on an intricate derivation of the posterior score function that includes a singular value decomposition (SVD) of the degradation operator, in order to obtain a tractable iterative algorithm for the desired sampling. Due to its stochasticity, the algorithm can produce multiple high perceptual quality samples for the same noisy observation. We demonstrate the abilities of the proposed paradigm for image deblurring, super-resolution, and compressive sensing. We show that the samples produced are sharp, detailed and consistent with the given measurements, and that their diversity exposes the inherent uncertainty in the inverse problem being solved.
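
The shape of the sampling iteration can be illustrated with a stand-in score. SNIPS builds the posterior score from a pretrained MMSE denoiser and an SVD of the degradation operator; the sketch below substitutes a standard-normal prior so the score is available in closed form, keeping only the Langevin structure. Dimensions, noise level, and step size are assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
n, m, sigma = 64, 32, 0.1
A = rng.standard_normal((m, n)) / np.sqrt(m)   # degradation operator
x_true = rng.standard_normal(n)
y = A @ x_true + sigma * rng.standard_normal(m)

def posterior_score(x):
    # grad log p(x|y) = grad log p(y|x) + grad log p(x),
    # with a standard-normal prior p(x) = N(0, I) as a stand-in
    # for the learned denoiser-based score.
    return A.T @ (y - A @ x) / sigma**2 - x

# Unadjusted Langevin dynamics: x <- x + step*score + sqrt(2*step)*noise.
# Re-running the loop from fresh noise yields distinct posterior
# samples for the same observation y.
x = rng.standard_normal(n)
step = 5e-4
for _ in range(10_000):
    x += step * posterior_score(x) + np.sqrt(2 * step) * rng.standard_normal(n)

print("measurement residual:", np.linalg.norm(A @ x - y))
```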
Recently, it has been shown that incoherence is an unrealistic assumption for compressed sensing when applied to many inverse problems. Instead, the key property that permits efficient recovery in such problems is so-called local incoherence. Similarly, the standard notion of sparsity is also inadequate for many real-world problems. In particular, in many applications, the optimal sampling strategy depends on asymptotic incoherence and the signal sparsity structure. The purpose of this paper is to study asymptotic incoherence and its implications for the design of optimal sampling strategies and efficient sparsity bases. It is determined how fast asymptotic incoherence can decay in general for isometries. Furthermore, it is shown that Fourier sampling and wavelet sparsity, whilst globally coherent, yield optimal asymptotic incoherence as a power law up to a constant factor. Sharp bounds on the asymptotic incoherence for Fourier sampling with polynomial bases are also provided. A numerical experiment is also presented to demonstrate the role of asymptotic incoherence in finding good subsampling strategies.
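
The gap between global and local (asymptotic) incoherence is easy to see numerically. The sketch below computes the inner products between the unitary DFT and an orthonormal Haar wavelet basis: the global coherence is 1 (the constant wavelet aligns perfectly with the zero frequency), while the block pairing high frequencies with fine-scale wavelets is far less coherent. The recursive Haar construction and the block choice are illustrative assumptions, not the paper's experiment.

```python
import numpy as np

def haar(n):
    # Orthonormal Haar matrix for n a power of two; rows are
    # basis vectors, coarsest scale first.
    if n == 1:
        return np.array([[1.0]])
    h = haar(n // 2)
    top = np.kron(h, np.array([1.0, 1.0]))             # coarse scales
    bot = np.kron(np.eye(n // 2), np.array([1.0, -1.0]))  # finest scale
    return np.vstack([top, bot]) / np.sqrt(2.0)

n = 64
F = np.fft.fft(np.eye(n)) / np.sqrt(n)   # unitary DFT, rows = frequencies
W = haar(n)
U = np.abs(F @ W.T.conj())               # |<fourier_i, haar_j>|

print("global coherence:", U.max())      # = 1: globally coherent
# High frequencies vs finest-scale wavelets: noticeably less coherent.
print("tail-block coherence:", U[n // 2:, n // 2:].max())
```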