
Empirical Bayesian Inference using Joint Sparsity

Added by Theresa Scarnati
Publication date: 2021
Language: English

This paper develops a new empirical Bayesian inference algorithm for solving a linear inverse problem given multiple measurement vectors (MMV) of under-sampled and noisy observable data. Specifically, by exploiting the joint sparsity across the multiple measurements in the sparse domain of the underlying signal or image, we construct a new support informed sparsity promoting prior. Several applications can be modeled using this framework, and as a prototypical example we consider reconstructing an image from synthetic aperture radar (SAR) observations acquired at nearby azimuth angles. Our numerical experiments demonstrate that using this new prior not only improves the accuracy of the recovery, but also reduces the uncertainty in the posterior when compared to standard sparsity promoting priors.
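
The two-stage construction described above lends itself to a compact sketch: recover each measurement vector independently, estimate the joint support from the magnitudes shared across the recoveries, then re-solve with weights that favor the estimated support. The following minimal sketch assumes a weighted-l1 (Laplace-type MAP) formulation solved by ISTA; the solver, the weighting rule, and all dimensions are illustrative choices, not the paper's exact construction.

```python
import numpy as np

def ista_weighted_l1(A, y, w, lam=0.05, n_iter=300):
    """Weighted-l1 MAP recovery via iterative soft thresholding (ISTA)."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x - A.T @ (A @ x - y) / L        # gradient step on the data misfit
        x = np.sign(z) * np.maximum(np.abs(z) - lam * w / L, 0.0)
    return x

rng = np.random.default_rng(0)
n, m, J, s = 128, 48, 8, 6                   # signal size, samples, MMVs, sparsity
support = rng.choice(n, size=s, replace=False)
A = rng.standard_normal((m, n)) / np.sqrt(m)
X_true = np.zeros((J, n))
X_true[:, support] = rng.standard_normal((J, s))   # shared support, varying values
Y = X_true @ A.T + 0.01 * rng.standard_normal((J, m))

# Stage 1: independent recoveries under a standard (unweighted) sparsity prior.
X0 = np.array([ista_weighted_l1(A, y, np.ones(n)) for y in Y])

# Stage 2: estimate the joint support from magnitudes shared across the MMVs,
# then shrink the weights on the estimated support (support-informed prior).
score = np.abs(X0).mean(axis=0)
w = 1.0 / (score / score.max() + 0.05)
X1 = np.array([ista_weighted_l1(A, y, w) for y in Y])

rel = lambda X: np.linalg.norm(X - X_true) / np.linalg.norm(X_true)
print(f"standard prior error: {rel(X0):.3f}, support-informed error: {rel(X1):.3f}")
```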

Related research

In this paper, we exploit the gradient flow structure of continuous-time formulations of Bayesian inference when designing their numerical time-stepping. We focus on two particular examples, namely, the continuous-time ensemble Kalman-Bucy filter and a particle discretisation of the Fokker-Planck equation associated with Brownian dynamics. Both formulations can lead to stiff differential equations which require special numerical methods for their efficient implementation. We compare discrete gradient methods to alternative semi-implicit and other iterative implementations of the underlying Bayesian inference problems.
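
As a rough illustration of why the time-stepping matters, the sketch below applies explicit Euler to the gradient flow of ensemble Kalman inversion for a linear forward map, a close relative of the ensemble Kalman-Bucy dynamics mentioned above; the operator, noise level, and step size are assumptions. The tiny step size required for stability is precisely the stiffness that motivates the semi-implicit and discrete gradient alternatives.

```python
import numpy as np

rng = np.random.default_rng(1)
d, m, J = 4, 6, 40                        # state dim, data dim, ensemble size
A = rng.standard_normal((m, d))           # linear forward map (illustrative)
x_true = rng.standard_normal(d)
y = A @ x_true + 0.1 * rng.standard_normal(m)
Gamma_inv = np.eye(m) / 0.1**2            # data noise precision

X = rng.standard_normal((J, d))           # initial ensemble drawn from the prior
dt, T = 1e-4, 0.5                         # explicit Euler: the flow is stiff, so
for _ in range(int(T / dt)):              # a tiny step is needed for stability
    C = np.cov(X, rowvar=False)           # empirical ensemble covariance
    # gradient flow: dX_i/dt = -C A^T Gamma^{-1} (A X_i - y), written row-wise
    X = X + dt * (-(X @ A.T - y) @ Gamma_inv @ A @ C)

print("ensemble mean:", X.mean(axis=0).round(3))
print("truth:        ", x_true.round(3))
```
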
Sina Bittens, 2017
In this paper a deterministic sparse Fourier transform algorithm is presented which breaks the quadratic-in-sparsity runtime bottleneck for a large class of periodic functions exhibiting structured frequency support. These functions include, e.g., the oft-considered set of block frequency sparse functions of the form $$f(x) = \sum^{n}_{j=1} \sum^{B-1}_{k=0} c_{\omega_j + k}\, e^{i(\omega_j + k)x},~~\{ \omega_1, \dots, \omega_n \} \subset \left(-\left\lceil \frac{N}{2}\right\rceil, \left\lfloor \frac{N}{2}\right\rfloor\right] \cap \mathbb{Z}$$ as a simple subclass. Theoretical error bounds in combination with numerical experiments demonstrate that the newly proposed algorithms are both fast and robust to noise. In particular, they outperform standard sparse Fourier transforms in the rapid recovery of block frequency sparse functions of the type above.
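
The block frequency sparse model above is easy to instantiate. The sketch below generates such a function on equispaced samples and recovers its frequency support with a dense FFT and thresholding; this is purely illustrative, since the point of the paper's deterministic algorithm is to avoid the dense length-$N$ transform.

```python
import numpy as np

# Block frequency sparse model: n blocks of B consecutive frequencies each.
N, n, B = 256, 2, 4
rng = np.random.default_rng(2)
starts = np.array([-90, 37])                      # block start frequencies omega_j
c = rng.standard_normal(n * B) + 1j * rng.standard_normal(n * B)

x = 2 * np.pi * np.arange(N) / N                  # equispaced samples on [0, 2*pi)
f = sum(c[j * B + k] * np.exp(1j * (w + k) * x)
        for j, w in enumerate(starts) for k in range(B))

# Illustration only: a dense FFT finds the support by magnitude thresholding.
fhat = np.fft.fft(f) / N                          # recovers c exactly on the grid
freqs = np.fft.fftfreq(N, d=1.0 / N).astype(int)  # integer frequencies in (-N/2, N/2]
print("recovered support:", np.sort(freqs[np.abs(fhat) > 1e-8]))
```
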
The reconstruction of an unknown acoustic source is studied using noisy multiple frequency data measured on a remote closed surface. Assume that the unknown source is coded in a spatially dependent piecewise constant function whose support set is the target to be determined. In this setting, the unknown source can be formalized by a level set function. The function is explored with a Bayesian level set approach. To reduce the infinite dimensional problem to finite dimension, we parameterize the level set function by a radial basis expansion. The well-posedness of the posterior distribution is proven. The posterior samples are generated according to the Metropolis-Hastings algorithm, and the sample mean is used to approximate the unknown. Several shapes are tested to verify the effectiveness of the proposed algorithm. These numerical results show that the proposed algorithm is feasible and competitive with the Matérn random field approach for the acoustic source problem.
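
A minimal one-dimensional sketch of this pipeline (a radial basis expansion of the level set function, sign thresholding to obtain the piecewise constant source, and random-walk Metropolis-Hastings over the expansion coefficients) is given below. The Gaussian-kernel forward operator, priors, and step sizes are stand-ins for the acoustic forward map, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(3)

# Radial basis expansion of the level set function on a 1D grid (illustrative).
grid = np.linspace(0, 1, 100)
centers = np.linspace(0, 1, 8)                    # fixed RBF centers
Phi = np.exp(-((grid[:, None] - centers) ** 2) / (2 * 0.1**2))

def source(theta):
    """Piecewise constant source coded by the sign of the level set function."""
    return (Phi @ theta > 0).astype(float)

# Synthetic data: a smoothing forward operator stands in for the acoustic map.
K = np.exp(-((grid[:, None] - grid[None, :]) ** 2) / (2 * 0.05**2)) / 100
theta_true = rng.standard_normal(8)
y = K @ source(theta_true) + 0.01 * rng.standard_normal(100)

def log_post(theta, sigma=0.01):
    misfit = y - K @ source(theta)
    return -0.5 * misfit @ misfit / sigma**2 - 0.5 * theta @ theta  # N(0, I) prior

# Random-walk Metropolis-Hastings over the RBF coefficients.
theta = rng.standard_normal(8)
lp = log_post(theta)
samples = []
for _ in range(5000):
    prop = theta + 0.1 * rng.standard_normal(8)
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:       # accept/reject step
        theta, lp = prop, lp_prop
    samples.append(source(theta))

q_mean = np.mean(samples[1000:], axis=0)          # posterior mean of the source
print("estimated support size:", int((q_mean > 0.5).sum()),
      "| true:", int(source(theta_true).sum()))
```
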
In the context of a high-dimensional linear regression model, we propose the use of an empirical correlation-adaptive prior that makes use of information in the observed predictor variable matrix to adaptively address high collinearity, determining if parameters associated with correlated predictors should be shrunk together or kept apart. Under suitable conditions, we prove that this empirical Bayes posterior concentrates around the true sparse parameter at the optimal rate asymptotically. A simplified version of a shotgun stochastic search algorithm is employed to implement the variable selection procedure, and we show, via simulation experiments across different settings and a real-data application, the favorable performance of the proposed method compared to existing methods.
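
The variable selection loop can be sketched as follows, with a BIC score standing in for the paper's correlation-adaptive empirical Bayes marginal likelihood; only the shotgun-style neighborhood search structure is illustrated, on a design with one highly collinear pair.

```python
import numpy as np

def bic(X, y, S):
    """Model score for support S (BIC stands in for the paper's empirical
    correlation-adaptive marginal likelihood; higher is better here)."""
    n = len(y)
    if S:
        Xs = X[:, sorted(S)]
        beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
        rss = np.sum((y - Xs @ beta) ** 2)
    else:
        rss = y @ y
    return -n * np.log(rss / n) - len(S) * np.log(n)

rng = np.random.default_rng(4)
n, p = 100, 30
X = rng.standard_normal((n, p))
X[:, 1] = X[:, 0] + 0.1 * rng.standard_normal(n)   # a highly collinear pair
beta_true = np.zeros(p)
beta_true[[0, 5, 9]] = [2.0, -1.5, 1.0]
y = X @ beta_true + rng.standard_normal(n)

# Simplified shotgun stochastic search: score all one-variable additions and
# deletions of the current model, then jump to a neighbor with probability
# proportional to its exponentiated score.
S, best = set(), set()
for _ in range(200):
    neighbors = [S | {j} for j in range(p) if j not in S] + [S - {j} for j in S]
    scores = np.array([bic(X, y, T) for T in neighbors])
    probs = np.exp(scores - scores.max())
    S = neighbors[rng.choice(len(neighbors), p=probs / probs.sum())]
    if bic(X, y, S) > bic(X, y, best):
        best = set(S)

print("selected variables:", sorted(best), "| truth: [0, 5, 9]")
```
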
We consider best approximation problems in a nonlinear subset $\mathcal{M}$ of a Banach space of functions $(\mathcal{V},\|\bullet\|)$. The norm is assumed to be a generalization of the $L^2$-norm for which only a weighted Monte Carlo estimate $\|\bullet\|_n$ can be computed. The objective is to obtain an approximation $v\in\mathcal{M}$ of an unknown function $u \in \mathcal{V}$ by minimizing the empirical norm $\|u-v\|_n$. We consider this problem for general nonlinear subsets and establish error bounds for the empirical best approximation error. Our results are based on a restricted isometry property (RIP) which holds in probability and is independent of the nonlinear least squares setting. Several model classes are examined where analytical statements can be made about the RIP and the results are compared to existing sample complexity bounds from the literature. We find that for well-studied model classes our general bound is weaker but exhibits many of the same properties as these specialized bounds. Notably, we demonstrate the advantage of an optimal sampling density (as known for linear spaces) for sets of functions with sparse representations.
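
For the linear-space case referenced in the last sentence, the optimal sampling density is the normalized inverse Christoffel function, and the empirical norm minimization reduces to a weighted least squares problem. The sketch below illustrates this for a Legendre basis on $[-1,1]$; the target function and sample budget are arbitrary.

```python
import numpy as np

# Empirical best approximation in a linear space (Legendre polynomials on [-1,1]),
# sampling from the optimal density: the normalized inverse Christoffel function.
m, n = 10, 200                                      # dimension of V, sample budget
u = lambda x: np.exp(x) * np.sin(3 * x)             # target function (illustrative)

def basis(x):
    # Legendre polynomials, orthonormal w.r.t. the uniform measure on [-1, 1]
    return np.polynomial.legendre.legvander(x, m - 1) * np.sqrt(2 * np.arange(m) + 1)

# Rejection sampling from g(x) = (1/m) sum_k phi_k(x)^2 dmu(x): the density
# ratio g/mu is bounded by m, which gives the acceptance test below.
rng = np.random.default_rng(5)
pts = []
while len(pts) < n:
    x = rng.uniform(-1, 1)
    ratio = (basis(np.array([x]))[0] ** 2).mean()   # (1/m) sum_k phi_k(x)^2
    if rng.uniform(0, m) < ratio:
        pts.append(x)
x = np.array(pts)

# Weighted least squares: minimize the empirical norm |u - v|_n with weights 1/g.
w = 1.0 / (basis(x) ** 2).mean(axis=1)
B = basis(x) * np.sqrt(w)[:, None]
c, *_ = np.linalg.lstsq(B, np.sqrt(w) * u(x), rcond=None)

xt = np.linspace(-1, 1, 1000)
print(f"max error on a fine grid: {np.abs(basis(xt) @ c - u(xt)).max():.2e}")
```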