
Duality between Approximate Bayesian Methods and Prior Robustness

Added by Dr. Chaitanya Joshi
Publication date: 2020
Language: English





In this paper we show that there is a link between approximate Bayesian methods and prior robustness. We show that what is typically recognized as an approximation to the likelihood, whether due to simulated data as in Approximate Bayesian Computation (ABC) methods or due to a functional approximation of the likelihood, can instead be viewed as an implicit exercise in prior robustness. We first define two new classes of priors for the case where a sufficient statistic is available, establish their mathematical properties, and show, for a simple illustrative example, that these classes of priors can be used to obtain the posterior distribution that would result from implementing ABC. We then generalize and define two further classes of priors that are applicable in much more general scenarios: one where a sufficient statistic is not available and another where the likelihood is approximated using a functional approximation. We then discuss the interpretation and elicitation of the proposed classes, as well as their potential applications and possible computational benefits. These classes establish the duality between approximate Bayesian inference and prior robustness for a wide category of Bayesian inference methods.
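
As a concrete illustration of the ABC side of this duality, here is a minimal rejection-ABC sketch in Python for a normal model with unknown mean, where the sample mean is the sufficient statistic. The prior, tolerance, and sample sizes are illustrative choices, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Observed data from a hypothetical normal model with unknown mean theta.
y_obs = rng.normal(loc=2.0, scale=1.0, size=50)
s_obs = y_obs.mean()  # sufficient statistic for the mean

def abc_rejection(n_draws=50_000, eps=0.05):
    """ABC rejection sampler: keep prior draws whose simulated
    summary statistic lands within eps of the observed one."""
    theta = rng.normal(loc=0.0, scale=5.0, size=n_draws)  # prior draws
    accepted = []
    for t in theta:
        s_sim = rng.normal(loc=t, scale=1.0, size=50).mean()
        if abs(s_sim - s_obs) < eps:
            accepted.append(t)
    return np.array(accepted)

post = abc_rejection()
print(f"ABC posterior mean ~ {post.mean():.3f} from {post.size} accepted draws")
```

In the paper's framing, the acceptance step above can equivalently be read as replacing the stated prior with a member of a suitably defined robust class of priors.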

Related research

State-space models provide an important body of techniques for analyzing time series, but their use requires estimating unobserved states. The optimal estimate of the state is its conditional expectation given the observation history, and computing this expectation is hard when there are nonlinearities. Existing filtering methods, including sequential Monte Carlo, tend to be either inaccurate or slow. In this paper, we study a nonlinear filter for nonlinear/non-Gaussian state-space models, which uses Laplace's method, an asymptotic series expansion, to approximate the state's conditional mean and variance, together with a Gaussian conditional distribution. This Laplace-Gaussian filter (LGF) gives fast, recursive, deterministic state estimates, with an error that is set by the stochastic characteristics of the model and is, we show, stable over time. We illustrate the estimation ability of the LGF by applying it to the problem of neural decoding and compare it to sequential Monte Carlo both in simulations and with real data. We find that the LGF can deliver superior results in a small fraction of the computing time.
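
To make the filtering recursion concrete, below is a minimal Python sketch of one LGF step for an assumed scalar model with random-walk state dynamics and Poisson observations (a common neural-decoding setup). The model choice, Newton iteration count, and noise values are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def lgf_step(y, m_prior, v_prior, q=0.01, n_newton=20):
    """One Laplace-Gaussian filter step for a scalar state-space model
    with random-walk state x_t = x_{t-1} + noise(q) and Poisson
    observations y_t ~ Poisson(exp(x_t)).

    Laplace's method: locate the mode of the log posterior of x_t by
    Newton iteration and use the negative inverse curvature there as
    the Gaussian posterior variance."""
    x = m_prior
    for _ in range(n_newton):
        # Gradient and Hessian of y*x - exp(x) - (x - m)^2 / (2v) in x.
        grad = y - np.exp(x) - (x - m_prior) / v_prior
        hess = -np.exp(x) - 1.0 / v_prior
        x -= grad / hess
    m_post, v_post = x, -1.0 / hess
    # Predict forward through the random-walk state dynamics.
    return m_post, v_post, m_post, v_post + q

# Filter a short simulated spike-count series.
rng = np.random.default_rng(1)
x_true, m, v = 0.0, 0.0, 1.0
for t in range(5):
    x_true += rng.normal(scale=0.1)
    y = rng.poisson(np.exp(x_true))
    m_post, v_post, m, v = lgf_step(y, m, v)
    print(f"t={t} y={y} estimate {m_post:.3f} +/- {np.sqrt(v_post):.3f}")
```

Because each step is a deterministic mode-finding problem rather than a particle population update, the recursion is fast and reproducible, which is the speed advantage the abstract describes.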
In (exploratory) factor analysis, the loading matrix is identified only up to orthogonal rotation. For identifiability, one thus often takes the loading matrix to be lower triangular with positive diagonal entries. In Bayesian inference, a standard practice is then to specify a prior under which the loadings are independent, the off-diagonal loadings are normally distributed, and the diagonal loadings follow a truncated normal distribution. This prior specification, however, depends in an important way on how the variables and associated rows of the loading matrix are ordered. We show how a minor modification of the approach allows one to compute with the identifiable lower triangular loading matrix but maintain invariance properties under reordering of the variables.
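
A minimal Python sketch of one draw from the standard prior described above (independent normal off-diagonal loadings, truncated-normal positive diagonal loadings, lower-triangular structure); the unit-variance hyperparameters are illustrative assumptions.

```python
import numpy as np

def sample_loading_matrix(p, k, rng):
    """Draw one p x k loading matrix from the standard identifiability
    prior: lower triangular, N(0,1) off-diagonal loadings, positive
    truncated-normal diagonal loadings."""
    L = np.zeros((p, k))
    for j in range(k):
        # |N(0,1)| equals a standard normal truncated to (0, inf).
        L[j, j] = abs(rng.normal())
        L[j + 1:, j] = rng.normal(size=p - j - 1)
    return L

rng = np.random.default_rng(2)
print(sample_loading_matrix(p=5, k=2, rng=rng).round(2))
```

The ordering sensitivity the abstract points out is visible here: which loadings get the truncated-normal treatment depends entirely on how the p variables are ordered.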
This report is a collection of comments on the Read Paper of Fearnhead and Prangle (2011), to appear in the Journal of the Royal Statistical Society Series B, along with a reply from the authors.
Ding Xiang, Galin L. Jones (2017)
We consider penalized regression models under a unified framework in which the particular method is determined by the form of the penalty term. We propose a fully Bayesian approach that incorporates both sparse and dense settings and show how to use a type of model averaging to eliminate the nuisance penalty parameters and perform inference through the marginal posterior distribution of the regression coefficients. We establish tail robustness of the resulting estimator as well as conditional and marginal posterior consistency. We develop an efficient component-wise Markov chain Monte Carlo algorithm for sampling. Numerical results show that the method tends to select the optimal penalty, performs well in both variable selection and prediction, and is comparable to, and often better than, alternative methods. Both simulated and real data examples are provided.
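
To illustrate the penalty-as-prior view, here is a minimal component-wise Metropolis sketch in Python for a lasso-type (double-exponential) penalty. This is a simple stand-in sampler with fixed, assumed values of lam and sigma2, not the paper's model-averaging algorithm.

```python
import numpy as np

def componentwise_mh(X, y, lam=1.0, sigma2=1.0, n_iter=2000, step=0.1, seed=3):
    """Component-wise Metropolis sampler for penalized regression seen
    as Bayesian inference: the lasso penalty lam * |beta_j| plays the
    role of a double-exponential log prior."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    beta = np.zeros(p)
    draws = np.empty((n_iter, p))

    def log_post(b):
        resid = y - X @ b
        return -0.5 * resid @ resid / sigma2 - lam * np.abs(b).sum()

    lp = log_post(beta)
    for it in range(n_iter):
        for j in range(p):  # update one coefficient at a time
            prop = beta.copy()
            prop[j] += rng.normal(scale=step)
            lp_prop = log_post(prop)
            if np.log(rng.uniform()) < lp_prop - lp:
                beta, lp = prop, lp_prop
        draws[it] = beta
    return draws

# Toy data with a sparse truth.
rng = np.random.default_rng(4)
X = rng.normal(size=(100, 5))
y = X @ np.array([2.0, 0.0, 0.0, -1.0, 0.0]) + rng.normal(size=100)
print(componentwise_mh(X, y)[1000:].mean(axis=0).round(2))
```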
In this article, we consider a non-parametric Bayesian approach to multivariate quantile regression. The collection of related conditional distributions of a response vector Y given a univariate covariate X is modeled using a Dependent Dirichlet Process (DDP) prior. The DDP is used to introduce dependence across values of X. As the realizations from a Dirichlet process prior are almost surely discrete, we convolve it with a kernel. To model the error distribution as flexibly as possible, we use a countable mixture of multidimensional normal distributions as our kernel. For posterior computation, we use a truncated stick-breaking representation of the DDP. This approximation enables us to work with only a finite number of parameters. We use a block Gibbs sampler for estimating the model parameters. We illustrate our method with simulation studies and real data applications. Finally, we provide a theoretical justification for the proposed method through posterior consistency. Our proposed procedure is new even when the response is univariate.
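
A minimal Python sketch of the truncated stick-breaking construction that reduces the Dirichlet process to finitely many parameters; the concentration parameter and truncation level below are illustrative.

```python
import numpy as np

def stick_breaking_weights(alpha, truncation, rng):
    """Truncated stick-breaking construction of Dirichlet-process
    weights: w_k = v_k * prod_{l<k} (1 - v_l) with v_k ~ Beta(1, alpha),
    and the last stick set to 1 so the weights sum to one."""
    v = rng.beta(1.0, alpha, size=truncation)
    v[-1] = 1.0  # absorb the leftover mass at the truncation point
    remaining = np.concatenate(([1.0], np.cumprod(1.0 - v[:-1])))
    return v * remaining

rng = np.random.default_rng(5)
w = stick_breaking_weights(alpha=2.0, truncation=10, rng=rng)
print(w.round(3), w.sum())
```

Truncating the stick at a finite level is what makes the block Gibbs sampler mentioned in the abstract feasible: every mixture weight and atom becomes an ordinary finite-dimensional parameter.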