
Exact Bayesian Analysis of Mixtures

Added by Christian P. Robert
Publication date: 2010
Language: English





In this paper, we show how a complete and exact Bayesian analysis of a parametric mixture model is possible in some cases when components of the mixture are taken from exponential families and when conjugate priors are used. This restricted set-up allows us to show the relevance of the Bayesian approach as well as to exhibit the limitations of a complete analysis, namely that it is impossible to conduct this analysis when the sample size is too large, when the data are not from an exponential family, or when priors that are more complex than conjugate priors are used.
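As a hedged illustration of what such a closed-form analysis looks like (a minimal sketch, not code from the paper; the data, priors, and function names below are assumptions), consider a two-component Poisson mixture with conjugate Gamma priors on the rates and a Beta prior on the weight: the posterior is an explicit mixture over all 2^n allocation vectors, each term available in closed form by conjugacy. The 2^n enumeration is also exactly why the analysis becomes infeasible once the sample size grows.

import itertools
import math

def log_pois_gamma_marginal(xs, a, b):
    # log of  integral  prod_i Poisson(x_i | lam) * Gamma(lam | shape=a, rate=b) dlam
    s, n = sum(xs), len(xs)
    return (a * math.log(b) - math.lgamma(a)
            + math.lgamma(a + s) - (a + s) * math.log(b + n)
            - sum(math.lgamma(x + 1) for x in xs))

def exact_mixture_posterior(data, a=(1.0, 1.0), b=(1.0, 0.1), alpha=1.0, beta=1.0):
    # Enumerate all 2^n allocation vectors z; every term is closed-form thanks to
    # conjugacy (Beta prior on the weight, Gamma prior on each Poisson rate).
    log_w, cond_means = [], []
    for z in itertools.product((0, 1), repeat=len(data)):
        x0 = [x for x, zi in zip(data, z) if zi == 0]
        x1 = [x for x, zi in zip(data, z) if zi == 1]
        lw = (math.lgamma(alpha + len(x0)) + math.lgamma(beta + len(x1))
              - math.lgamma(alpha + beta + len(data)))        # Beta-Bernoulli term, constants dropped
        lw += log_pois_gamma_marginal(x0, a[0], b[0])
        lw += log_pois_gamma_marginal(x1, a[1], b[1])
        log_w.append(lw)
        cond_means.append([(a[j] + sum(xj)) / (b[j] + len(xj))  # E[rate_j | data, z]
                           for j, xj in enumerate((x0, x1))])
    m = max(log_w)
    w = [math.exp(l - m) for l in log_w]
    tot = sum(w)
    return [sum(wi * cm[j] for wi, cm in zip(w, cond_means)) / tot for j in (0, 1)]

print(exact_mixture_posterior([0, 1, 0, 2, 7, 9, 8, 1]))  # 2^8 terms; hopeless for large n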



Related research

This survey covers state-of-the-art Bayesian techniques for the estimation of mixtures. It complements the earlier Marin, Mengersen and Robert (2005) by covering further classes of distributions: multinomial, latent class and t distributions. It also exhibits closed-form solutions for Bayesian inference in some discrete setups. Lastly, it sheds new light on the computation of Bayes factors via the approximation of Chib (1995).
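For reference (an addition, not part of the abstract): Chib's (1995) approximation rests on the marginal likelihood identity evaluated at a high-posterior-density point θ*,

    m(y) = f(y | θ*) π(θ*) / π(θ* | y),   i.e.   log m(y) = log f(y | θ*) + log π(θ*) − log π̂(θ* | y),

where the posterior ordinate π̂(θ* | y) is estimated from Gibbs output, and a Bayes factor is the ratio of the two models' estimated m(y). In mixture models the ordinate must also account for the k! relabellings of the components, the well-known difficulty with Chib's estimator in this setting.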
Approximate Bayesian computation (ABC) or likelihood-free inference algorithms are used to find approximations to posterior distributions without making explicit use of the likelihood function, depending instead on simulation of sample data sets from the model. In this paper we show that under the assumption of the existence of a uniform additive model error term, ABC algorithms give exact results when sufficient summaries are used. This interpretation allows the approximation made in many previous application papers to be understood, and should guide the choice of metric and tolerance in future work. ABC algorithms can be generalized by replacing the 0-1 cut-off with an acceptance probability that varies with the distance of the simulated data from the observed data. The acceptance density gives the distribution of the error term, enabling the uniform error usually used to be replaced by a general distribution. This generalization can also be applied to approximate Markov chain Monte Carlo algorithms. In light of this work, ABC algorithms can be seen as calibration techniques for implicit stochastic models, inferring parameter values in light of the computer model, data, prior beliefs about the parameter values, and any measurement or model errors.
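A minimal sketch of the generalization described above (illustrative code, not from the paper; the toy model, kernel, and tolerance are assumptions): the hard 0-1 cut-off is replaced by an acceptance probability that decays with the distance between simulated and observed summaries, here a Gaussian kernel, which under the paper's interpretation corresponds to assuming an additive Gaussian error on the summary.

import math
import random

def abc_soft(y_obs, prior_sample, simulate, summary, h, n_draws=50_000):
    s_obs = summary(y_obs)
    accepted = []
    for _ in range(n_draws):
        theta = prior_sample()
        s_sim = summary(simulate(theta))
        # soft acceptance: Gaussian kernel instead of the indicator 1{|s_sim - s_obs| < h}
        p_accept = math.exp(-0.5 * ((s_sim - s_obs) / h) ** 2)
        if random.random() < p_accept:
            accepted.append(theta)
    return accepted

# toy usage: infer a normal mean, with the sample mean as (sufficient) summary
y_obs = [random.gauss(1.5, 1.0) for _ in range(20)]
draws = abc_soft(
    y_obs,
    prior_sample=lambda: random.gauss(0.0, 5.0),
    simulate=lambda th: [random.gauss(th, 1.0) for _ in range(20)],
    summary=lambda y: sum(y) / len(y),
    h=0.1,
)
print(len(draws), sum(draws) / len(draws))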
Daniel Yekutieli, 2011
We address the problem of providing inference from a Bayesian perspective for parameters selected after viewing the data. We present a Bayesian framework for providing inference for selected parameters, based on the observation that providing Bayesian inference for selected parameters is a truncated data problem. We show that if the prior for the parameter is non-informative, or if the parameter is a fixed unknown constant, then it is necessary to adjust the Bayesian inference for selection. Our second contribution is the introduction of Bayesian False Discovery Rate controlling methodology, which generalizes existing Bayesian FDR methods that are only defined in the two-group mixture model. We illustrate our results by applying them to simulated data and data from a microarray experiment.
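A minimal sketch of the truncated-data view of selection (an illustration under assumed settings, not the paper's full framework): a normal mean θ is only reported because |y| exceeded a cut-off c, and the selection-adjusted likelihood divides by the selection probability P(|Y| > c | θ); with a non-informative (flat) prior the adjusted and unadjusted posteriors differ, as the abstract indicates.

import math

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def posterior_means(y, c, lo=-10.0, hi=10.0, n=4001):
    grid = [lo + (hi - lo) * i / (n - 1) for i in range(n)]
    unadj, adj = [], []
    for th in grid:
        like = math.exp(-0.5 * (y - th) ** 2)                  # flat prior: posterior ∝ likelihood
        p_sel = 1.0 - (norm_cdf(c - th) - norm_cdf(-c - th))   # P(|Y| > c | theta)
        unadj.append(like)
        adj.append(like / p_sel)                               # truncated-data (selection-adjusted) likelihood
    def mean(w):
        return sum(t * wi for t, wi in zip(grid, w)) / sum(w)
    return mean(unadj), mean(adj)

print(posterior_means(y=2.2, c=1.96))  # unadjusted mean near 2.2; adjusted mean shrinks toward 0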
We develop a scalable multi-step Monte Carlo algorithm for inference under a large class of nonparametric Bayesian models for clustering and classification. Each step is embarrassingly parallel and can be implemented using the same Markov chain Monte Carlo sampler. The simplicity and generality of our approach make inference for a wide range of Bayesian nonparametric mixture models applicable to large datasets. Specifically, we apply the approach to inference under a product partition model with regression on covariates. We show results for inference with two motivating data sets: a large set of electronic health records (EHR) and a bank telemarketing dataset. We find interesting clusters and favorable classification performance relative to other widely used competing classifiers.
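A minimal sketch of the embarrassingly parallel structure mentioned above (an assumption about the general shape, not the paper's product partition model): the same MCMC kernel, here a toy Gibbs sampler for a two-component Gaussian mixture with unit variances, is run independently on data shards; the step that recombines shard-level results into a global clustering is the paper's contribution and is omitted here.

import math
import random
from multiprocessing import Pool

def gibbs_shard(args):
    data, seed, iters = args
    rng = random.Random(seed)
    mu = [min(data), max(data)]                     # crude initialisation of the two means
    for _ in range(iters):
        groups = ([], [])
        for x in data:                              # sample allocations given the means (equal weights)
            p0 = math.exp(-0.5 * (x - mu[0]) ** 2)
            p1 = math.exp(-0.5 * (x - mu[1]) ** 2)
            groups[0 if rng.random() < p0 / (p0 + p1) else 1].append(x)
        for j in (0, 1):                            # sample means given allocations, N(0, 10^2) prior
            n, s = len(groups[j]), sum(groups[j])
            var = 1.0 / (n + 0.01)
            mu[j] = rng.gauss(var * s, var ** 0.5)
    return sorted(mu)

if __name__ == "__main__":
    data = ([random.gauss(-2, 1) for _ in range(500)]
            + [random.gauss(3, 1) for _ in range(500)])
    random.shuffle(data)
    shards = [data[i::4] for i in range(4)]         # four shards, the same kernel on each
    with Pool(4) as pool:
        print(pool.map(gibbs_shard, [(s, i, 200) for i, s in enumerate(shards)]))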
Approximate Bayesian computation (ABC) is computationally intensive for complex model simulators. To exploit expensive simulations, data-resampling via bootstrapping can be employed to obtain many artificial datasets at little cost. However, when using this approach within ABC, the posterior variance is inflated, thus resulting in biased posterior inference. Here we use stratified Monte Carlo to considerably reduce the bias induced by data resampling. We also show empirically that it is possible to obtain reliable inference using a larger than usual ABC threshold. Finally, we show that with stratified Monte Carlo we obtain a less variable ABC likelihood. Ultimately we show how our approach improves the computational efficiency of the ABC samplers. We construct several ABC samplers employing our methodology, such as rejection and importance ABC samplers, and ABC-MCMC samplers. We consider simulation studies for static (Gaussian, g-and-k distribution, Ising model, astronomical model) and dynamic models (Lotka-Volterra). We compare against state-of-the-art sequential Monte Carlo ABC samplers, synthetic likelihoods, and likelihood-free Bayesian optimization. For a computationally expensive Lotka-Volterra case study, we found that our strategy leads to a more than 10-fold computational saving, compared to a sampler that does not use our novel approach.
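A minimal sketch of the resampling idea only (illustrative code with assumed names and settings, not the paper's stratified estimator): each expensive simulator call is recycled into B cheap bootstrap pseudo-datasets, and the parameter draw is weighted by the fraction of bootstrapped summaries falling within the ABC tolerance. As the abstract notes, plain resampling of this kind inflates the posterior variance; the stratified Monte Carlo correction proposed in the paper to reduce the resulting bias is not reproduced here.

import random

def abc_with_bootstrap(y_obs, prior_sample, simulate, summary, eps, n_draws=5_000, B=50):
    s_obs = summary(y_obs)
    weighted = []
    for _ in range(n_draws):
        theta = prior_sample()
        y_sim = simulate(theta)                     # one expensive simulation per draw
        hits = 0
        for _ in range(B):                          # B cheap bootstrap resamples of y_sim
            y_boot = [random.choice(y_sim) for _ in y_sim]
            hits += abs(summary(y_boot) - s_obs) < eps
        if hits:
            weighted.append((theta, hits / B))      # weight = estimated acceptance probability
    return weighted

# toy usage: normal mean with the sample mean as summary
y_obs = [random.gauss(1.0, 1.0) for _ in range(30)]
out = abc_with_bootstrap(
    y_obs,
    prior_sample=lambda: random.uniform(-5, 5),
    simulate=lambda th: [random.gauss(th, 1.0) for _ in range(30)],
    summary=lambda y: sum(y) / len(y),
    eps=0.3,
)
print(len(out), sum(t * w for t, w in out) / sum(w for _, w in out))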
