
Bayesian Inference on Mixtures of Distributions

Published by Jean-Michel Marin
Publication date: 2008
Research field: Mathematical Statistics
Paper language: English





This survey covers state-of-the-art Bayesian techniques for the estimation of mixtures. It complements the earlier Marin, Mengersen and Robert (2005) by studying new types of distributions: the multinomial, latent class and t distributions. It also exhibits closed-form solutions for Bayesian inference in some discrete setups. Lastly, it sheds new light on the computation of Bayes factors via the approximation of Chib (1995).
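The Gibbs-sampling approach that runs through this literature can be sketched for the simplest case: a two-component Gaussian mixture with known unit variances and conjugate priors. The setup below (prior variances, data values, initialisation) is my own illustrative choice, not the survey's exact model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data: two-component Gaussian mixture with unit variances.
n = 200
z_true = rng.random(n) < 0.4
x = np.where(z_true, rng.normal(-2.0, 1.0, n), rng.normal(2.0, 1.0, n))

def gibbs_mixture(x, iters=2000, prior_var=10.0):
    """Gibbs sampler under conjugate priors: mu_k ~ N(0, prior_var), p ~ Beta(1,1)."""
    n = len(x)
    mu = np.array([-1.0, 1.0])
    p = 0.5
    draws = []
    for _ in range(iters):
        # 1. Sample allocations z | mu, p  (z=True means component 0).
        w0 = p * np.exp(-0.5 * (x - mu[0]) ** 2)
        w1 = (1 - p) * np.exp(-0.5 * (x - mu[1]) ** 2)
        z = rng.random(n) < w0 / (w0 + w1)
        # 2. Sample the weight p | z from its Beta posterior.
        n0 = int(z.sum())
        p = rng.beta(1 + n0, 1 + n - n0)
        # 3. Sample each mean from its conjugate Gaussian posterior.
        for k, mask in enumerate((z, ~z)):
            nk = mask.sum()
            var = 1.0 / (nk + 1.0 / prior_var)
            mu[k] = rng.normal(var * x[mask].sum(), np.sqrt(var))
        draws.append(mu.copy())
    return np.array(draws)

post_mu = gibbs_mixture(x)[500:].mean(axis=0)  # posterior means after burn-in
```

With well-separated components the chain settles quickly; the sorted posterior means recover the generating values near -2 and 2 up to label switching.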




Read also

In this paper, we show how a complete and exact Bayesian analysis of a parametric mixture model is possible in some cases when the components of the mixture are taken from exponential families and when conjugate priors are used. This restricted set-up allows us to show the relevance of the Bayesian approach and to exhibit the limitations of a complete analysis: such an analysis is impossible when the sample size is too large, when the data are not from an exponential family, or when priors more complex than conjugate priors are used.
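The sample-size limitation the abstract mentions comes from summing over all 2^n allocation vectors. Under conjugacy each allocation has a closed-form marginal likelihood, so for a tiny sample the exact posterior can be computed by brute force. The two-component Bernoulli mixture with Beta(1,1) priors below is an illustrative setup of mine, not the paper's exact model:

```python
from itertools import product
from math import lgamma, exp

def log_beta(a, b):
    """Log of the Beta function B(a, b)."""
    return lgamma(a) + lgamma(b) - lgamma(a + b)

# Tiny binary sample: exact analysis needs 2^n terms, so n must stay small.
x = [1, 1, 1, 0, 0, 1, 0, 1]
n = len(x)

num = den = 0.0
for z in product([0, 1], repeat=n):            # every allocation vector
    n1 = sum(z)
    s1 = sum(xi for xi, zi in zip(x, z) if zi)
    n2, s2 = n - n1, sum(x) - s1
    # Closed-form marginal likelihood of this allocation under Beta(1,1)
    # priors on both success probabilities and on the mixture weight.
    lw = (log_beta(1 + n1, 1 + n2)
          + log_beta(1 + s1, 1 + n1 - s1)
          + log_beta(1 + s2, 1 + n2 - s2))
    w = exp(lw)
    den += w
    num += w * (1 + s1) / (2 + n1)             # E[theta_1 | z] under conjugacy

theta1_mean = num / den                        # exact posterior mean of theta_1
```

Already at n = 30 the loop would need about a billion terms, which is exactly the blow-up that rules out a complete exact analysis for realistic sample sizes.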
Daniel Yekutieli (2011)
We address the problem of providing inference from a Bayesian perspective for parameters selected after viewing the data. We present a Bayesian framework for providing inference for selected parameters, based on the observation that providing Bayesian inference for selected parameters is a truncated-data problem. We show that if the prior for the parameter is non-informative, or if the parameter is a fixed unknown constant, then it is necessary to adjust the Bayesian inference for selection. Our second contribution is the introduction of Bayesian False Discovery Rate controlling methodology, which generalizes existing Bayesian FDR methods that are only defined in the two-group mixture model. We illustrate our results by applying them to simulated data and to data from a microarray experiment.
Bayesian inference for nonlinear diffusions, observed at discrete times, is a challenging task that has prompted the development of a number of algorithms, mainly within the computational statistics community. We propose a new direction, and accompanying methodology, borrowing ideas from statistical physics and computational chemistry, for inferring the posterior distribution of latent diffusion paths and model parameters, given observations of the process. Joint configurations of the underlying process noise and of parameters, mapping onto diffusion paths consistent with observations, form an implicitly defined manifold. Then, by making use of a constrained Hamiltonian Monte Carlo algorithm on the embedded manifold, we are able to perform computationally efficient inference for an extensive class of discretely observed diffusion models. Critically, in contrast with other approaches proposed in the literature, our methodology is highly automated, requiring minimal user intervention and applying alike in a range of settings, including: elliptic or hypo-elliptic systems; observations with or without noise; linear or non-linear observation operators. Exploiting Markovianity, we propose a variant of the method with complexity that scales linearly in the resolution of path discretisation and the number of observation times.
We study the class of state-space models and perform maximum likelihood estimation for the model parameters. We consider a stochastic approximation expectation-maximization (SAEM) algorithm to maximize the likelihood function, with the novelty of using approximate Bayesian computation (ABC) within SAEM. The task is to provide each iteration of SAEM with a filtered state of the system, and this is achieved using an ABC sampler for the hidden state, based on sequential Monte Carlo (SMC) methodology. It is shown that the resulting SAEM-ABC algorithm can be calibrated to return accurate inference, and in some situations it can outperform a version of SAEM incorporating the bootstrap filter. Two simulation studies are presented: first a nonlinear Gaussian state-space model, then a state-space model whose dynamics are expressed by a stochastic differential equation. Comparisons with iterated filtering for maximum likelihood inference, and with Gibbs sampling and particle marginal methods for Bayesian inference, are presented.
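The bootstrap filter used as a baseline in this comparison can be sketched on a toy linear-Gaussian state-space model (a setup of mine; the paper's ABC variant replaces the exact observation-density weights with kernel-based ones):

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy linear-Gaussian state-space model: x_t = 0.9 x_{t-1} + w_t, y_t = x_t + v_t,
# with w_t, v_t ~ N(0, 0.25).
T, N = 100, 1000
x = np.zeros(T)
for t in range(1, T):
    x[t] = 0.9 * x[t - 1] + rng.normal(0.0, 0.5)
y = x + rng.normal(0.0, 0.5, T)

# Bootstrap particle filter: propagate with the transition density,
# weight by the observation density, then resample.
particles = rng.normal(0.0, 1.0, N)
filt_means = np.zeros(T)
for t in range(T):
    particles = 0.9 * particles + rng.normal(0.0, 0.5, N)   # propagate
    logw = -0.5 * ((y[t] - particles) / 0.5) ** 2           # N(y_t | x_t, 0.25)
    w = np.exp(logw - logw.max())
    w /= w.sum()
    filt_means[t] = np.sum(w * particles)                   # filtered mean E[x_t | y_1:t]
    particles = particles[rng.choice(N, N, p=w)]            # multinomial resampling
```

It is these filtered states, one batch per iteration, that an SAEM-type scheme consumes when maximizing the likelihood.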
SDRcausal is a package that implements sufficient dimension reduction methods for causal inference as proposed in Ghosh, Ma, and de Luna (2021). The package implements (augmented) inverse probability weighting and outcome regression (imputation) estimators of an average treatment effect (ATE) parameter. Nuisance models, both the treatment assignment probability given the covariates (propensity score) and the outcome regression models, are fitted using semiparametric locally efficient dimension reduction estimators, thereby allowing for large sets of confounding covariates. Techniques including linear extrapolation, numerical differentiation, and truncation have been used to obtain a practicable implementation of the methods. Finding the suitable dimension reduction map (central mean subspace) requires solving an optimization problem, and several optimization algorithms are offered as choices to the user. The package also provides estimators of the asymptotic variances of the implemented causal effect estimators. Plotting options are provided. The core of the methods is implemented in C, with optional parallelization, and the user-friendly, free R language serves as the interface. The package can be downloaded from the Github repository: https://github.com/stat4reg.
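The inverse probability weighting estimator at the package's core can be sketched with a plain logistic propensity model; this simulated setup and simple gradient-ascent fit are my stand-ins for the semiparametric dimension-reduction estimators the package actually uses:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated observational data: confounder c drives both treatment and outcome.
n = 5000
c = rng.normal(size=n)
t = rng.random(n) < 1.0 / (1.0 + np.exp(-c))   # true propensity = sigmoid(c)
y = 2.0 * t + c + rng.normal(size=n)           # true ATE = 2

# Fit a logistic propensity score by gradient ascent on the log-likelihood.
X = np.column_stack([np.ones(n), c])
beta = np.zeros(2)
for _ in range(300):
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    beta += 0.5 * X.T @ (t - p) / n

p_hat = 1.0 / (1.0 + np.exp(-X @ beta))
# Inverse probability weighting estimator of the ATE.
ate_ipw = np.mean(t * y / p_hat - (~t) * y / (1.0 - p_hat))
# Naive difference in means, biased upward by confounding through c.
ate_naive = y[t].mean() - y[~t].mean()
```

Reweighting by the estimated propensity score removes most of the confounding bias that the naive difference in means suffers from; the package's contribution is doing this with a low-dimensional sufficient reduction of a large covariate set rather than a parametric model.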