
On Bayesian inferential tasks with infinite-state jump processes: efficient data augmentation

Added by: Theodore Kypraios
Publication date: 2018
Language: English





Advances in sampling schemes for Markov jump processes have recently enabled multiple inferential tasks. However, in statistical and machine learning applications, we often require that these continuous-time models find support on structured and infinite state spaces. In these cases, exact sampling may only be achieved by often inefficient particle filtering procedures, and rapidly augmenting observed datasets remains a significant challenge. Here, we build on the principles of uniformization and present a tractable framework to address this problem, which greatly improves the efficiency of existing state-of-the-art methods commonly used in small finite-state systems, and further scales their use to infinite-state scenarios. We capitalize on the marginal role of variable subsets in a model hierarchy during the process jumps, and describe an algorithm that relies on measurable mappings between pairs of states and carefully designed sets of synthetic jump observations. The proposed method enables the efficient integration of slice sampling techniques and it can overcome the existing computational bottleneck. We offer evidence by means of experiments addressing inference and clustering tasks on both simulated and real data sets.
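
For orientation, here is a minimal sketch of classical uniformization for forward-simulating a small finite-state Markov jump process in Python. It illustrates only the dominating Poisson construction that the abstract builds on, not the conditional path samplers, the slice-sampling integration, or the infinite-state machinery proposed in the paper; the function name and the choice of dominating rate Omega = 1.1 * max_i |Q_ii| are illustrative assumptions.

import numpy as np

def uniformized_path(Q, x0, T, rng=None):
    """Simulate a finite-state Markov jump process on [0, T] by uniformization.

    Q  : generator matrix (rows sum to zero), shape (S, S)
    x0 : initial state index
    Returns (times, states): jump times (starting at t = 0) and the state entered at each time.
    """
    rng = np.random.default_rng(rng)
    S = Q.shape[0]
    Omega = 1.1 * np.max(-np.diag(Q))      # dominating rate, must satisfy Omega >= max_i |Q_ii|
    B = np.eye(S) + Q / Omega              # subordinated discrete-time transition kernel

    n = rng.poisson(Omega * T)             # number of candidate (real + virtual) jump times
    t_cand = np.sort(rng.uniform(0.0, T, size=n))

    times, states = [0.0], [x0]
    x = x0
    for t in t_cand:
        x_new = rng.choice(S, p=B[x])      # self-transitions correspond to "virtual" jumps
        if x_new != x:                     # keep only the real jumps in the returned skeleton
            times.append(t)
            states.append(x_new)
            x = x_new
    return np.array(times), np.array(states)
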

Related research


We introduce the UPG package for highly efficient Bayesian inference in probit, logit, multinomial logit and binomial logit models. UPG offers a convenient estimation framework for balanced and imbalanced data settings where sampling efficiency is ensured through Markov chain Monte Carlo boosting methods. All sampling algorithms are implemented in C++, allowing for rapid parameter estimation. In addition, UPG provides several methods for fast production of output tables and summary plots that are easily accessible to a broad range of users.
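
As background on the kind of data-augmentation sampler such a package wraps, below is a minimal Albert and Chib (1993) Gibbs sketch for a Bayesian probit model in Python. This is the generic textbook scheme, not UPG's actual C++ implementation or its boosting steps; the function name, prior, and defaults are illustrative assumptions.

import numpy as np
from scipy.stats import truncnorm

def probit_gibbs(y, X, n_iter=2000, prior_var=100.0, rng=None):
    """Albert & Chib (1993) data augmentation for probit regression, beta ~ N(0, prior_var * I)."""
    rng = np.random.default_rng(rng)
    n, p = X.shape
    B_inv = np.eye(p) / prior_var
    V = np.linalg.inv(X.T @ X + B_inv)      # posterior covariance of beta given the latent z
    L = np.linalg.cholesky(V)
    beta = np.zeros(p)
    draws = np.empty((n_iter, p))
    for it in range(n_iter):
        mu = X @ beta
        # latent utilities: truncated normals, positive where y = 1 and negative where y = 0
        lo = np.where(y == 1, -mu, -np.inf)
        hi = np.where(y == 1, np.inf, -mu)
        z = truncnorm.rvs(lo, hi, loc=mu, scale=1.0, random_state=rng)
        beta = V @ (X.T @ z) + L @ rng.standard_normal(p)
        draws[it] = beta
    return draws
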
We consider Particle Gibbs (PG) as a tool for Bayesian analysis of non-linear non-Gaussian state-space models. PG is a Monte Carlo (MC) approximation of the standard Gibbs procedure which uses sequential MC (SMC) importance sampling inside the Gibbs procedure to update the latent and potentially high-dimensional state trajectories. We propose to combine PG with a generic and easily implementable SMC approach known as Particle Efficient Importance Sampling (PEIS). By using SMC importance sampling densities which are approximately fully globally adapted to the targeted density of the states, PEIS can substantially improve the mixing and the efficiency of the PG draws from the posterior of the states and the parameters relative to existing PG implementations. The efficiency gains achieved by PEIS are illustrated in PG applications to a univariate stochastic volatility model for asset returns, a non-Gaussian nonlinear local-level model for interest rates, and a multivariate stochastic volatility model for the realized covariance matrix of asset returns.
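
To make the SMC ingredient concrete, the following is a minimal bootstrap particle filter for a basic stochastic volatility model in Python. The bootstrap proposal is exactly the "blind" density that PEIS replaces with a globally adapted one, so this is background rather than the paper's method; the model parameterization, default parameter values, and function name are assumptions.

import numpy as np

def bootstrap_filter_sv(y, mu=-1.0, phi=0.97, sigma=0.2, n_part=1000, rng=None):
    """Bootstrap SMC for the stochastic volatility model
       h_t = mu + phi*(h_{t-1} - mu) + sigma*eta_t,   y_t | h_t ~ N(0, exp(h_t)).
    Returns a log-likelihood estimate and the filtered means of h_t."""
    rng = np.random.default_rng(rng)
    T = len(y)
    # initialize from the stationary distribution of the AR(1) log-volatility
    h = mu + sigma / np.sqrt(1.0 - phi**2) * rng.standard_normal(n_part)
    loglik, h_mean = 0.0, np.empty(T)
    for t in range(T):
        # propagate with the transition density (the "blind" bootstrap proposal)
        h = mu + phi * (h - mu) + sigma * rng.standard_normal(n_part)
        # weight by the measurement density N(y_t; 0, exp(h))
        logw = -0.5 * (np.log(2 * np.pi) + h + y[t] ** 2 * np.exp(-h))
        w = np.exp(logw - logw.max())
        loglik += logw.max() + np.log(w.mean())
        w /= w.sum()
        h_mean[t] = np.sum(w * h)
        # multinomial resampling
        h = h[rng.choice(n_part, size=n_part, p=w)]
    return loglik, h_mean
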
This survey covers state-of-the-art Bayesian techniques for the estimation of mixtures. It complements the earlier Marin, Mengersen and Robert (2005) by studying new types of distributions, the multinomial, latent class and t distributions. It also exhibits closed form solutions for Bayesian inference in some discrete setups. Lastly, it sheds a new light on the computation of Bayes factors via the approximation of Chib (1995).
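
For readers unfamiliar with the Chib (1995) approximation mentioned above, it rests on the basic marginal likelihood identity, evaluated at a single high-posterior-density point \theta^{*}:

\log \hat{m}(y) = \log f(y \mid \theta^{*}) + \log \pi(\theta^{*}) - \log \hat{\pi}(\theta^{*} \mid y),
\qquad
\widehat{B}_{12} = \frac{\hat{m}_{1}(y)}{\hat{m}_{2}(y)},

where the posterior ordinate \hat{\pi}(\theta^{*} \mid y) is estimated from the Gibbs output (for example by Rao-Blackwellizing the full conditionals), and the Bayes factor comparing two candidate models is the ratio of the resulting marginal likelihood estimates.
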
We study the class of state-space models and perform maximum likelihood estimation for the model parameters. We consider a stochastic approximation expectation-maximization (SAEM) algorithm to maximize the likelihood function with the novelty of using approximate Bayesian computation (ABC) within SAEM. The task is to provide each iteration of SAEM with a filtered state of the system, and this is achieved using an ABC sampler for the hidden state, based on sequential Monte Carlo (SMC) methodology. It is shown that the resulting SAEM-ABC algorithm can be calibrated to return accurate inference, and in some situations it can outperform a version of SAEM incorporating the bootstrap filter. Two simulation studies are presented, first a nonlinear Gaussian state-space model then a state-space model having dynamics expressed by a stochastic differential equation. Comparisons with iterated filtering for maximum likelihood inference, and Gibbs sampling and particle marginal methods for Bayesian inference are presented.
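
A minimal sketch of the ABC filtering idea used inside SAEM-ABC, written in Python: particles are propagated through the state transition, pseudo-observations are simulated from the measurement model, and weights come from a kernel comparing them with the actual observation. The Gaussian kernel, the assumption of scalar observations, and all names are illustrative choices rather than the paper's exact algorithm.

import numpy as np

def abc_filter_step(particles, y_obs, transition, emit, eps, rng=None):
    """One ABC-SMC filtering step for a generic state-space model.

    particles  : array of current states, shape (N,) or (N, d)
    transition : function(states, rng) -> propagated states
    emit       : function(states, rng) -> simulated pseudo-observations (scalar per particle)
    eps        : ABC tolerance (bandwidth of the Gaussian kernel)
    """
    rng = np.random.default_rng(rng)
    N = len(particles)
    x_new = transition(particles, rng)              # propagate the hidden state forward
    y_sim = emit(x_new, rng)                        # simulate data from each particle
    # Gaussian ABC kernel: favour particles whose pseudo-data are close to y_obs
    logw = -0.5 * ((y_sim - y_obs) / eps) ** 2
    w = np.exp(logw - logw.max())
    w /= w.sum()
    idx = rng.choice(N, size=N, p=w)                # resample to obtain the filtered cloud
    return x_new[idx]
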
Modeling binary and categorical data is one of the most commonly encountered tasks of applied statisticians and econometricians. While Bayesian methods in this context have been available for decades now, they often require a high level of familiarity with Bayesian statistics or suffer from issues such as low sampling efficiency. To contribute to the accessibility of Bayesian models for binary and categorical data, we introduce novel latent variable representations based on Polya Gamma random variables for a range of commonly encountered discrete choice models. From these latent variable representations, new Gibbs sampling algorithms for binary, binomial and multinomial logistic regression models are derived. All models allow for a conditionally Gaussian likelihood representation, rendering extensions to more complex modeling frameworks such as state space models straight-forward. However, sampling efficiency may still be an issue in these data augmentation based estimation frameworks. To counteract this, MCMC boosting strategies are developed and discussed in detail. The merits of our approach are illustrated through extensive simulations and a real data application.
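
A minimal sketch of the Polya-Gamma Gibbs sampler for binary logistic regression in Python, assuming the third-party polyagamma package for the PG(1, x_i' beta) draws (its exact interface may differ). It illustrates the conditionally Gaussian representation only and omits the MCMC boosting strategies developed in the paper; the prior, defaults, and names are illustrative.

import numpy as np
from polyagamma import random_polyagamma   # assumed third-party sampler for PG(b, c) draws

def logistic_gibbs_pg(y, X, n_iter=2000, prior_var=100.0, rng=None):
    """Polson, Scott & Windle (2013) Polya-Gamma augmentation for logistic regression,
    with a N(0, prior_var * I) prior on the coefficients."""
    rng = np.random.default_rng(rng)
    n, p = X.shape
    B_inv = np.eye(p) / prior_var
    kappa = y - 0.5                         # fixed part of the conditionally Gaussian likelihood
    beta = np.zeros(p)
    draws = np.empty((n_iter, p))
    for it in range(n_iter):
        omega = random_polyagamma(1.0, X @ beta, random_state=rng)   # omega_i | beta ~ PG(1, x_i' beta)
        V = np.linalg.inv(X.T @ (omega[:, None] * X) + B_inv)        # beta | omega, y is exactly Gaussian
        m = V @ (X.T @ kappa)
        beta = m + np.linalg.cholesky(V) @ rng.standard_normal(p)
        draws[it] = beta
    return draws
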
