
Sampling latent states for high-dimensional non-linear state space models with the embedded HMM method

Published by Radford M. Neal
Publication date: 2016
Research field: Mathematical Statistics
Paper language: English





We propose a new scheme for selecting pool states for the embedded Hidden Markov Model (HMM) Markov Chain Monte Carlo (MCMC) method. This new scheme allows the embedded HMM method to be used for efficient sampling in state space models where the state can be high-dimensional. Previously, embedded HMM methods were only applied to models with a one-dimensional state space. We demonstrate that using our proposed pool state selection scheme, an embedded HMM sampler can have similar performance to a well-tuned sampler that uses a combination of Particle Gibbs with Backward Sampling (PGBS) and Metropolis updates. The scaling to higher dimensions is made possible by selecting pool states locally, near the current value of the state sequence. The proposed pool state selection scheme also allows each iteration of the embedded HMM sampler to take time linear in the number of pool states, as opposed to quadratic as in the original embedded HMM sampler. We also consider a model with a multimodal posterior, and show how a technique we term mirroring can be used to efficiently move between the modes.
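To make the pool-state idea concrete, here is a minimal sketch (not the authors' code) of one embedded-HMM update with locally selected pool states. It assumes a generic model with hypothetical, vectorized density functions trans_logpdf(x_prev, x) and obs_logpdf(y_t, x), draws pool states from a Gaussian centred at the current state, and uses the standard O(K^2)-per-step forward pass rather than the paper's linear-time scheme; details needed for exact correctness (such as placing the current state at a random index within the pool) are omitted.

```python
import numpy as np
from scipy.special import logsumexp

def embedded_hmm_update(x_cur, y, trans_logpdf, obs_logpdf,
                        n_pool=10, pool_sd=0.5, rng=None):
    """One embedded-HMM update of the latent sequence x_cur (shape T x d)."""
    rng = np.random.default_rng() if rng is None else rng
    T, d = x_cur.shape
    K = n_pool

    # 1. Local pool selection: each time step's pool contains the current
    #    state plus K-1 Gaussian perturbations of it.
    pools = np.empty((T, K, d))
    pools[:, 0] = x_cur
    pools[:, 1:] = x_cur[:, None] + pool_sd * rng.standard_normal((T, K - 1, d))

    # Log-density of the pool distribution at each pool state, up to constants
    # shared within a time step (those cancel when sampling below).
    log_kappa = -0.5 * np.sum(((pools - x_cur[:, None]) / pool_sd) ** 2, axis=-1)

    # 2. Forward pass over the discrete HMM whose states are the pool values.
    log_alpha = np.empty((T, K))
    log_alpha[0] = obs_logpdf(y[0], pools[0]) - log_kappa[0]
    for t in range(1, T):
        log_trans = trans_logpdf(pools[t - 1][:, None], pools[t][None, :])
        log_alpha[t] = (obs_logpdf(y[t], pools[t]) - log_kappa[t]
                        + logsumexp(log_alpha[t - 1][:, None] + log_trans, axis=0))

    # 3. Backward sampling of one new sequence through the pools.
    x_new = np.empty_like(x_cur)
    p = np.exp(log_alpha[-1] - logsumexp(log_alpha[-1])); p /= p.sum()
    k = rng.choice(K, p=p)
    x_new[-1] = pools[-1, k]
    for t in range(T - 2, -1, -1):
        log_w = log_alpha[t] + trans_logpdf(pools[t], x_new[t + 1][None, :])
        p = np.exp(log_w - logsumexp(log_w)); p /= p.sum()
        k = rng.choice(K, p=p)
        x_new[t] = pools[t, k]
    return x_new
```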


Read also

Non-linear state space models are a widely-used class of models for biological, economic, and physical processes. Fitting these models to observed data is a difficult inference problem with no straightforward solution. We take a Bayesian approach to inferring the unknown parameters of a non-linear state space model; this, in turn, requires efficient Markov Chain Monte Carlo (MCMC) sampling methods for the latent (hidden) variables and model parameters. Using the ensemble technique of Neal (2010) and the embedded HMM technique of Neal (2003), we introduce a new MCMC method for non-linear state space models. The key idea is to perform parameter updates conditional on an enormously large ensemble of latent sequences, as opposed to the single sequence used by existing methods. We examine the performance of this ensemble method for Bayesian inference in the Ricker model of population dynamics. We show that for this problem, the ensemble method is vastly more efficient than a simple Metropolis method, as well as 1.9 to 12.0 times more efficient than a single-sequence embedded HMM method, when all methods are tuned appropriately. We also introduce a way of speeding up the ensemble method by performing partial backward passes to discard poor proposals at low computational cost, resulting in a final efficiency gain of 3.4 to 20.4 times over the single-sequence method.
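As a rough illustration of the ensemble idea, the sketch below (with hypothetical, vectorized density functions and scalar states) performs a Metropolis update of a parameter theta conditional on all K^T latent sequences passing through a fixed set of pool states, summing over them with the embedded-HMM forward recursion in O(TK^2) time; a flat prior on theta is assumed.

```python
import numpy as np
from scipy.special import logsumexp

def ensemble_loglik(theta, pools, y, init_logpdf, trans_logpdf, obs_logpdf):
    """Log of the summed posterior weight of all K^T sequences through the
    pools (pool-density terms omitted: they do not depend on theta)."""
    T, K = pools.shape
    log_a = init_logpdf(pools[0], theta) + obs_logpdf(y[0], pools[0], theta)
    for t in range(1, T):
        log_trans = trans_logpdf(pools[t - 1][:, None], pools[t][None, :], theta)
        log_a = (obs_logpdf(y[t], pools[t], theta)
                 + logsumexp(log_a[:, None] + log_trans, axis=0))
    return logsumexp(log_a)

def ensemble_metropolis_step(theta, pools, y, densities, prop_sd=0.1, rng=None):
    """Random-walk Metropolis step on theta, conditioning on the ensemble."""
    rng = np.random.default_rng() if rng is None else rng
    theta_prop = theta + prop_sd * rng.standard_normal()
    log_r = (ensemble_loglik(theta_prop, pools, y, *densities)
             - ensemble_loglik(theta, pools, y, *densities))
    return theta_prop if np.log(rng.uniform()) < log_r else theta
```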
We consider Particle Gibbs (PG) as a tool for Bayesian analysis of non-linear non-Gaussian state-space models. PG is a Monte Carlo (MC) approximation of the standard Gibbs procedure which uses sequential MC (SMC) importance sampling inside the Gibbs procedure to update the latent and potentially high-dimensional state trajectories. We propose to combine PG with a generic and easily implementable SMC approach known as Particle Efficient Importance Sampling (PEIS). By using SMC importance sampling densities which are approximately fully globally adapted to the targeted density of the states, PEIS can substantially improve the mixing and the efficiency of the PG draws from the posterior of the states and the parameters relative to existing PG implementations. The efficiency gains achieved by PEIS are illustrated in PG applications to a univariate stochastic volatility model for asset returns, a non-Gaussian nonlinear local-level model for interest rates, and a multivariate stochastic volatility model for the realized covariance matrix of asset returns.
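For reference, a minimal bootstrap Particle Gibbs (conditional SMC) update might look like the sketch below; it uses the model transition as the proposal, whereas PEIS would substitute a globally adapted importance density. The state is scalar and the sampling/density functions are hypothetical names.

```python
import numpy as np

def particle_gibbs_update(x_ref, y, sample_init, sample_trans, obs_logpdf,
                          n_part=100, rng=None):
    """One conditional-SMC update of the reference trajectory x_ref (length T)."""
    rng = np.random.default_rng() if rng is None else rng
    T, N = len(y), n_part
    X = np.empty((T, N))                 # particle states
    A = np.zeros((T, N), dtype=int)      # ancestor indices
    X[0] = sample_init(N, rng)
    X[0, 0] = x_ref[0]                   # pin the reference path in slot 0
    logw = obs_logpdf(y[0], X[0])
    for t in range(1, T):
        w = np.exp(logw - logw.max())
        A[t] = rng.choice(N, size=N, p=w / w.sum())
        A[t, 0] = 0                      # the reference lineage survives resampling
        X[t] = sample_trans(X[t - 1, A[t]], rng)
        X[t, 0] = x_ref[t]
        logw = obs_logpdf(y[t], X[t])
    # Sample one trajectory by tracing ancestors back from a final particle.
    w = np.exp(logw - logw.max())
    k = rng.choice(N, p=w / w.sum())
    x_new = np.empty(T)
    for t in range(T - 1, -1, -1):
        x_new[t] = X[t, k]
        if t > 0:
            k = A[t, k]
    return x_new
```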
Given data, deep generative models, such as variational autoencoders (VAE) and generative adversarial networks (GAN), train a lower dimensional latent representation of the data space. The linear Euclidean geometry of data space pulls back to a nonlinear Riemannian geometry on the latent space. The latent space thus provides a low-dimensional nonlinear representation of data, and classical linear statistical techniques are no longer applicable. In this paper we show how statistics of data in their latent space representation can be computed using techniques from the field of nonlinear manifold statistics. Nonlinear manifold statistics provide generalizations of Euclidean statistical notions including means, principal component analysis, and maximum likelihood fits of parametric probability distributions. We develop new techniques for maximum likelihood inference in latent space, and address the computational complexity of using geometric algorithms with high-dimensional data by training a separate neural network to approximate the Riemannian metric and cometric tensor capturing the shape of the learned data manifold.
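As one example of the Euclidean-to-manifold generalization mentioned above, the Fréchet (intrinsic) mean replaces the arithmetic mean with the minimiser of summed squared geodesic distances. The sketch below uses a hypothetical geodesic_dist function standing in for distances under the learned metric, and searches only over the sample points themselves (a medoid-style approximation rather than a full optimisation over the latent space).

```python
import numpy as np

def frechet_mean(points, geodesic_dist):
    """Medoid-style Fréchet mean: the sample point minimising the sum of
    squared geodesic distances to all other points."""
    costs = [sum(geodesic_dist(c, p) ** 2 for p in points) for c in points]
    return points[int(np.argmin(costs))]
```

With geodesic_dist replaced by Euclidean distance, this recovers a discrete approximation of the ordinary mean.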
Carter T. Butts (2017)
Generation of deviates from random graph models with non-trivial edge dependence is an increasingly important problem. Here, we introduce a method which allows perfect sampling from random graph models in exponential family form (exponential family random graph models), using a variant of Coupling From The Past. We illustrate the use of the method via an application to the Markov graphs, a family that has been the subject of considerable research. We also show how the method can be applied to a variant of the biased net models, which are not exponentially parameterized.
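The flavour of Coupling From The Past can be conveyed by the monotone special case sketched below, for a toy dyad-update chain on graphs whose conditional edge probabilities edge_prob(g, e) (a hypothetical function) are assumed monotone in the graph; the paper's method handles general exponential family dependence with a bounding variant rather than this simple sandwich.

```python
import numpy as np

def monotone_cftp(n_nodes, edge_prob, seed=0):
    """Perfect sampling via monotone CFTP for a toy dyad-update chain."""
    edges = [(i, j) for i in range(n_nodes) for j in range(i + 1, n_nodes)]
    T = 1
    while True:
        # Fixed seed: the randomness driving the most recent steps is reused
        # every time we extend further into the past (us[t] drives time -(t+1)).
        us = np.random.default_rng(seed).uniform(size=(T, len(edges)))
        lo = {e: 0 for e in edges}   # lower chain starts from the empty graph
        hi = {e: 1 for e in edges}   # upper chain starts from the complete graph
        for t in range(T - 1, -1, -1):          # run from time -T up to time -1
            for k, e in enumerate(edges):
                lo[e] = int(us[t, k] < edge_prob(lo, e))
                hi[e] = int(us[t, k] < edge_prob(hi, e))
        if lo == hi:                 # chains coalesced: an exact draw
            return lo
        T *= 2                       # not yet: restart from further in the past
```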
In this paper we introduce a new sampling algorithm which has the potential to be adopted as a universal replacement for the Metropolis-Hastings algorithm. It is related to the slice sampler, and is motivated by an algorithm for discrete probability distributions which obviates the need for a proposal distribution, in that it has no accept/reject component. This paper looks at the continuous counterpart. A latent variable combined with a slice sampler and a shrinkage procedure applied to uniform density functions creates a highly efficient sampler which can generate random variables from very high dimensional distributions as a single block.
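To illustrate the shrinkage procedure mentioned above (in the style of Neal's 2003 slice sampler, not the paper's exact algorithm), the sketch below samples uniformly from a shrinking axis-aligned box until the proposal lands inside the slice; there is no Metropolis-style accept/reject step, since every returned point is an exact update.

```python
import numpy as np

def slice_sample_shrink(x, logpdf, width=2.0, rng=None):
    """One multivariate slice-sampling update with shrinkage (after Neal 2003)."""
    rng = np.random.default_rng() if rng is None else rng
    d = x.shape[0]
    log_u = logpdf(x) + np.log(rng.uniform())      # slice level under pi(x)
    lo = x - width * rng.uniform(size=d)           # random box containing x
    hi = lo + width
    while True:
        x_new = rng.uniform(lo, hi)                # uniform draw from the box
        if logpdf(x_new) > log_u:                  # inside the slice: done
            return x_new
        # Otherwise shrink the box toward x, coordinate by coordinate,
        # so x always stays inside and the loop terminates.
        below = x_new < x
        lo[below] = x_new[below]
        hi[~below] = x_new[~below]
```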
