
Conditional sequential Monte Carlo in high dimensions

Published by: Axel Finke
Publication date: 2021
Research field: Mathematical Statistics
Paper language: English

The iterated conditional sequential Monte Carlo (i-CSMC) algorithm from Andrieu, Doucet and Holenstein (2010) is an MCMC approach for efficiently sampling from the joint posterior distribution of the $T$ latent states in challenging time-series models, e.g. in non-linear or non-Gaussian state-space models. It is also the main ingredient in particle Gibbs samplers which infer unknown model parameters alongside the latent states. In this work, we first prove that the i-CSMC algorithm suffers from a curse of dimension in the dimension of the states, $D$: it breaks down unless the number of samples (particles), $N$, proposed by the algorithm grows exponentially with $D$. Then, we present a novel local version of the algorithm which proposes particles using Gaussian random-walk moves that are suitably scaled with $D$. We prove that this iterated random-walk conditional sequential Monte Carlo (i-RW-CSMC) algorithm avoids the curse of dimension: for arbitrary $N$, its acceptance rates and expected squared jumping distance converge to non-trivial limits as $D \to \infty$. If $T = N = 1$, our proposed algorithm reduces to a Metropolis--Hastings or Barker's algorithm with Gaussian random-walk moves and we recover the well-known scaling limits for such algorithms.
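
As a point of reference for the $T = N = 1$ case mentioned at the end of the abstract, the sketch below implements a Gaussian random-walk move accepted with the Barker rule, with the proposal standard deviation scaled as $D^{-1/2}$. This is only a minimal illustration of the dimension scaling being discussed; the full i-RW-CSMC sweep (conditioning on a reference trajectory, resampling, trajectory selection) is not reproduced here, and the step-size constant and toy target are arbitrary choices.

```python
import numpy as np


def rw_barker_step(x, log_target, step=2.4, rng=None):
    """One Gaussian random-walk move accepted with the Barker rule.

    `log_target` is any function returning the log of an unnormalised
    density on R^D.  The proposal standard deviation is scaled as
    D**(-1/2), the dimension scaling discussed in the abstract above.
    """
    rng = np.random.default_rng() if rng is None else rng
    D = x.shape[0]
    proposal = x + (step / np.sqrt(D)) * rng.standard_normal(D)

    # Barker acceptance probability pi(y) / (pi(x) + pi(y)),
    # i.e. a sigmoid of the log target ratio, computed stably
    log_ratio = log_target(proposal) - log_target(x)
    accept_prob = np.exp(-np.logaddexp(0.0, -log_ratio))

    if rng.uniform() < accept_prob:
        return proposal, True
    return x, False


if __name__ == "__main__":
    # toy check on a standard Gaussian target in D = 50 dimensions
    D = 50
    log_target = lambda z: -0.5 * np.dot(z, z)
    x = np.zeros(D)
    rng = np.random.default_rng(0)
    n_accept = 0
    for _ in range(5000):
        x, accepted = rw_barker_step(x, log_target, rng=rng)
        n_accept += accepted
    print("Barker acceptance rate:", n_accept / 5000)
```
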




Read also

We propose a Markov chain Monte Carlo (MCMC) scheme to perform state inference in non-linear non-Gaussian state-space models. Current state-of-the-art methods to address this problem rely on particle MCMC techniques and its variants, such as the iterated conditional Sequential Monte Carlo (cSMC) scheme, which uses a Sequential Monte Carlo (SMC) type proposal within MCMC. A deficiency of standard SMC proposals is that they only use observations up to time $t$ to propose states at time $t$ when an entire observation sequence is available. More sophisticated SMC based on lookahead techniques could be used but they can be difficult to put in practice. We propose here replica cSMC where we build SMC proposals for one replica using information from the entire observation sequence by conditioning on the states of the other replicas. This approach is easily parallelizable and we demonstrate its excellent empirical performance when compared to the standard iterated cSMC scheme at fixed computational complexity.
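
To make the deficiency of standard SMC proposals concrete, here is a minimal conditional SMC sweep with the bootstrap proposal for a toy AR(1)-plus-noise model. This is the baseline being improved upon, not the replica scheme itself; the model, parameter names and resampling choices are assumptions made purely for illustration.

```python
import numpy as np


def bootstrap_csmc_sweep(y, ref_path, N=100, rho=0.9, sigma_x=1.0, sigma_y=1.0, rng=None):
    """Conditional SMC sweep with the standard bootstrap proposal.

    Toy AR(1)-plus-noise model (all parameters are placeholders):
        x_0 ~ N(0, sigma_x^2),
        x_t = rho * x_{t-1} + sigma_x * eps_t,
        y_t = x_t + sigma_y * eta_t.
    States at time t are proposed from the transition density alone, so
    future observations y_{t+1:T} never enter the proposal -- the
    limitation discussed in the paragraph above.
    """
    rng = np.random.default_rng() if rng is None else rng
    T = len(y)
    X = np.empty((T, N))                 # particles
    A = np.zeros((T, N), dtype=int)      # ancestor indices
    logw = np.zeros(N)

    for t in range(T):
        if t == 0:
            X[t] = sigma_x * rng.standard_normal(N)
        else:
            # multinomial resampling; the reference particle keeps its own ancestor
            probs = np.exp(logw - logw.max())
            probs /= probs.sum()
            A[t] = rng.choice(N, size=N, p=probs)
            A[t, 0] = 0
            X[t] = rho * X[t - 1, A[t]] + sigma_x * rng.standard_normal(N)
        X[t, 0] = ref_path[t]            # pin particle 0 to the reference path
        logw = -0.5 * ((y[t] - X[t]) / sigma_y) ** 2  # bootstrap weight: likelihood of y_t only

    # draw one index from the final weights and trace its ancestry back
    probs = np.exp(logw - logw.max())
    probs /= probs.sum()
    k = rng.choice(N, p=probs)
    new_path = np.empty(T)
    for t in reversed(range(T)):
        new_path[t] = X[t, k]
        k = A[t, k]
    return new_path
```
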
The likelihood-free sequential Approximate Bayesian Computation (ABC) algorithms are increasingly popular inference tools for complex biological models. Such algorithms proceed by constructing a succession of probability distributions over the parameter space conditional upon the simulated data lying in an $\epsilon$-ball around the observed data, for decreasing values of the threshold $\epsilon$. While in theory, the distributions (starting from a suitably defined prior) will converge towards the unknown posterior as $\epsilon$ tends to zero, the exact sequence of thresholds can impact upon the computational efficiency and success of a particular application. In particular, we show here that the current preferred method of choosing thresholds as a pre-determined quantile of the distances between simulated and observed data from the previous population can lead to the inferred posterior distribution being very different to the true posterior. Threshold selection thus remains an important challenge. Here we propose an automated and adaptive method that allows us to balance the need to minimise the threshold with computational efficiency. Moreover, our method, which centres around predicting the threshold-acceptance rate curve using the unscented transform, enables us to avoid local minima - a problem that has plagued previous threshold schemes.
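
For reference, the sketch below is a deliberately simplified, rejection-based caricature of the quantile rule discussed above: each population's threshold is set to a fixed quantile of the previous population's accepted distances. The toy model, the `simulate` / `distance` / `prior_sample` interface and all tuning constants are hypothetical, and the perturbation kernels and importance weights of a full sequential ABC algorithm are omitted.

```python
import numpy as np


def abc_quantile_thresholds(y_obs, simulate, prior_sample, distance,
                            n_pop=5, n_keep=200, quantile=0.5, rng=None):
    """ABC rejection over successive populations, with each new threshold
    set to a fixed quantile of the previous population's accepted distances
    -- the rule whose pitfalls are discussed in the paragraph above."""
    rng = np.random.default_rng() if rng is None else rng
    eps = np.inf
    for pop in range(n_pop):
        thetas, dists = [], []
        while len(thetas) < n_keep:
            theta = prior_sample(rng)
            d = distance(simulate(theta, rng), y_obs)
            if d < eps:
                thetas.append(theta)
                dists.append(d)
        eps = np.quantile(dists, quantile)  # next threshold: a pre-determined quantile
    return np.array(thetas), eps


if __name__ == "__main__":
    # toy example: infer the mean of a Gaussian with known unit variance
    rng = np.random.default_rng(1)
    y_obs = rng.normal(2.0, 1.0, size=50)
    samples, eps = abc_quantile_thresholds(
        y_obs,
        simulate=lambda th, r: r.normal(th, 1.0, size=50),
        prior_sample=lambda r: r.normal(0.0, 5.0),
        distance=lambda y, y0: abs(y.mean() - y0.mean()),
        rng=rng,
    )
    print("final threshold:", eps, "posterior mean estimate:", samples.mean())
```
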
Many problems in machine learning and statistics involve nested expectations and thus do not permit conventional Monte Carlo (MC) estimation. For such problems, one must nest estimators, such that terms in an outer estimator themselves involve calculation of a separate, nested, estimation. We investigate the statistical implications of nesting MC estimators, including cases of multiple levels of nesting, and establish the conditions under which they converge. We derive corresponding rates of convergence and provide empirical evidence that these rates are observed in practice. We further establish a number of pitfalls that can arise from naive nesting of MC estimators, provide guidelines about how these can be avoided, and lay out novel methods for reformulating certain classes of nested expectation problems into single expectations, leading to improved convergence rates. We demonstrate the applicability of our work by using our results to develop a new estimator for discrete Bayesian experimental design problems and derive error bounds for a class of variational objectives.
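
As a concrete instance, the following sketch estimates a nested expectation of the form $E_X[f(E_Y[g(X, Y)])]$ by plugging an inner Monte Carlo average into an outer one; the toy target and sample sizes are arbitrary, and the point is only to show where the inner-sample bias enters.

```python
import numpy as np


def nested_mc(f, g, sample_x, sample_y_given_x, n_outer, m_inner, rng=None):
    """Nested Monte Carlo estimate of E_X[ f( E_Y[ g(X, Y) ] ) ].

    Each outer sample x_n gets its own inner estimate of E_Y[ g(x_n, Y) ]
    based on m_inner draws; the non-linearity f makes the overall
    estimator biased for finite m_inner, which is why the convergence
    rates discussed above depend on how n_outer and m_inner grow together.
    """
    rng = np.random.default_rng() if rng is None else rng
    total = 0.0
    for _ in range(n_outer):
        x = sample_x(rng)
        inner = np.mean([g(x, sample_y_given_x(x, rng)) for _ in range(m_inner)])
        total += f(inner)
    return total / n_outer


if __name__ == "__main__":
    # toy problem: E_X[ ( E_Y[ X + Y ] )^2 ] with X, Y ~ N(0, 1); true value is 1
    est = nested_mc(
        f=lambda v: v ** 2,
        g=lambda x, y: x + y,
        sample_x=lambda r: r.normal(),
        sample_y_given_x=lambda x, r: r.normal(),
        n_outer=2000, m_inner=100,
    )
    print("nested MC estimate:", est)
```
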
Ajay Jasra, Kody Law, 2017
This article reviews the application of advanced Monte Carlo techniques in the context of Multilevel Monte Carlo (MLMC). MLMC is a strategy employed to compute expectations which can be biased in some sense, for instance, by using the discretization of an associated probability law. The MLMC approach works with a hierarchy of biased approximations which become progressively more accurate and more expensive. Using a telescoping representation of the most accurate approximation, the method is able to reduce the computational cost for a given level of error versus i.i.d. sampling from this latter approximation. All of these ideas originated for cases where exact sampling from couples in the hierarchy is possible. This article considers the case where such exact sampling is not currently possible. We consider Markov chain Monte Carlo and sequential Monte Carlo methods which have been introduced in the literature and we describe different strategies which facilitate the application of MLMC within these methods.
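
To illustrate the telescoping idea in its simplest form, where exact coupled sampling is available, here is a sketch of MLMC for a geometric Brownian motion discretised by Euler--Maruyama with $2^\ell$ steps at level $\ell$; fine and coarse paths within each level share Brownian increments, which is the coupling referred to above. All model parameters and sample sizes are illustrative, and the adaptive allocation of samples across levels is omitted.

```python
import numpy as np


def mlmc_gbm(f, L=4, n_per_level=20_000, T=1.0, mu=0.05, sigma=0.2, x0=1.0, rng=None):
    """Multilevel Monte Carlo estimate of E[f(X_T)] for the SDE
        dX = mu * X dt + sigma * X dW,
    discretised by Euler--Maruyama with 2**l steps at level l.  Fine and
    coarse paths within a level share the same Brownian increments."""
    rng = np.random.default_rng() if rng is None else rng
    estimate = 0.0
    for l in range(L + 1):
        n_f = 2 ** l
        h_f = T / n_f
        dW = np.sqrt(h_f) * rng.standard_normal((n_per_level, n_f))

        Xf = np.full(n_per_level, x0)
        for k in range(n_f):                       # fine path, step size h_f
            Xf = Xf + mu * Xf * h_f + sigma * Xf * dW[:, k]

        if l == 0:
            estimate += np.mean(f(Xf))             # level 0: plain estimator
        else:
            Xc = np.full(n_per_level, x0)
            h_c = 2 * h_f
            for k in range(n_f // 2):              # coarse path, pairwise-summed increments
                Xc = Xc + mu * Xc * h_c + sigma * Xc * (dW[:, 2 * k] + dW[:, 2 * k + 1])
            estimate += np.mean(f(Xf) - f(Xc))     # telescoping correction term
    return estimate


if __name__ == "__main__":
    # E[X_T] for GBM is x0 * exp(mu * T), roughly 1.0513 with these parameters
    print("MLMC estimate of E[X_T]:", mlmc_gbm(lambda x: x))
```
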
In this article, we analyze Hamiltonian Monte Carlo (HMC) by placing it in the setting of Riemannian geometry using the Jacobi metric, so that each step corresponds to a geodesic on a suitable Riemannian manifold. We then combine the notion of curvature of a Markov chain due to Joulin and Ollivier with the classical sectional curvature from Riemannian geometry to derive error bounds for HMC in important cases, where we have positive curvature. These cases include several classical distributions such as multivariate Gaussians, and also distributions arising in the study of Bayesian image registration. The theoretical development suggests the sectional curvature as a new diagnostic tool for convergence for certain Markov chains.
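
For readers who want the baseline algorithm being analysed, the sketch below is a standard (Euclidean-metric) HMC step with leapfrog integration, applied to a multivariate Gaussian target, one of the positively curved cases mentioned above. The Jacobi-metric reformulation and the curvature-based error bounds are not reproduced; step size and trajectory length are arbitrary.

```python
import numpy as np


def hmc_step(x, log_target, grad_log_target, step=0.1, n_leapfrog=20, rng=None):
    """One standard Hamiltonian Monte Carlo step (leapfrog + accept/reject)."""
    rng = np.random.default_rng() if rng is None else rng
    p = rng.standard_normal(x.shape)                 # momentum refreshment
    x_new, p_new = x.copy(), p.copy()

    # leapfrog integration of the Hamiltonian dynamics
    p_new += 0.5 * step * grad_log_target(x_new)
    for _ in range(n_leapfrog - 1):
        x_new += step * p_new
        p_new += step * grad_log_target(x_new)
    x_new += step * p_new
    p_new += 0.5 * step * grad_log_target(x_new)

    # Metropolis correction using the Hamiltonian (negative log density + kinetic energy)
    h_old = -log_target(x) + 0.5 * np.dot(p, p)
    h_new = -log_target(x_new) + 0.5 * np.dot(p_new, p_new)
    if rng.uniform() < np.exp(min(0.0, h_old - h_new)):
        return x_new
    return x


if __name__ == "__main__":
    # multivariate Gaussian target, one of the positively curved cases mentioned above
    D = 10
    log_target = lambda z: -0.5 * np.dot(z, z)
    grad_log_target = lambda z: -z
    rng = np.random.default_rng(2)
    x = np.zeros(D)
    draws = []
    for _ in range(2000):
        x = hmc_step(x, log_target, grad_log_target, rng=rng)
        draws.append(x)
    draws = np.array(draws[1000:])
    print("per-coordinate sample variance (should be close to 1):", draws.var(axis=0).mean())
```
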