
Driving Markov chain Monte Carlo with a dependent random stream

Posted by: Iain Murray
Publication date: 2012
Research field: Mathematical statistics
Paper language: English





Markov chain Monte Carlo is a widely used technique for generating a dependent sequence of samples from complex distributions. Conventionally, these methods require a source of independent random variates. Most implementations use pseudo-random numbers instead, because generating truly independent variates with a physical system is not straightforward. In this paper we show how to modify some commonly used Markov chains to use a dependent stream of random numbers in place of independent uniform variates. The resulting Markov chains have the correct invariant distribution without requiring detailed knowledge of the stream's dependencies or even its marginal distribution. As a side effect, far fewer random numbers are sometimes required to obtain accurate results.
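The paper's construction is not reproduced here, but the following minimal Python sketch (all names illustrative) shows where the driving randomness enters a random-walk Metropolis sampler: each step consumes variates from a pluggable stream. Conventionally that stream must be (pseudo-)independent uniforms; the paper's contribution is showing how to keep the correct invariant distribution when it is dependent.

    import numpy as np

    def metropolis(log_p, x0, n_steps, stream, step=0.5):
        # log_p  : log of the (unnormalised) target density
        # stream : callable returning the next driving variate in [0, 1)
        x = x0
        samples = []
        for _ in range(n_steps):
            z = stream()                       # drives the proposal
            u = stream()                       # drives the accept/reject decision
            prop = x + step * (2.0 * z - 1.0)  # symmetric uniform-walk proposal
            if np.log(max(u, 1e-300)) < log_p(prop) - log_p(x):
                x = prop
            samples.append(x)
        return np.array(samples)

    # Conventional usage: an (approximately) independent pseudo-random stream.
    rng = np.random.default_rng(0)
    chain = metropolis(lambda x: -0.5 * x**2, 0.0, 10_000, rng.random)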




Read also

An important task in machine learning and statistics is the approximation of a probability measure by an empirical measure supported on a discrete point set. Stein Points are a class of algorithms for this task, which proceed by sequentially minimising a Stein discrepancy between the empirical measure and the target and, hence, require the solution of a non-convex optimisation problem to obtain each new point. This paper removes the need to solve this optimisation problem by, instead, selecting each new point based on a Markov chain sample path. This significantly reduces the computational cost of Stein Points and leads to a suite of algorithms that are straightforward to implement. The new algorithms are illustrated on a set of challenging Bayesian inference problems, and rigorous theoretical guarantees of consistency are established.
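As a concrete, heavily simplified illustration of the selection rule described above, the one-dimensional Python sketch below picks each new point from a (stand-in) sample path so as to greedily minimise an empirical kernel Stein discrepancy. The inverse-multiquadric base kernel, the standard-normal target, and all names are illustrative assumptions, not the paper's exact algorithm.

    import numpy as np

    def stein_kernel(x, y, score):
        # IMQ-based Stein reproducing kernel k_p(x, y) for a 1-D target
        # with score function score(x) = d/dx log p(x).
        r = x - y
        base = 1.0 + r**2
        k = base**-0.5
        dkx = -r * base**-1.5                        # dk/dx
        dky = r * base**-1.5                         # dk/dy
        dkxy = base**-1.5 - 3.0 * r**2 * base**-2.5  # d2k/dxdy
        return dkxy + score(x) * dky + score(y) * dkx + score(x) * score(y) * k

    def greedy_stein_points(path, score, n_points):
        # Greedily pick points from `path` to minimise the empirical KSD.
        chosen = []
        for _ in range(n_points):
            crit = [stein_kernel(x, x, score) / 2.0
                    + sum(stein_kernel(z, x, score) for z in chosen)
                    for x in path]
            chosen.append(path[int(np.argmin(crit))])
        return np.array(chosen)

    score = lambda x: -x                     # score of a standard normal target
    rng = np.random.default_rng(1)
    path = rng.normal(size=500)              # stand-in for an MCMC sample path
    points = greedy_stein_points(path, score, 20)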
We introduce interacting particle Markov chain Monte Carlo (iPMCMC), a PMCMC method based on an interacting pool of standard and conditional sequential Monte Carlo samplers. Like related methods, iPMCMC is a Markov chain Monte Carlo sampler on an extended space. We present empirical results that show significant improvements in mixing rates relative to both non-interacting PMCMC samplers, and a single PMCMC sampler with an equivalent memory and computational budget. An additional advantage of the iPMCMC method is that it is suitable for distributed and multi-core architectures.
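iPMCMC itself is not reproduced here; its building block, a sequential Monte Carlo sweep, is sketched below in Python for a toy linear-Gaussian state-space model (all parameters illustrative). iPMCMC runs an interacting pool of such sweeps, some of them conditioned on retained trajectories.

    import numpy as np

    rng = np.random.default_rng(5)

    # Toy model: x_t = 0.9 x_{t-1} + N(0, 1), y_t = x_t + N(0, 0.5^2).
    T, N = 50, 200
    xs = np.zeros(T)
    for t in range(1, T):
        xs[t] = 0.9 * xs[t - 1] + rng.normal()
    ys = xs + 0.5 * rng.normal(size=T)

    particles = rng.normal(size=N)            # draws from an illustrative x_0 prior
    w = np.full(N, 1.0 / N)
    for t in range(T):
        if t > 0:
            # Multinomial resampling, then propagation through the dynamics.
            idx = rng.choice(N, size=N, p=w)
            particles = 0.9 * particles[idx] + rng.normal(size=N)
        logw = -0.5 * ((ys[t] - particles) / 0.5) ** 2  # observation log-density
        w = np.exp(logw - logw.max())
        w /= w.sum()
    # The weighted particles now approximate p(x_T | y_{1:T}).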
We introduce an ensemble Markov chain Monte Carlo approach to sampling from a probability density with known likelihood. This method upgrades an underlying Markov chain by allowing an ensemble of such chains to interact via a process in which one chain's state is cloned as another's is deleted. This effective teleportation of states can overcome issues of metastability in the underlying chain, as the scheme enjoys rapid mixing once the modes of the target density have been populated. We derive a mean-field limit for the evolution of the ensemble. We analyze the global and local convergence of this mean-field limit, showing asymptotic convergence independent of the spectral gap of the underlying Markov chain, and moreover we interpret the limiting evolution as a gradient flow. We explain how interaction can be applied selectively to a subset of state variables in order to maintain advantage on very high-dimensional problems. Finally we present the application of our methodology to Bayesian hyperparameter estimation for Gaussian process regression.
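The clone/delete scheme of the paper is not reproduced here. As a plainly named substitute that illustrates how interaction between ensemble members can overcome metastability, below is a minimal replica-exchange (parallel tempering) sketch in Python: chains target tempered versions of a bimodal density and exchange states using the exact Metropolis swap ratio.

    import numpy as np

    rng = np.random.default_rng(4)
    # Bimodal toy target: equal mixture of N(-4, 1) and N(4, 1), unnormalised.
    log_p = lambda x: np.logaddexp(-0.5 * (x - 4.0)**2, -0.5 * (x + 4.0)**2)
    betas = np.array([1.0, 0.5, 0.25, 0.1])   # inverse temperatures
    x = np.zeros(len(betas))                  # one walker per temperature
    samples = []
    for step in range(20_000):
        # Within-temperature random-walk Metropolis updates.
        prop = x + rng.normal(size=x.shape)
        accept = np.log(rng.random(x.shape)) < betas * (log_p(prop) - log_p(x))
        x = np.where(accept, prop, x)
        # Swap move between neighbouring temperatures ("teleportation").
        i = rng.integers(len(betas) - 1)
        log_ratio = (betas[i] - betas[i + 1]) * (log_p(x[i + 1]) - log_p(x[i]))
        if np.log(rng.random()) < log_ratio:
            x[i], x[i + 1] = x[i + 1], x[i]
        samples.append(x[0])                  # the beta = 1 chain targets p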
In this article we consider computing expectations with respect to probability laws associated to a certain class of stochastic systems. In order to achieve such a task, one must not only resort to numerical approximation of the expectation, but also to a biased discretization of the associated probability. We are concerned with the situation for which the discretization is required in multiple dimensions, for instance in space and time. In such contexts, it is known that the multi-index Monte Carlo (MIMC) method can improve upon i.i.d. sampling from the most accurate approximation of the probability law. Indeed, by a non-trivial modification of the multilevel Monte Carlo (MLMC) method, it can reduce the work to obtain a given level of error, relative to the aforementioned i.i.d. sampling and even relative to MLMC. In this article we consider the case when such probability laws are too complex to be sampled independently. We develop a modification of the MIMC method which allows one to use standard Markov chain Monte Carlo (MCMC) algorithms to replace independent and coupled sampling, in certain contexts. We prove a variance theorem which shows that, under assumptions, using our MIMCMC method is preferable, in the sense above, to i.i.d. sampling from the most accurate approximation. The method is numerically illustrated on a problem associated to a stochastic partial differential equation (SPDE).
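The multi-index method builds on the multilevel telescoping identity E[f_L] = E[f_0] + sum_{l=1}^{L} E[f_l - f_{l-1}], with each difference estimated from coupled samples. The toy Python sketch below (a single index, i.i.d. sampling, Euler discretisation of geometric Brownian motion) illustrates only that identity; the paper's contributions, replacing independent and coupled sampling with MCMC and working with multiple indices, are not reproduced.

    import numpy as np

    rng = np.random.default_rng(3)

    def euler_pair(level, n, T=1.0, x0=1.0, mu=0.05, sigma=0.2):
        # Coupled Euler estimates of X_T at `level` and `level - 1`,
        # sharing the same Brownian increments across the two resolutions.
        n_fine = 2**level
        dt = T / n_fine
        dW = rng.normal(scale=np.sqrt(dt), size=(n, n_fine))
        xf = np.full(n, x0)
        for i in range(n_fine):                              # fine path
            xf = xf + mu * xf * dt + sigma * xf * dW[:, i]
        if level == 0:
            return xf, np.zeros(n)
        xc = np.full(n, x0)
        dWc = dW.reshape(n, n_fine // 2, 2).sum(axis=2)      # shared noise
        for i in range(n_fine // 2):                         # coarse path
            xc = xc + mu * xc * (2 * dt) + sigma * xc * dWc[:, i]
        return xf, xc

    # Telescoping estimator of E[X_T] over levels 0..4.
    estimate = 0.0
    for level in range(5):
        fine, coarse = euler_pair(level, 20_000)
        estimate += (fine - coarse).mean()
    # estimate is close to E[X_T] = x0 * exp(mu * T), about 1.051 here.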
Vivekananda Roy (2019)
Markov chain Monte Carlo (MCMC) is one of the most useful approaches to scientific computing because of its flexible construction, ease of use and generality. Indeed, MCMC is indispensable for performing Bayesian analysis. Two critical questions that MCMC practitioners need to address are where to start and when to stop the simulation. Although a great deal of research has gone into establishing convergence criteria and stopping rules with a sound theoretical foundation, in practice MCMC users often decide convergence by applying empirical diagnostic tools. This review article discusses the most widely used MCMC convergence diagnostic tools. Some recently proposed stopping rules with firm theoretical footing are also presented. The convergence diagnostics and stopping rules are illustrated using three detailed examples.
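As one concrete example of the diagnostics such reviews cover, below is a minimal Python sketch of the classic (non-split) Gelman-Rubin potential scale reduction factor; split-chain and other refinements are omitted.

    import numpy as np

    def gelman_rubin(chains):
        # Potential scale reduction factor R-hat for an (m, n) array
        # holding m parallel chains of length n.
        m, n = chains.shape
        B = n * chains.mean(axis=1).var(ddof=1)   # between-chain variance
        W = chains.var(axis=1, ddof=1).mean()     # within-chain variance
        var_plus = (n - 1) / n * W + B / n        # pooled variance estimate
        return np.sqrt(var_plus / W)

    rng = np.random.default_rng(2)
    chains = rng.normal(size=(4, 1000))           # four toy "chains"
    print(gelman_rubin(chains))                   # values near 1 suggest convergence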