
Delayed rejection schemes for efficient Markov-Chain Monte-Carlo sampling of multimodal distributions

Published by Miquel Trias
Publication date: 2009
Paper language: English





A number of problems in a variety of fields are characterised by target distributions with a multimodal structure in which the presence of several isolated local maxima dramatically reduces the efficiency of Markov Chain Monte Carlo sampling algorithms. Several solutions, such as simulated tempering or the use of parallel chains, have been proposed to facilitate the exploration of the relevant parameter space. They provide effective strategies in cases where the dimension of the parameter space is small and/or the computational costs are not a limiting factor. These approaches fail, however, in the case of high-dimensional spaces where the multimodal structure is induced by degeneracies between regions of the parameter space. In this paper we present a fully Markovian way to efficiently sample this kind of distribution, based on the general Delayed Rejection scheme with an arbitrary number of steps, and provide details for an efficient numerical implementation of the algorithm.
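As a concrete illustration of the general idea (a minimal sketch, not the specific multi-stage scheme developed in the paper), the following Python snippet implements a two-stage delayed-rejection Metropolis-Hastings step for a hypothetical bimodal 1-D target: a narrow local move is tried first and, if it is rejected, a much bolder move is attempted with the standard second-stage acceptance probability that preserves detailed balance. The target, proposal scales and function names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_target(x):
    # Hypothetical multimodal target: two well-separated Gaussian modes.
    return np.logaddexp(-0.5 * (x + 4.0) ** 2, -0.5 * (x - 4.0) ** 2)

def alpha1(x, y):
    # First-stage Metropolis ratio (symmetric proposal, so only the target enters).
    return min(1.0, np.exp(log_target(y) - log_target(x)))

def log_norm_pdf(x, mu, sigma):
    return -0.5 * ((x - mu) / sigma) ** 2 - np.log(sigma * np.sqrt(2.0 * np.pi))

def dr_step(x, sigma1=0.5, sigma2=6.0):
    """One two-stage delayed-rejection step: a local move first, then a
    bolder mode-hopping move if the local one is rejected."""
    y1 = x + sigma1 * rng.normal()
    a1 = alpha1(x, y1)
    if rng.random() < a1:
        return y1
    y2 = x + sigma2 * rng.normal()           # second-stage (wide) proposal
    a1_rev = alpha1(y2, y1)                  # first-stage ratio along the reversed path
    if a1_rev >= 1.0:
        return x                             # second-stage acceptance probability is zero
    log_num = log_target(y2) + log_norm_pdf(y1, y2, sigma1) + np.log1p(-a1_rev)
    log_den = log_target(x) + log_norm_pdf(y1, x, sigma1) + np.log1p(-a1)
    return y2 if np.log(rng.random()) < log_num - log_den else x

chain = np.empty(20000)
chain[0] = 0.0
for t in range(1, chain.size):
    chain[t] = dr_step(chain[t - 1])
```

In the general scheme the second-stage proposal may depend on the rejected candidate and further stages may be chained; here it is kept independent of y1 so that only the first-stage proposal density and the (1 - alpha1) factors survive in the acceptance ratio.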


Read also

A novel strategy that combines a given collection of reversible Markov kernels is proposed. It consists in a Markov chain that moves, at each iteration, according to one of the available Markov kernels selected via a state-dependent probability distribution which is thus dubbed locally informed. In contrast to random-scan approaches that assume a constant selection probability distribution, the state-dependent distribution is typically specified so as to privilege moving according to a kernel which is relevant for the local topology of the target distribution. The second contribution is to characterize situations where a locally informed strategy should be preferred to its random-scan counterpart. We find that for a specific class of target distribution, referred to as sparse and filamentary, that exhibits a strong correlation between some variables and/or which concentrates its probability mass on some low dimensional linear subspaces or on thinned curved manifolds, a locally informed strategy converges substantially faster and yields smaller asymptotic variances than an equivalent random-scan algorithm. The research is at this stage essentially speculative: this paper combines a series of observations on this topic, both theoretical and empirical, that could serve as a groundwork for further investigations.
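For intuition only, here is a minimal locally informed sketch (a special case, not the paper's general construction) for a hypothetical filamentary 2-D target: two symmetric random-walk kernels, one moving along the ridge and one across it, are selected with state-dependent probabilities, and the selection probability is included in the acceptance ratio so that the target remains invariant. The target and the weighting rule are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def log_target(x):
    # Hypothetical filamentary target: mass concentrated along the line x1 = x2.
    return -0.5 * (x[0] - x[1]) ** 2 / 0.01 - 0.5 * (x[0] + x[1]) ** 2

# Two symmetric random-walk kernels: along the ridge and across it.
directions = [np.array([1.0, 1.0]) / np.sqrt(2.0), np.array([1.0, -1.0]) / np.sqrt(2.0)]
scales = [1.0, 0.1]

def selection_probs(x):
    # State-dependent weights (hypothetical rule): favour the along-ridge
    # kernel the further the state sits from the origin.
    w = np.array([1.0 + x @ x, 1.0])
    return w / w.sum()

def locally_informed_step(x):
    p_x = selection_probs(x)
    k = rng.choice(2, p=p_x)                          # pick a kernel, state-dependently
    y = x + scales[k] * rng.normal() * directions[k]  # symmetric move along direction k
    p_y = selection_probs(y)
    # Selection probabilities enter the ratio; proposal densities cancel by symmetry.
    log_a = log_target(y) + np.log(p_y[k]) - log_target(x) - np.log(p_x[k])
    return y if np.log(rng.random()) < log_a else x

x = np.zeros(2)
for _ in range(10000):
    x = locally_informed_step(x)
```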
A novel class of non-reversible Markov chain Monte Carlo schemes relying on continuous-time piecewise-deterministic Markov Processes has recently emerged. In these algorithms, the state of the Markov process evolves according to a deterministic dynamics which is modified using a Markov transition kernel at random event times. These methods enjoy remarkable features including the ability to update only a subset of the state components while other components implicitly keep evolving and the ability to use an unbiased estimate of the gradient of the log-target while preserving the target as invariant distribution. However, they also suffer from important limitations. The deterministic dynamics used so far do not exploit the structure of the target. Moreover, exact simulation of the event times is feasible for an important yet restricted class of problems and, even when it is, it is application specific. This limits the applicability of these techniques and prevents the development of a generic software implementation of them. We introduce novel MCMC methods addressing these shortcomings. In particular, we introduce novel continuous-time algorithms relying on exact Hamiltonian flows and novel non-reversible discrete-time algorithms which can exploit complex dynamics such as approximate Hamiltonian dynamics arising from symplectic integrators while preserving the attractive features of continuous-time algorithms. We demonstrate the performance of these schemes on a variety of applications.
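To make the event-time issue concrete, here is a sketch (under assumptions not taken from the paper) of a 1-D Zig-Zag process targeting a standard Gaussian, one of the restricted cases where the random event times can be simulated exactly by inverting the integrated switching rate max(0, v*x):

```python
import numpy as np

rng = np.random.default_rng(2)

def zigzag_1d_gaussian(x0=0.0, v0=1.0, t_max=1000.0):
    """1-D Zig-Zag skeleton for a standard Gaussian target: linear flow
    between events, velocity flips at events with rate max(0, v * x)."""
    x, v, t = x0, v0, 0.0
    skeleton = [(t, x, v)]
    while t < t_max:
        a = v * x                                        # rate along the path is max(0, a + s)
        e = rng.exponential()
        tau = -a + np.sqrt(max(a, 0.0) ** 2 + 2.0 * e)   # exact inverse of the integrated rate
        x, t, v = x + v * tau, t + tau, -v               # move, advance time, flip velocity
        skeleton.append((t, x, v))
    return skeleton
```

The time-weighted occupation of x along this trajectory follows the standard Gaussian; for general targets the integrated rate has no closed-form inverse, which is precisely the limitation discussed above.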
Delayed-acceptance Markov chain Monte Carlo (DA-MCMC) samples from a probability distribution via a two-stages version of the Metropolis-Hastings algorithm, by combining the target distribution with a surrogate (i.e. an approximate and computationally cheaper version) of said distribution. DA-MCMC accelerates MCMC sampling in complex applications, while still targeting the exact distribution. We design a computationally faster, albeit approximate, DA-MCMC algorithm. We consider parameter inference in a Bayesian setting where a surrogate likelihood function is introduced in the delayed-acceptance scheme. When the evaluation of the likelihood function is computationally intensive, our scheme produces a 2-4 times speed-up, compared to standard DA-MCMC. However, the acceleration is highly problem dependent. Inference results for the standard delayed-acceptance algorithm and our approximated version are similar, indicating that our algorithm can return reliable Bayesian inference. As a computationally intensive case study, we introduce a novel stochastic differential equation model for protein folding data.
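A minimal sketch of the standard two-stage delayed-acceptance step (the classical form, not the approximated variant proposed in the paper), assuming a symmetric random-walk proposal and hypothetical surrogate and exact log-posteriors:

```python
import numpy as np

rng = np.random.default_rng(3)

def log_surrogate(theta):
    # Cheap approximate log-posterior (hypothetical Gaussian approximation).
    return -0.5 * theta ** 2

def log_exact(theta):
    # Expensive exact log-posterior (stand-in for a costly likelihood evaluation).
    return -0.5 * theta ** 2 - 0.1 * np.cos(3.0 * theta)

def da_step(theta, sigma=1.0):
    prop = theta + sigma * rng.normal()
    # Stage 1: screen the proposal with the cheap surrogate only.
    if np.log(rng.random()) >= log_surrogate(prop) - log_surrogate(theta):
        return theta            # rejected without ever touching the exact posterior
    # Stage 2: correct with the exact posterior so the exact target is preserved.
    log_a2 = (log_exact(prop) - log_exact(theta)
              + log_surrogate(theta) - log_surrogate(prop))
    return prop if np.log(rng.random()) < log_a2 else theta
```

The saving comes from stage 1 filtering out most poor proposals at surrogate cost, while stage 2 keeps the chain exact with respect to the true target.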
We propose Adaptive Incremental Mixture Markov chain Monte Carlo (AIMM), a novel approach to sample from challenging probability distributions defined on a general state-space. While adaptive MCMC methods usually update a parametric proposal kernel with a global rule, AIMM locally adapts a semiparametric kernel. AIMM is based on an independent Metropolis-Hastings proposal distribution which takes the form of a finite mixture of Gaussian distributions. Central to this approach is the idea that the proposal distribution adapts to the target by locally adding a mixture component when the discrepancy between the proposal mixture and the target is deemed to be too large. As a result, the number of components in the mixture proposal is not fixed in advance. Theoretically, we prove that there exists a process that can be made arbitrarily close to AIMM and that converges to the correct target distribution. We also illustrate that it performs well in practice in a variety of challenging situations, including high-dimensional and multimodal target distributions.
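A much-simplified illustration of the incremental-mixture idea (not the full AIMM algorithm, which adapts component weights and covariances and imposes conditions ensuring convergence): an independence Metropolis-Hastings step whose Gaussian-mixture proposal gains a new component wherever it badly underweights the target. The target, threshold and equal-weight mixture are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(4)

def log_target(x):
    # Hypothetical bimodal target: equal mixture of N(-3, 1) and N(3, 1).
    return np.logaddexp(norm.logpdf(x, -3.0, 1.0), norm.logpdf(x, 3.0, 1.0)) - np.log(2.0)

means, widths = [0.0], [5.0]          # current mixture proposal (equal weights)

def log_proposal(x):
    comps = [norm.logpdf(x, m, s) for m, s in zip(means, widths)]
    return np.logaddexp.reduce(comps) - np.log(len(comps))

def aimm_like_step(x, threshold=1.0):
    # Independence Metropolis-Hastings with the current mixture proposal.
    k = rng.integers(len(means))
    y = rng.normal(means[k], widths[k])
    log_a = (log_target(y) - log_proposal(y)) - (log_target(x) - log_proposal(x))
    if np.log(rng.random()) < log_a:
        x = y
    # Enrich the mixture where the proposal underweights the target badly.
    if log_target(x) - log_proposal(x) > threshold:
        means.append(float(x))
        widths.append(1.0)
    return x
```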
We present a Markov-chain Monte-Carlo (MCMC) technique to study the source parameters of gravitational-wave signals from the inspirals of stellar-mass compact binaries detected with ground-based gravitational-wave detectors such as LIGO and Virgo, for the case where spin is present in the more massive compact object in the binary. We discuss aspects of the MCMC algorithm that allow us to sample the parameter space in an efficient way. We show sample runs that illustrate the possibilities of our MCMC code and the difficulties that we encounter.