
Adaptive Incremental Mixture Markov chain Monte Carlo

Added by Florian Maire
Publication date: 2016
Research language: English





We propose Adaptive Incremental Mixture Markov chain Monte Carlo (AIMM), a novel approach to sample from challenging probability distributions defined on a general state-space. While adaptive MCMC methods usually update a parametric proposal kernel with a global rule, AIMM locally adapts a semiparametric kernel. AIMM is based on an independent Metropolis-Hastings proposal distribution which takes the form of a finite mixture of Gaussian distributions. Central to this approach is the idea that the proposal distribution adapts to the target by locally adding a mixture component when the discrepancy between the proposal mixture and the target is deemed to be too large. As a result, the number of components in the mixture proposal is not fixed in advance. Theoretically, we prove that there exists a process that can be made arbitrarily close to AIMM and that converges to the correct target distribution. We also illustrate that it performs well in practice in a variety of challenging situations, including high-dimensional and multimodal target distributions.
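
As a rough illustration of the mechanism described above, the sketch below runs an independence Metropolis-Hastings sampler whose Gaussian-mixture proposal gains a new component wherever the target-to-proposal ratio exceeds a threshold. This is a minimal sketch of the idea, not the paper's algorithm: the threshold `tau`, the shared spherical covariance, and the equal component weights are all placeholder assumptions.

```python
# Minimal AIMM-style sketch (illustrative; not the paper's exact algorithm).
# Assumptions: threshold tau, shared spherical covariance, equal weights.
import numpy as np
from scipy.stats import multivariate_normal as mvn

def aimm_sketch(log_pi, x0, n_iter=5000, tau=10.0, sigma=1.0, rng=None):
    rng = rng or np.random.default_rng()
    d = len(x0)
    means = [np.asarray(x0, dtype=float)]   # mixture component means
    cov = sigma ** 2 * np.eye(d)            # shared covariance (heuristic)

    def log_q(x):                           # log-density of the mixture proposal
        comps = [mvn.logpdf(x, mean=m, cov=cov) for m in means]
        return np.logaddexp.reduce(comps) - np.log(len(means))

    def sample_q():                         # draw from a uniformly chosen component
        m = means[rng.integers(len(means))]
        return rng.multivariate_normal(m, cov)

    x, chain = np.asarray(x0, dtype=float), []
    for _ in range(n_iter):
        y = sample_q()
        # Independence Metropolis-Hastings acceptance ratio.
        log_a = (log_pi(y) - log_q(y)) - (log_pi(x) - log_q(x))
        if np.log(rng.uniform()) < log_a:
            x = y
        # Local adaptation: add a component where the proposal underfits
        # the target, i.e. where pi(y)/q(y) exceeds the threshold tau.
        if log_pi(y) - log_q(y) > np.log(tau):
            means.append(np.asarray(y))
        chain.append(x.copy())
    return np.array(chain), means
```

For instance, `aimm_sketch(lambda x: -0.5 * np.sum(x**2), np.zeros(2))` targets a standard bivariate Gaussian; for multimodal targets, new components accumulate around each mode as it is discovered.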



Related research

Guanyu Hu, HaiYing Wang (2020)
Markov chain Monte Carlo (MCMC) requires evaluating the full-data likelihood at different parameter values iteratively and is often computationally infeasible for large data sets. In this paper, we propose to approximate the log-likelihood with subsamples taken according to nonuniform subsampling probabilities, and derive the most likely optimal (MLO) subsampling probabilities for better approximation. Compared with existing subsampled MCMC algorithms with equal subsampling probabilities, our MLO subsampled MCMC has higher estimation efficiency at the same subsampling ratio. We also derive a formula, using the asymptotic distribution of the subsampled log-likelihood, to determine the required subsample size in each MCMC iteration for a given level of precision. This formula is used to develop an adaptive version of the MLO subsampled MCMC algorithm. Numerical experiments demonstrate that the proposed method outperforms uniform subsampled MCMC.
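
As a rough sketch of the subsampling step (the paper's MLO probabilities are derived analytically and are not reproduced here; `p` below is any nonuniform probability vector):

```python
# Sketch: unbiased subsample approximation of the full-data log-likelihood.
# The subsampling probabilities p are placeholders, not the MLO derivation.
import numpy as np

def subsampled_loglik(loglik_i, data, theta, p, m, rng=None):
    """Estimate sum_i loglik_i(data[i], theta) from a with-replacement
    subsample of size m drawn with nonuniform probabilities p."""
    rng = rng or np.random.default_rng()
    idx = rng.choice(len(data), size=m, replace=True, p=p)
    # Hansen-Hurwitz estimator: average of sampled terms reweighted by 1/p_i
    # is unbiased for the full-data sum.
    return np.mean([loglik_i(data[i], theta) / p[i] for i in idx])
```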
In this paper, we study the asymptotic variance of sample path averages for inhomogeneous Markov chains that evolve alternatingly according to two different $\pi$-reversible Markov transition kernels $P$ and $Q$. More specifically, our main result allows us to compare directly the asymptotic variances of two inhomogeneous Markov chains associated with different kernels $P_i$ and $Q_i$, $i\in\{0,1\}$, as soon as the kernels of each pair $(P_0,P_1)$ and $(Q_0,Q_1)$ can be ordered in the sense of lag-one autocovariance. As an important application, we use this result for comparing different data-augmentation-type Metropolis-Hastings algorithms. In particular, we compare some pseudo-marginal algorithms and propose a novel exact algorithm, referred to as the random refreshment algorithm, which is more efficient, in terms of asymptotic variance, than the Grouped Independence Metropolis-Hastings algorithm and has a computational complexity that does not exceed that of the Monte Carlo Within Metropolis algorithm.
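
For context, the two quantities being compared can be written in conventional notation as follows; this is standard background, not an excerpt from the paper:

```latex
% Asymptotic variance of the ergodic average of f under a pi-invariant kernel P:
\[
  v(f,P) \;=\; \lim_{n\to\infty} n\,\operatorname{Var}\!\Big(\frac{1}{n}\sum_{k=1}^{n} f(X_k)\Big).
\]
% Lag-one autocovariance ordering: P_0 is at least as efficient as P_1 when,
% for every f with finite variance under pi,
\[
  \operatorname{Cov}_{\pi}\!\big(f(X_0),f(X_1)\big)\ \text{under}\ P_0
  \;\le\;
  \operatorname{Cov}_{\pi}\!\big(f(X_0),f(X_1)\big)\ \text{under}\ P_1 .
\]
```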
A novel class of non-reversible Markov chain Monte Carlo schemes relying on continuous-time piecewise-deterministic Markov Processes has recently emerged. In these algorithms, the state of the Markov process evolves according to a deterministic dynamics which is modified using a Markov transition kernel at random event times. These methods enjoy remarkable features including the ability to update only a subset of the state components while other components implicitly keep evolving and the ability to use an unbiased estimate of the gradient of the log-target while preserving the target as invariant distribution. However, they also suffer from important limitations. The deterministic dynamics used so far do not exploit the structure of the target. Moreover, exact simulation of the event times is feasible for an important yet restricted class of problems and, even when it is, it is application specific. This limits the applicability of these techniques and prevents the development of a generic software implementation of them. We introduce novel MCMC methods addressing these shortcomings. In particular, we introduce novel continuous-time algorithms relying on exact Hamiltonian flows and novel non-reversible discrete-time algorithms which can exploit complex dynamics such as approximate Hamiltonian dynamics arising from symplectic integrators while preserving the attractive features of continuous-time algorithms. We demonstrate the performance of these schemes on a variety of applications.
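
For reference, the approximate Hamiltonian dynamics mentioned above are typically generated by a leapfrog (Störmer-Verlet) symplectic integrator; a minimal sketch, with the step size `eps` and step count as arbitrary placeholders:

```python
# Leapfrog (Stormer-Verlet) integrator for H(x, v) = U(x) + |v|^2 / 2.
# This is the standard symplectic scheme referred to above, not the
# paper's specific algorithm; eps and n_steps are placeholders.
import numpy as np

def leapfrog(grad_U, x, v, eps=0.1, n_steps=20):
    x, v = np.array(x, dtype=float), np.array(v, dtype=float)
    v -= 0.5 * eps * grad_U(x)      # initial half step in momentum
    for _ in range(n_steps - 1):
        x += eps * v                # full step in position
        v -= eps * grad_U(x)        # full step in momentum
    x += eps * v                    # last position step
    v -= 0.5 * eps * grad_U(x)      # final half step in momentum
    return x, v
```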
Markov chain Monte Carlo (MCMC) produces a correlated sample for estimating expectations with respect to a target distribution. A fundamental question is when sampling should stop so that we have good estimates of the desired quantities. The key to answering this question lies in assessing the Monte Carlo error through a multivariate Markov chain central limit theorem (CLT). The multivariate nature of this Monte Carlo error has largely been ignored in the MCMC literature. We present a multivariate framework for terminating simulation in MCMC. We define a multivariate effective sample size, estimating which requires strongly consistent estimators of the covariance matrix in the Markov chain CLT, a property we show for the multivariate batch means estimator. We then provide a lower bound on the minimum number of effective samples required for a desired level of precision. This lower bound depends on the problem only through the dimension of the expectation being estimated, and not on the underlying stochastic process. This result is obtained by drawing a connection between terminating simulation via effective sample size and terminating simulation using a relative standard deviation fixed-volume sequential stopping rule, which we demonstrate is an asymptotically valid procedure. The finite-sample properties of the proposed method are demonstrated in a variety of examples.
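
A hedged sketch of one way to compute a multivariate effective sample size from the batch means estimator, using the relation mESS = n (det Lambda / det Sigma)^(1/p); the square-root batch size below is a common heuristic assumed here, not a prescription from the paper:

```python
# Sketch: multivariate effective sample size (mESS) from batch means.
# Lambda is the sample covariance; Sigma is the batch means estimate of the
# asymptotic covariance in the Markov chain CLT. Batch size b = sqrt(n) is
# a heuristic assumption.
import numpy as np

def multivariate_ess(chain):
    """chain: (n, p) array of MCMC samples."""
    n, p = chain.shape
    b = int(np.sqrt(n))                          # batch size heuristic
    a = n // b                                   # number of batches
    batch_means = chain[: a * b].reshape(a, b, p).mean(axis=1)
    sigma = b * np.cov(batch_means, rowvar=False, ddof=1)  # CLT covariance
    lam = np.cov(chain, rowvar=False, ddof=1)    # sample covariance Lambda
    _, logdet_lam = np.linalg.slogdet(lam)       # log-determinants for stability
    _, logdet_sig = np.linalg.slogdet(sigma)
    return n * np.exp((logdet_lam - logdet_sig) / p)
```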
This paper proposes a family of weighted batch means variance estimators, which are computationally efficient and can be conveniently applied in practice. The focus is on Markov chain Monte Carlo simulations and estimation of the asymptotic covariance matrix in the Markov chain central limit theorem, where conditions ensuring strong consistency are provided. Finite sample performance is evaluated through auto-regressive, Bayesian spatial-temporal, and Bayesian logistic regression examples, where the new estimators show significant computational gains with a minor sacrifice in variance compared with existing methods.
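
As a univariate point of reference (the paper's weighted batch means estimators are not reproduced here), a closely related construction is the lag-window variance estimator with a Bartlett window, which downweights higher-lag sample autocovariances:

```python
# Sketch: Bartlett lag-window estimator of the univariate asymptotic
# variance, a close relative of weighted batch means estimators; the
# truncation point b = sqrt(n) is a placeholder heuristic.
import numpy as np

def bartlett_variance(x, b=None):
    x = np.asarray(x, dtype=float)
    n = len(x)
    b = b or int(np.sqrt(n))
    xc = x - x.mean()
    # Sample autocovariances gamma(0), ..., gamma(b-1).
    gamma = np.array([np.dot(xc[: n - k], xc[k:]) / n for k in range(b)])
    w = 1.0 - np.arange(b) / b          # Bartlett (triangular) weights
    return gamma[0] + 2.0 * np.sum(w[1:] * gamma[1:])
```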
