
On the asymptotic variance of reversible Markov chain without cycles

Added by Chi-Hao Wu
Publication date: 2017
Language: English


Markov chain Monte Carlo (MCMC) is a popular approach to sampling from high-dimensional distributions, and the asymptotic variance is a commonly used criterion for evaluating performance. While most popular MCMC algorithms are reversible, there is a growing literature on the development and analyses of nonreversible MCMC. Chen and Hwang (2013) showed that a reversible MCMC can be improved by adding an antisymmetric perturbation. They also conjectured that it cannot be improved if there is no cycle in the corresponding graph. In this paper, we present a rigorous proof of this conjecture. The proof is based on the fact that a transition matrix with an acyclic structure yields the minimum commute times between vertices.
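As a rough illustration of the perturbation result the abstract refers to (this sketch is ours, not code from the paper), the snippet below builds a reversible random walk on a 3-state cycle, adds an antisymmetric perturbation Q with zero row sums so that P + Q stays stochastic with the same stationary distribution, and compares asymptotic variances through the fundamental matrix Z = (I - P + 1 pi^T)^{-1}. The test function f and the strength eps are arbitrary choices for the illustration.

    import numpy as np

    def asymptotic_variance(P, pi, f):
        # Asymptotic variance of the MCMC average of f, via the fundamental matrix.
        n = len(pi)
        Pi = np.outer(np.ones(n), pi)          # every row equals the stationary distribution
        Z = np.linalg.inv(np.eye(n) - P + Pi)  # fundamental matrix
        Dpi = np.diag(pi)
        mean = pi @ f
        return 2 * f @ Dpi @ Z @ f - f @ Dpi @ f - mean**2

    # Reversible random walk on a 3-cycle; the stationary distribution is uniform.
    P = np.array([[0.0, 0.5, 0.5],
                  [0.5, 0.0, 0.5],
                  [0.5, 0.5, 0.0]])
    pi = np.array([1/3, 1/3, 1/3])

    # Antisymmetric perturbation pushing probability around the cycle 0 -> 1 -> 2 -> 0.
    # Zero row sums keep P + Q stochastic; with uniform pi, the zero column sums
    # keep pi stationary for P + Q.
    eps = 0.3   # arbitrary strength, small enough that P + Q stays non-negative
    Q = eps * np.array([[ 0.0,  1.0, -1.0],
                        [-1.0,  0.0,  1.0],
                        [ 1.0, -1.0,  0.0]])

    f = np.array([1.0, 0.0, -1.0])             # arbitrary test function
    print("reversible:", asymptotic_variance(P, pi, f))
    print("perturbed :", asymptotic_variance(P + Q, pi, f))

On this cycle the perturbed chain has the smaller asymptotic variance; on a graph without cycles, the result proved in the paper says that no such improvement is possible.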



Related research

It is commonly admitted that non-reversible Markov chain Monte Carlo (MCMC) algorithms usually yield more accurate MCMC estimators than their reversible counterparts. In this note, we show that in addition to their variance-reduction effect, some non-reversible MCMC algorithms also have the undesirable property of slowing down the convergence of the Markov chain. This point, which has been overlooked by the literature, has obvious practical implications. We illustrate this phenomenon for different non-reversible MCMC algorithms.
We present a convex-concave reformulation of the reversible Markov chain estimation problem and outline an efficient numerical scheme for the solution of the resulting problem based on a primal-dual interior point method for monotone variational inequalities. Extensions to situations in which information about the stationary vector is available can also be solved via the convex-concave reformulation. The method can be generalized and applied to the discrete transition matrix reweighting analysis method to perform inference from independent chains with specified couplings between the stationary probabilities. The proposed approach offers a significant speed-up compared to a fixed-point iteration for a number of relevant applications.
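The abstract contrasts the proposed scheme with a fixed-point iteration. Assuming this refers to the standard self-consistent update for the reversible maximum-likelihood transition matrix given a count matrix (as used in the Markov state model literature), a minimal sketch of that baseline, with our own variable names and made-up example counts, could look like this:

    import numpy as np

    def reversible_mle(C, n_iter=2000):
        # Fixed-point iteration for the reversible maximum-likelihood transition
        # matrix given a count matrix C (the baseline, not the interior-point method).
        C = np.asarray(C, dtype=float)
        c_i = C.sum(axis=1)              # observed row counts, held fixed
        X = (C + C.T) / 2.0              # initial guess for x_ij = pi_i * P_ij (kept symmetric)
        for _ in range(n_iter):
            x_i = X.sum(axis=1)
            denom = c_i[:, None] / x_i[:, None] + c_i[None, :] / x_i[None, :]
            X = (C + C.T) / denom        # self-consistent update
        X /= X.sum()
        pi = X.sum(axis=1)               # stationary distribution
        return X / pi[:, None], pi       # transition matrix (rows sum to one) and pi

    # Made-up transition counts on three states, for illustration only.
    C = np.array([[10.0, 3.0,  1.0],
                  [ 2.0, 8.0,  4.0],
                  [ 1.0, 5.0, 12.0]])
    P_rev, pi = reversible_mle(C)
    print(P_rev)
    print(pi @ P_rev - pi)               # ~0: pi is stationary for P_rev

Because X stays symmetric throughout, the returned matrix satisfies detailed balance with respect to pi by construction.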
Michael C.H. Choi, 2019
Motivated by the notion of resistance distance on graphs, we define a new resistance distance between two states of a given finite ergodic Markov chain based on its fundamental matrix. We prove a few equivalent formulations and discuss its relation with other parameters of the Markov chain, such as its group inverse, stationary distribution, eigenvalues or hitting times. In addition, building upon existing sum rules for the hitting times of Markov chains, we give sum rules for this new resistance distance of Markov chains that resemble the sum rules for the resistance distance on graphs. This yields Markov chain counterparts of various classical formulae such as Foster's first formula or the Kirchhoff index formulae.
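The resistance distance above is defined through the fundamental matrix; without reproducing that exact definition, the sketch below (our own toy example) computes the fundamental matrix and the closely related commute time, which for a reversible chain recovers the graph effective resistance up to a factor of twice the number of edges.

    import numpy as np

    def fundamental_matrix(P, pi):
        # Z = (I - P + 1 pi^T)^{-1} for an irreducible chain with stationary pi.
        n = len(pi)
        return np.linalg.inv(np.eye(n) - P + np.outer(np.ones(n), pi))

    def commute_time(P, pi, x, y):
        # E_x[T_y] + E_y[T_x], written through the fundamental matrix.
        Z = fundamental_matrix(P, pi)
        hit_xy = (Z[y, y] - Z[x, y]) / pi[y]   # mean hitting time of y from x
        hit_yx = (Z[x, x] - Z[y, x]) / pi[x]   # mean hitting time of x from y
        return hit_xy + hit_yx

    # Simple random walk on the path 0 - 1 - 2 (reversible, pi proportional to degree).
    P = np.array([[0.0, 1.0, 0.0],
                  [0.5, 0.0, 0.5],
                  [0.0, 1.0, 0.0]])
    pi = np.array([0.25, 0.5, 0.25])
    print(commute_time(P, pi, 0, 2))   # 8 = 2 * (number of edges) * effective resistance R(0,2)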
In this paper, we develop an in-depth analysis of non-reversible Markov chains on a denumerable state space from a similarity-orbit perspective. In particular, we study the class of Markov chains whose transition kernel is in the similarity orbit of a normal transition kernel, such as that of birth-death chains or reversible Markov chains. We start by identifying a set of sufficient conditions for a Markov chain to belong to the similarity orbit of a birth-death one. As by-products, we obtain a spectral representation in terms of non-self-adjoint resolutions of identity in the sense of Dunford [21] and offer a detailed analysis of the convergence rate, separation cutoff and $\mathrm{L}^2$-cutoff of this class of non-reversible Markov chains. We also look into the problem of estimating integral functionals from discrete observations for this class. In the last part of this paper, we investigate a particular similarity orbit of reversible Markov kernels, which we call the pure birth orbit, and analyze various possibly non-reversible variants of classical birth-death processes in this orbit.
A. Bovier, 2000
We study a large class of reversible Markov chains with discrete state space and transition matrix $P_N$. We define the notion of a set of \textit{metastable points} as a subset of the state space $G_N$ such that (i) this set is reached from any point $x \in G_N$ without return to $x$ with probability at least $b_N$, while (ii) for any two points $x, y$ in the metastable set, the probability $T^{-1}_{x,y}$ to reach $y$ from $x$ without return to $x$ is smaller than $a_N^{-1} \ll b_N$. Under some additional non-degeneracy assumption, we show that in such a situation: (i) to each metastable point corresponds a metastable state, whose mean exit time can be computed precisely; (ii) to each metastable point corresponds one simple eigenvalue of $1 - P_N$ which is essentially equal to the inverse mean exit time from this state. The corresponding eigenfunctions are close to the indicator function of the support of the metastable state. Moreover, these results imply very sharp uniform control of the deviation of the probability distribution of metastable exit times from the exponential distribution.
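As a toy numerical check of the stated correspondence between small eigenvalues of $1 - P_N$ and inverse mean exit times (our own three-state construction, far simpler than the setting of the paper):

    import numpy as np

    # Nearest-neighbour Metropolis chain on an energy landscape with a deep well
    # at state 0, a barrier at state 1 and a shallower well at state 2.
    V = np.array([0.0, 4.0, 2.0])
    beta = 2.0
    n = len(V)

    P = np.zeros((n, n))
    for i in range(n):
        for j in (i - 1, i + 1):
            if 0 <= j < n:
                P[i, j] = 0.5 * min(1.0, np.exp(-beta * (V[j] - V[i])))
        P[i, i] = 1.0 - P[i].sum()

    # Smallest non-zero eigenvalue of I - P ...
    lam = np.sort(np.linalg.eigvals(np.eye(n) - P).real)[1]

    # ... against the inverse mean exit time from the shallow well (state 2)
    # to the deep well (state 0), from the usual hitting-time equations.
    A = np.delete(np.delete(np.eye(n) - P, 0, axis=0), 0, axis=1)
    mean_exit = np.linalg.solve(A, np.ones(n - 1))[-1]

    print(lam, 1.0 / mean_exit)   # close when the barrier is high, as the abstract asserts

With beta = 2 the two numbers agree to within a few percent, and the agreement improves as the barrier grows.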