
The Coupling/Minorization/Drift Approach to Markov Chain Convergence Rates

Added by Tong Liu
Publication date: 2020
Language: English





This review paper provides an introduction to Markov chains and their convergence rates, an important and interesting mathematical topic that also has significant applications to the widely used Markov chain Monte Carlo (MCMC) algorithms. We first discuss eigenvalue analysis for Markov chains on finite state spaces. Then, using the coupling construction, we prove two quantitative bounds based on minorization and drift conditions, and provide descriptive, intuitive examples to show how these theorems can be applied in practice. This paper is meant to provide a general overview of the subject and to spark interest in new areas of Markov chain research.
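The coupling construction mentioned above can be illustrated numerically: run two copies of the same chain from different starting states, let them move together once they meet, and bound the total variation distance by the tail of the coupling time, ||P^n(x, .) - P^n(y, .)||_TV <= Pr(T > n). The following is a minimal sketch on a hypothetical 3-state transition matrix of our own choosing, not an example from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 3-state transition matrix (illustrative only).
P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.3, 0.3, 0.4]])

def coupled_step(x, y, rng):
    """One step of an independent coupling: the two copies move
    independently until they meet, then move together forever."""
    if x == y:
        z = rng.choice(3, p=P[x])
        return z, z
    return rng.choice(3, p=P[x]), rng.choice(3, p=P[y])

def coupling_time(x0, y0, rng, max_steps=10_000):
    """First time the two coupled copies occupy the same state."""
    x, y = x0, y0
    for t in range(max_steps):
        if x == y:
            return t
        x, y = coupled_step(x, y, rng)
    return max_steps

# Coupling inequality: ||P^n(x,.) - P^n(y,.)||_TV <= Pr(T > n),
# estimated here by Monte Carlo over many coupled runs.
times = np.array([coupling_time(0, 2, rng) for _ in range(5000)])
for n in (1, 2, 5):
    print(f"n = {n}: empirical bound Pr(T > {n}) = {np.mean(times > n):.3f}")
```

The empirical tail probabilities decay geometrically in n, which is exactly the kind of quantitative bound the minorization-based theorems formalize.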



Related research

We give qualitative and quantitative improvements to theorems which enable significance testing in Markov Chains, with a particular eye toward the goal of enabling strong, interpretable, and statistically rigorous claims of political gerrymandering. Our results can be used to demonstrate at a desired significance level that a given Markov Chain state (e.g., a districting) is extremely unusual (rather than just atypical) with respect to the fragility of its characteristics in the chain. We also provide theorems specialized to leverage quantitative improvements when there is a product structure in the underlying probability space, as can occur due to geographical constraints on districtings.
We consider the connections among "clumped" residual allocation models (RAMs), a general class of stick-breaking processes including Dirichlet processes, and the occupation laws of certain discrete space time-inhomogeneous Markov chains related to simulated annealing and other applications. An intermediate structure is introduced in a given RAM, where proportions between successive indices in a list are added or clumped together to form another RAM. In particular, when the initial RAM is a Griffiths-Engen-McCloskey (GEM) sequence and the indices are given by the random times that an auxiliary Markov chain jumps away from its current state, the joint law of the intermediate RAM and the locations visited in the sojourns is given in terms of a "disordered" GEM sequence and an induced Markov chain. Through this joint law, we identify a large class of stick-breaking processes as the limits of empirical occupation measures for associated time-inhomogeneous Markov chains.
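As a rough illustration of the stick-breaking construction underlying a GEM sequence, one can sample the first few proportions directly; the parameter value and code below are our own hypothetical sketch, not an example from the paper:

```python
import numpy as np

def gem_sample(theta, k, rng):
    """Sample the first k proportions of a GEM(theta) sequence by
    stick breaking: V_i ~ Beta(1, theta), and the i-th proportion is
    V_i times the stick length remaining after the first i-1 breaks."""
    v = rng.beta(1.0, theta, size=k)
    remaining = np.concatenate(([1.0], np.cumprod(1.0 - v)[:-1]))
    return v * remaining

rng = np.random.default_rng(42)
p = gem_sample(theta=2.0, k=20, rng=rng)
print(p.sum())  # close to, but strictly below, 1
```

The truncated proportions are positive and sum to less than one; the remaining mass is spread over the infinitely many unsampled pieces of the stick.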
We establish a quantitative version of the Tracy--Widom law for the largest eigenvalue of high dimensional sample covariance matrices. To be precise, we show that the fluctuations of the largest eigenvalue of a sample covariance matrix $X^*X$ converge to its Tracy--Widom limit at a rate nearly $N^{-1/3}$, where $X$ is an $M \times N$ random matrix whose entries are independent real or complex random variables, assuming that both $M$ and $N$ tend to infinity at a constant rate. This result improves the previous estimate $N^{-2/9}$ obtained by Wang [73]. Our proof relies on a Green function comparison method [27] using iterative cumulant expansions, the local laws for the Green function and asymptotic properties of the correlation kernel of the white Wishart ensemble.
We study the rate of convergence of the Mallows distance between the empirical distribution of a sample and the underlying population. The surprising feature of our results is that the convergence rate is slower in the discrete case than in the absolutely continuous setting. We show how the hazard function plays a significant role in these calculations. As an application, we recall that the quantity studied provides an upper bound on the distance between the bootstrap distribution of a sample mean and its true sampling distribution. Moreover, the convenient properties of the Mallows metric yield a straightforward lower bound, and therefore a relatively precise description of the asymptotic performance of the bootstrap in this problem.
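The Mallows-1 distance coincides with the Wasserstein-1 distance, which SciPy computes directly between weighted samples. The sketch below contrasts a discrete and a continuous population; the distributions and sample sizes are hypothetical illustrations of ours, not the paper's examples:

```python
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(7)
n = 1000

# Discrete population: a fair coin on {0, 1}. Here the Mallows-1
# distance reduces to |p_hat - 1/2|, where p_hat is the sample mean.
coin_sample = rng.integers(0, 2, size=n)
d_discrete = wasserstein_distance(coin_sample, [0, 1],
                                  v_weights=[0.5, 0.5])

# Continuous population: Uniform(0, 1), approximated by a fine grid.
unif_sample = rng.uniform(size=n)
grid = np.linspace(0.0, 1.0, 10_001)
d_continuous = wasserstein_distance(unif_sample, grid)

print(d_discrete, d_continuous)
```

Repeating this over many sample sizes is a simple way to inspect empirically how the distance shrinks with n in each setting.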
We analyze the convergence properties of the Wang-Landau algorithm. This sampling method belongs to the general class of adaptive importance sampling strategies which use the free energy along a chosen reaction coordinate as a bias. Such algorithms are very helpful to enhance the sampling properties of Markov Chain Monte Carlo algorithms, when the dynamics is metastable. We prove the convergence of the Wang-Landau algorithm and an associated central limit theorem.
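A minimal sketch of the flat-histogram Wang-Landau iteration on a toy discrete energy landscape may help fix ideas; the landscape, flatness threshold, and stopping constants below are our own assumptions, not the setting analyzed in the paper:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy landscape: 8 states with integer energy levels in {0, 1, 2}.
# True level counts are [2, 4, 2].
energies = np.array([0, 1, 2, 1, 0, 2, 1, 1])
n_levels = 3

log_g = np.zeros(n_levels)   # running estimate of log density of states
hist = np.zeros(n_levels)    # visit histogram for the flatness check
log_f = 1.0                  # modification factor, halved when flat
state = 0

for sweep in range(200_000):
    prop = rng.integers(len(energies))       # propose a uniform random state
    e_old, e_new = energies[state], energies[prop]
    # Accept with probability min(1, g(E_old) / g(E_new)),
    # which biases the walk toward rarely visited energy levels.
    if np.log(rng.random()) < log_g[e_old] - log_g[e_new]:
        state = prop
    log_g[energies[state]] += log_f
    hist[energies[state]] += 1
    # Flat-histogram check (every 1000 sweeps): halve the factor and reset.
    if sweep % 1000 == 0 and hist.min() > 0.8 * hist.mean():
        log_f /= 2.0
        hist[:] = 0.0
    if log_f < 1e-6:
        break

# The normalized estimate should approximate the true level
# proportions [2, 4, 2] / 8 = [0.25, 0.5, 0.25].
g = np.exp(log_g - log_g.max())
print(g / g.sum())
```

The steadily shrinking modification factor is the adaptive ingredient whose vanishing underlies convergence proofs for this class of algorithms.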