Let $0<\alpha<1/2$. We show that the mixing time of a continuous-time reversible Markov chain on a finite state space is about as large as the largest expected hitting time of a subset of stationary measure at least $\alpha$ of the state space. Suitably modified results hold in discrete time and/or without the reversibility assumption. The key technical tool is a construction of a random set $A$ such that the hitting time of $A$ is both light-tailed and a stationary time for the chain. We note that essentially the same results were obtained independently by Peres and Sousi [arXiv:1108.0133].
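The comparability of the mixing time and the largest large-set hitting time can be probed numerically on a toy chain. The sketch below (assuming NumPy; the specific birth-death chain, the value $\alpha = 1/4$, and the restriction to interval sets are illustrative choices, not part of the result) computes both quantities for a small reversible chain:

```python
import numpy as np

# Lazy biased birth-death chain on {0,...,5}; birth-death chains
# are reversible with respect to their stationary distribution.
n = 6
P = np.zeros((n, n))
for i in range(n):
    if i > 0:
        P[i, i - 1] = 0.3
    if i < n - 1:
        P[i, i + 1] = 0.2
    P[i, i] = 1.0 - P[i].sum()

# Stationary distribution: left eigenvector for eigenvalue 1.
vals, vecs = np.linalg.eig(P.T)
pi = np.real(vecs[:, np.argmax(np.real(vals))])
pi /= pi.sum()

alpha = 0.25

def max_expected_hitting(A):
    """max_x E_x[T_A]: solve (I - P_{A^c,A^c}) h = 1 on the complement of A."""
    comp = [i for i in range(n) if i not in A]
    h = np.linalg.solve(np.eye(len(comp)) - P[np.ix_(comp, comp)],
                        np.ones(len(comp)))
    return h.max()

# Largest expected hitting time over sets of stationary mass >= alpha
# (only interval sets {0..j} and {j..n-1} are scanned, to keep it short).
best = 0.0
for j in range(n):
    for A in ({*range(j + 1)}, {*range(j, n)}):
        if len(A) < n and pi[list(A)].sum() >= alpha:
            best = max(best, max_expected_hitting(A))

# Total-variation mixing time t_mix(1/4) from the worst starting state.
t, Q = 0, np.eye(n)
while 0.5 * np.abs(Q - pi).sum(axis=1).max() > 0.25:
    Q = Q @ P
    t += 1
print(t, best)
```

Both quantities are finite and positive, and on examples like this one they are of the same order, which is the content (up to universal constants) of the abstract's result.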
For a finite state Markov process and a finite collection $\{\Gamma_k,\ k \in K\}$ of subsets of its state space, let $\tau_k$ be the first time the process visits the set $\Gamma_k$. We derive explicit/recursive formulas for the joint density and tail probabilities of the stopping times $\{\tau_k,\ k \in K\}$. The formulas are natural generalizations of those associated with the jump times of a simple Poisson process. We give a numerical example and indicate the relevance of our results to credit risk modeling.
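Joint tail probabilities of this kind can be sanity-checked by simulation. A minimal sketch (the three-state chain and the target sets $\Gamma_1 = \{1\}$, $\Gamma_2 = \{2\}$ below are hypothetical, and Monte Carlo only approximates the exact/recursive formulas of the abstract):

```python
import random

# Toy 3-state chain; Gamma_1 = {1}, Gamma_2 = {2} are the target sets.
P = [[0.5, 0.3, 0.2],
     [0.2, 0.6, 0.2],
     [0.3, 0.3, 0.4]]

def sample_hitting_times(start, targets, horizon, rng):
    """Return the first hitting time of each target set (capped at horizon)."""
    taus = {k: horizon for k in targets}
    state = start
    for t in range(1, horizon + 1):
        state = rng.choices(range(3), weights=P[state])[0]
        for k, G in targets.items():
            if taus[k] == horizon and state in G:
                taus[k] = t
    return taus

rng = random.Random(1)
targets = {1: {1}, 2: {2}}
n_runs = 2000
hits = [sample_hitting_times(0, targets, 200, rng) for _ in range(n_runs)]
joint_tail = sum(1 for h in hits if h[1] > 2 and h[2] > 2) / n_runs
print(joint_tail)
```

Started from state 0, the event $\{\tau_1 > 2,\ \tau_2 > 2\}$ requires the chain to remain in state 0 at times 1 and 2, so the estimate should be near $0.5^2 = 0.25$.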
We generalize the notion of strong stationary time and we give a representation formula for the hitting time to a target set in the general case of non-reversible Markov processes.
In the setting of non-reversible Markov chains on finite or countable state space, exact results on the distribution of the first hitting time to a given set $G$ are obtained. A new notion of strong metastability time is introduced to describe the local relaxation time. This time is defined via a generalization of the strong stationary time to a conditionally strong quasi-stationary time (CSQST). Rarity of the target set $G$ is not required and the initial distribution can be completely general. The results clarify the role played by the initial distribution on the exponential law; they are used to give a general notion of metastability and to discuss the relation between the exponential distribution of the first hitting time and metastability.
For the last ten years, almost every theoretical result concerning the expected run time of a randomized search heuristic has used drift theory, making it arguably the most important tool in this domain. Its success is due to its ease of use and its powerful result: drift theory allows the user to derive bounds on the expected first-hitting time of a random process by bounding expected local changes of the process -- the drift. This is usually far easier than bounding the expected first-hitting time directly. Due to the widespread use of drift theory, it is of utmost importance to have the best drift theorems possible. We improve the fundamental additive, multiplicative, and variable drift theorems by stating them in a form as general as possible and providing examples of why the restrictions we keep are still necessary. Our additive drift theorem for upper bounds only requires the process to be nonnegative, that is, we remove unnecessary restrictions like a finite, discrete, or bounded search space. As corollaries, the same is true for our upper bounds in the case of variable and multiplicative drift.
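The additive drift theorem in its simplest form states that if a nonnegative process $X_t$ satisfies $E[X_t - X_{t+1} \mid X_t = x] \ge \delta$ for all $x > 0$, then the expected first-hitting time of 0 is at most $X_0/\delta$. A minimal sketch (the biased walk below is a hypothetical example, chosen so that the drift is exactly $\delta = 2p - 1$):

```python
import random

def additive_drift_walk(x0, p, rng):
    """Biased +/-1 walk on the nonnegative integers started at x0;
    each step moves down w.p. p and up w.p. 1-p, so the drift toward
    0 is delta = 2p - 1 > 0. Returns the first hitting time of 0."""
    x, t = x0, 0
    while x > 0:
        x += -1 if rng.random() < p else 1
        t += 1
    return t

rng = random.Random(0)
x0, p = 10, 0.7
delta = 2 * p - 1                 # pointwise lower bound on the drift
bound = x0 / delta                # additive drift theorem: E[T] <= x0/delta
runs = [additive_drift_walk(x0, p, rng) for _ in range(5000)]
mean_T = sum(runs) / len(runs)
print(mean_T, bound)
```

For this walk the drift condition holds with equality, so the bound $x_0/\delta = 25$ is tight and the empirical mean should land close to it; for processes whose drift merely dominates $\delta$, the theorem still gives the same upper bound.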
The use of Lyapunov conditions for proving functional inequalities was initiated in [5]. It was shown in [4, 30] that there is an equivalence between a Poincaré inequality, the existence of some Lyapunov function and the exponential integrability of hitting times. In the present paper, we close the scheme of the interplay between Lyapunov conditions and functional inequalities by (i) showing that strong functional inequalities are equivalent to Lyapunov type conditions, and (ii) showing that these Lyapunov conditions are characterized by the finiteness of generalized exponential moments of hitting times. We also give some complementary results concerning the link between Lyapunov conditions and integrability properties of the invariant probability measure, and hence transportation inequalities, and we show that some unbounded Lyapunov conditions can lead to uniform ergodicity and the coming down from infinity property.