
Metastable Markov chains

Added by: Claudio Landim
Publication date: 2018
Field: Physics
Language: English
Authors: C. Landim





We review recent results on the metastable behavior of continuous-time Markov chains derived through the characterization of Markov chains as unique solutions of martingale problems.
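For orientation, recall the standard martingale-problem characterization (a textbook formulation, not a statement specific to this review): a continuous-time chain $(X_t)$ with jump rates $R(x,y)$ and generator $(Lf)(x) = \sum_y R(x,y)\,[f(y)-f(x)]$ solves the martingale problem for $L$ when, for every bounded function $f$ in the domain of $L$,

$$ M^f_t \;=\; f(X_t) - f(X_0) - \int_0^t (Lf)(X_s)\,ds $$

is a martingale; uniqueness of this solution is what identifies the law of the chain.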

Related research

We consider a continuous-time Markov chain on a countable state space. We prove a joint large deviation principle (LDP) for the empirical measure and the empirical current in the limit of a large time interval. The proof is based on results on the joint large deviations of the empirical measure and flow obtained in [BFG]. By improving those results we also show, under additional assumptions, that the LDP holds with the strong $L^1$ topology on the space of currents. We deduce a general version of the Gallavotti-Cohen (GC) symmetry for the current field and show that it implies the so-called fluctuation theorem for the GC functional. We also analyze the large deviation properties of generalized empirical currents associated to a fundamental basis in the cycle space, which, as we show, are given by the first class homological coefficients in the graph underlying the Markov chain. Finally, we discuss some examples in detail.
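
As an illustration of the objects appearing above, the following minimal Python sketch (with a made-up 3-state chain and rates; none of this is taken from the paper) simulates a continuous-time Markov chain by drawing exponential holding times and accumulates the empirical measure and the empirical current over a window [0, T]:

import numpy as np

# Hypothetical 3-state chain: R[x, y] is the jump rate from x to y
# (illustrative values only; diagonal entries are unused).
R = np.array([[0.0, 2.0, 1.0],
              [1.0, 0.0, 3.0],
              [0.5, 2.0, 0.0]])

def simulate(R, T, x0=0, seed=None):
    """Run the chain up to time T; return the empirical measure and current."""
    rng = np.random.default_rng(seed)
    n = R.shape[0]
    measure = np.zeros(n)      # time spent in each state, normalized by T below
    flow = np.zeros((n, n))    # number of jumps x -> y, normalized by T below
    t, x = 0.0, x0
    while True:
        rate = R[x].sum()
        wait = rng.exponential(1.0 / rate)
        if t + wait >= T:
            measure[x] += T - t
            break
        measure[x] += wait
        t += wait
        y = rng.choice(n, p=R[x] / rate)
        flow[x, y] += 1.0
        x = y
    measure /= T
    flow /= T
    current = flow - flow.T    # empirical current: antisymmetric part of the flow
    return measure, current

mu, J = simulate(R, T=10_000.0, seed=1)
print("empirical measure:", mu)
print("empirical current J(0,1):", J[0, 1])

For large T the empirical measure concentrates around the stationary distribution, and it is the joint fluctuations of the pair (measure, current) that the rate function and the Gallavotti-Cohen symmetry describe.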
We recover the Donsker-Varadhan large deviation principle (LDP) for the empirical measure of a continuous-time Markov chain on a countable (finite or infinite) state space from the joint LDP for the empirical measure and the empirical flow proved in [2].
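
For reference, the Donsker-Varadhan rate function being recovered has the classical variational form (a standard statement, not quoted from [2]):

$$ I(\mu) \;=\; \sup_{u>0}\; -\int \frac{Lu}{u}\, d\mu , $$

and passing from the joint level to the measure alone is a contraction, $I(\mu) = \inf_Q \widetilde{I}(\mu, Q)$, with the infimum taken over empirical flows $Q$ compatible with $\mu$.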
Computing the stationary distributions of a continuous-time Markov chain (CTMC) involves solving a set of linear equations. In most cases of interest, the number of equations is infinite or too large, and the equations cannot be solved analytically or numerically. Several approximation schemes overcome this issue by truncating the state space to a manageable size. In this review, we first give a comprehensive theoretical account of the stationary distributions and their relation to the long-term behaviour of CTMCs that is readily accessible to non-experts and free of the irreducibility assumptions made in standard texts. We then review truncation-based approximation schemes for CTMCs with infinite state spaces, paying particular attention to the schemes' convergence and the errors they introduce, and we illustrate their performance with an example of a stochastic reaction network of relevance in biology and chemistry. We conclude by discussing computational trade-offs associated with error control and several open questions.
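
As a concrete, minimal illustration of the truncation idea (a hypothetical birth-death chain, not one of the specific schemes reviewed), one can restrict the generator to a finite window {0, ..., n_max} and solve the finite linear system $\pi Q = 0$, $\sum_x \pi_x = 1$:

import numpy as np

# Hypothetical birth-death chain on {0, 1, 2, ...} with birth rate lam and
# death rate mu (illustrative values; the stationary law is geometric when lam < mu).
lam, mu = 1.0, 2.0

def truncated_stationary(n_max):
    """Approximate the stationary distribution on the truncated space {0, ..., n_max}."""
    n = n_max + 1
    Q = np.zeros((n, n))
    for i in range(n):
        if i + 1 < n:
            Q[i, i + 1] = lam          # birth i -> i + 1
        if i > 0:
            Q[i, i - 1] = mu           # death i -> i - 1
        Q[i, i] = -Q[i].sum()          # rows of a generator sum to zero
    # Solve pi Q = 0 together with the normalization sum(pi) = 1.
    A = np.vstack([Q.T, np.ones(n)])
    b = np.zeros(n + 1)
    b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

for n_max in (5, 10, 20):
    print(n_max, truncated_stationary(n_max)[:4])   # compare with (1 - rho) * rho**k, rho = lam / mu

Here the rate out of the boundary state n_max is simply dropped, which keeps the truncated matrix a proper generator; other common choices redirect the lost rate back into the truncated set, and the choices differ in the error they introduce.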
When dealing with finite Markov chains in discrete time, the focus often lies on convergence behavior: one tries to make different copies of the chain meet as fast as possible and then stick together. There is, however, a very peculiar kind of discrete finite Markov chain for which two copies started in different states can be coupled to meet almost surely in finite time, yet their distributions keep a total variation distance bounded away from 0, even in the limit as time goes to infinity. We show that the supremum of the total variation distance kept in this context is $\frac{1}{2}$.
We introduce the space of virtual Markov chains (VMCs) as a projective limit of the spaces of all finite state space Markov chains (MCs), in the same way that the space of virtual permutations is the projective limit of the spaces of all permutations of finite sets. We introduce the notions of a virtual initial distribution (VID) and a virtual transition matrix (VTM), and we show that the law of any VMC is uniquely characterized by a pair consisting of a VID and a VTM that satisfy a certain compatibility condition. Lastly, we study various properties of compact convex sets associated with the theory of VMCs, including the fact that the Birkhoff-von Neumann theorem fails in the virtual setting.
