
The exit time finite state projection scheme: bounding exit distributions and occupation measures of continuous-time Markov chains

Posted by Juan Kuntz
Publication date: 2018
Research field: Physics
Research language: English





We introduce the exit time finite state projection (ETFSP) scheme, a truncation-based method that yields approximations to the exit distribution and occupation measure associated with the time of exit from a domain (i.e., the time of first passage to the complement of the domain) of time-homogeneous continuous-time Markov chains. We prove that: (i) the computed approximations bound the measures from below; (ii) the total variation distances between the approximations and the measures decrease monotonically as states are added to the truncation; and (iii) the scheme converges, in the sense that, as the truncation tends to the entire state space, the total variation distances tend to zero. Furthermore, we give a computable bound on the total variation distance between the exit distribution and its approximation, and we delineate the cases in which the bound is sharp. We also revisit the related finite state projection scheme and give a comprehensive account of its theoretical properties. We demonstrate the use of the ETFSP scheme by applying it to two biological examples: the computation of the first passage time associated with the expression of a gene, and the fixation times of competing species subject to demographic noise.
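To make the truncation idea concrete, the following Python sketch lower-bounds the exit-time distribution of a birth-death process in the spirit of the scheme described above. It is an illustrative toy, not the authors' ETFSP implementation: the generator is restricted to a finite truncation, direct exits from the domain are routed into an absorbing state, and probability mass that leaves the truncation without exiting the domain is simply discarded, which is what makes the result a lower bound. The rates, the truncation size K, and the function name are assumptions chosen for this example.

```python
import numpy as np
from scipy.linalg import expm

def exit_cdf_lower_bound(A, exit_rates, p0, t):
    """Lower bound on P(exit time <= t) for a CTMC truncated to K states.

    A          : (K, K) sub-generator restricted to the truncation; each
                 diagonal entry accounts for *all* outflow from that state.
    exit_rates : (K,) rates from each truncation state directly out of the domain.
    p0         : (K,) initial distribution supported on the truncation.
    """
    K = len(p0)
    # Augment with one absorbing state that collects direct exits from the
    # truncation; mass escaping to un-truncated domain states is simply lost,
    # so the absorbed probability is a lower bound on the true exit CDF.
    G = np.zeros((K + 1, K + 1))
    G[:K, :K] = A
    G[:K, K] = exit_rates
    q0 = np.append(p0, 0.0)
    return (q0 @ expm(G * t))[K]

# Toy example: extinction time of a linear birth-death process with rates
# birth(i) = b*i and death(i) = d*i, domain {1, 2, ...}, exit = hitting 0,
# truncation {1, ..., K}.
b, d, K = 1.0, 2.0, 50
A = np.zeros((K, K))
for idx in range(K):
    i = idx + 1                      # population size represented by this row
    if idx + 1 < K:
        A[idx, idx + 1] = b * i      # birth, stays inside the truncation
    if idx > 0:
        A[idx, idx - 1] = d * i      # death, stays inside the truncation
    A[idx, idx] = -(b + d) * i       # total outflow (incl. exit and overflow)
exit_rates = np.zeros(K)
exit_rates[0] = d * 1.0              # population 1 -> 0 leaves the domain
p0 = np.zeros(K)
p0[4] = 1.0                          # start with 5 individuals
print(exit_cdf_lower_bound(A, exit_rates, p0, t=10.0))
```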




Read also

Computing the stationary distributions of a continuous-time Markov chain (CTMC) involves solving a set of linear equations. In most cases of interest, the number of equations is infinite or too large, and the equations cannot be solved analytically or numerically. Several approximation schemes overcome this issue by truncating the state space to a manageable size (a minimal sketch of this idea follows the related abstracts below). In this review, we first give a comprehensive theoretical account of the stationary distributions and their relation to the long-term behaviour of CTMCs that is readily accessible to non-experts and free of the irreducibility assumptions made in standard texts. We then review truncation-based approximation schemes for CTMCs with infinite state spaces, paying particular attention to the schemes' convergence and the errors they introduce, and we illustrate their performance with an example of a stochastic reaction network of relevance in biology and chemistry. We conclude by discussing computational trade-offs associated with error control and several open questions.
This paper investigates tail asymptotics of stationary distributions and quasi-stationary distributions of continuous-time Markov chains on a subset of the non-negative integers. A new identity for stationary measures is established. In particular, for continuous-time Markov chains with asymptotic power-law transition rates, tail asymptotics for stationary distributions are classified into three types by three easily computable parameters: (i) Conway-Maxwell-Poisson distributions (light-tailed), (ii) exponential-tailed distributions, and (iii) heavy-tailed distributions. Similar results are derived for quasi-stationary distributions. The approach used to establish the tail asymptotics differs from the classical semimartingale approach. We apply our results to biochemical reaction networks (modeled as continuous-time Markov chains), a general single-cell stochastic gene expression model, an extended class of branching processes, and stochastic population processes with bursty reproduction, none of which are birth-death processes.
Continuous-time Markov chains are mathematical models that are used to describe the state-evolution of dynamical systems under stochastic uncertainty, and have found widespread applications in various fields. In order to make these models computationally tractable, they rely on a number of assumptions that may not be realistic for the domain of application; in particular, the ability to provide exact numerical parameter assessments, and the applicability of time-homogeneity and the eponymous Markov property. In this work, we extend these models to imprecise continuous-time Markov chains (ICTMCs), which are a robust generalisation that relaxes these assumptions while remaining computationally tractable. More technically, an ICTMC is a set of precise continuous-time finite-state stochastic processes, and rather than computing expected values of functions, we seek to compute lower expectations, which are tight lower bounds on the expectations that correspond to such a set of precise models. Note that, in contrast to e.g. Bayesian methods, all the elements of such a set are treated on equal grounds; we do not consider a distribution over this set. The first part of this paper develops a formalism for describing continuous-time finite-state stochastic processes that does not require the aforementioned simplifying assumptions. Next, this formalism is used to characterise ICTMCs and to investigate their properties. The concept of lower expectation is then given an alternative operator-theoretic characterisation, by means of a lower transition operator, and the properties of this operator are investigated as well. Finally, we use this lower transition operator to derive tractable algorithms (with polynomial runtime complexity w.r.t. the maximum numerical error) for computing the lower expectation of functions that depend on the state at any finite number of time points.
In this paper we characterize the distribution of the first exit time from an arbitrary open set for a class of semi-Markov processes obtained as time-changed Markov processes. We estimate the asymptotic behaviour of the survival function (for large $t$) and of the distribution function (for small $t$), and we provide some conditions for absolute continuity. We have been inspired by a problem in neurophysiology, and our results are particularly useful in this field, specifically for the so-called Leaky Integrate-and-Fire (LIF) models: the use of semi-Markov processes in these models appears to be realistic in several respects, e.g., it makes the inter-spike times random variables with infinite expectation, which is a desirable property. Hence, after the theoretical part, we provide a LIF model based on semi-Markov processes.
We recover the Donsker-Varadhan large deviations principle (LDP) for the empirical measure of a continuous time Markov chain on a countable (finite or infinite) state space from the joint LDP for the empirical measure and the empirical flow proved in [2].
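The review abstract above on stationary distributions describes truncation-based approximation schemes. As a minimal illustration of what such a scheme can look like in practice (a toy sketch, not necessarily one of the specific schemes reviewed), the snippet below approximates the stationary distribution of an M/M/1 queue by keeping only the first K states, dropping transitions that leave the truncation, and solving the resulting finite linear system; the rates, K, and all names are assumptions chosen for this example.

```python
import numpy as np

# M/M/1 queue: arrival rate lam, service rate mu, state space {0, 1, 2, ...}.
# Keep only states 0..K-1 and drop transitions that leave the truncation.
lam, mu, K = 1.0, 2.0, 100

Q = np.zeros((K, K))
for i in range(K):
    if i + 1 < K:
        Q[i, i + 1] = lam          # arrival i -> i+1 (kept only inside the truncation)
    if i > 0:
        Q[i, i - 1] = mu           # service completion i -> i-1
    Q[i, i] = -Q[i].sum()          # rows sum to zero within the truncation

# Solve pi Q = 0 together with sum(pi) = 1 by replacing one (redundant) equation.
M = Q.T.copy()
M[-1, :] = 1.0
rhs = np.zeros(K)
rhs[-1] = 1.0
pi = np.linalg.solve(M, rhs)

# For lam < mu the exact stationary distribution is geometric with ratio lam/mu.
exact = (1 - lam / mu) * (lam / mu) ** np.arange(K)
print(np.abs(pi - exact).max())
```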