
Markov processes conditioned on their location at large exponential times

Posted by Alexandru Hening
Publication date: 2016
Research field:
Research language: English





Suppose that $(X_t)_{t \ge 0}$ is a one-dimensional Brownian motion with negative drift $-\mu$. It is possible to make sense of conditioning this process to be in the state $0$ at an independent exponential random time, and if we kill the conditioned process at the exponential time, the resulting process is Markov. If we let the rate parameter of the random time go to $0$, then the limit of the killed Markov process evolves like $X$ conditioned to hit $0$, after which time it behaves as $X$ killed at the last time $X$ visits $0$. Equivalently, the limit process has the dynamics of the killed bang--bang Brownian motion that evolves like Brownian motion with positive drift $+\mu$ when it is negative, like Brownian motion with negative drift $-\mu$ when it is positive, and is killed according to the local time spent at $0$. An extension of this result holds in great generality for Borel right processes conditioned to be in some state $a$ at an exponential random time, at which time they are killed. Our proofs involve understanding the Campbell measures associated with local times, the use of excursion theory, and the development of a suitable analogue of the bang--bang construction for general Markov processes. As an example, we consider the special case in which the transient Borel right process is a one-dimensional diffusion. Characterizing the limiting conditioned and killed process via its infinitesimal generator leads to an investigation of the $h$-transforms of transient one-dimensional diffusion processes that goes beyond what is known and is of independent interest.
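For intuition, the sketch below simulates the limiting dynamics described in the abstract: a bang--bang diffusion with drift $+\mu$ below $0$ and $-\mu$ above $0$, killed once its local time at $0$ exceeds an independent exponential level. The Euler-Maruyama step, the occupation-time approximation of the local time, and the killing rate kappa are illustrative assumptions made for this example, not quantities fixed by the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def bang_bang_killed(mu=1.0, kappa=1.0, dt=1e-4, eps=1e-2, t_max=50.0):
    """Euler-Maruyama sketch of the bang-bang diffusion: drift +mu when the
    path is negative, -mu when it is positive, killed once the (approximate)
    local time at 0 exceeds an independent Exp(kappa) level.  All parameter
    values are illustrative, not taken from the paper."""
    x, t, local_time = 0.0, 0.0, 0.0
    kill_level = rng.exponential(1.0 / kappa)   # exponential level in local-time scale
    while t < t_max:
        drift = mu if x < 0 else -mu            # bang-bang drift pushing toward 0
        x += drift * dt + np.sqrt(dt) * rng.standard_normal()
        t += dt
        if abs(x) < eps:                        # occupation-time approximation of
            local_time += dt / (2 * eps)        # the local time at 0
        if local_time >= kill_level:            # the process is killed here
            return t, local_time
    return t, local_time                        # not killed before t_max

t_kill, L0 = bang_bang_killed()
print(f"killed at time {t_kill:.2f}, local time at 0 approximately {L0:.2f}")
```

The local time at $0$ is approximated by the normalised occupation time of the window $(-\varepsilon,\varepsilon)$, which is a standard discretisation device; it is only meant to convey the killing mechanism, not to reproduce the paper's construction.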


Read also

Semi-Markov processes are a generalization of Markov processes in which the exponential distribution of the time intervals is replaced with an arbitrary distribution. This paper provides an integro-differential form of Kolmogorov's backward equations for a large class of homogeneous semi-Markov processes, having the form of an abstract Volterra integro-differential equation. An equivalent evolutionary (differential) form of the equations is also provided. Fractional equations in the time variable are a particular case of our analysis. Weak limits of semi-Markov processes are also considered, and their corresponding integro-differential Kolmogorov equations are identified.
Sean D. Lawley (2019)
The time it takes the fastest searcher out of $N \gg 1$ searchers to find a target determines the timescale of many physical, chemical, and biological processes. This time is called an extreme first passage time (FPT) and is typically much faster than the FPT of a single searcher. Extreme FPTs of diffusion have been studied for decades, but little is known for other types of stochastic processes. In this paper, we study the distribution of extreme FPTs of piecewise deterministic Markov processes (PDMPs). PDMPs are a broad class of stochastic processes that evolve deterministically between random events. Using classical extreme value theory, we prove general theorems which yield the distribution and moments of extreme FPTs in the limit of many searchers based on the short time distribution of the FPT of a single searcher. We then apply these theorems to some canonical PDMPs, including run and tumble searchers in one, two, and three space dimensions. We discuss our results in the context of some biological systems and show how our approach accounts for an unphysical property of diffusion which can be problematic for extreme statistics.
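As a rough illustration of the extreme-FPT setup in this abstract, the sketch below estimates, by Monte Carlo, the fastest passage time among $N$ independent run-and-tumble searchers on the line, each started at the origin and searching for a target at unit distance. The speed, tumbling rate, target distance, and time discretisation are assumptions made for the example, not parameters from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def extreme_fpt(N=100, v=1.0, lam=1.0, target=1.0, dt=1e-3, t_max=100.0):
    """Fastest first passage time among N independent 1D run-and-tumble
    searchers started at the origin with random initial direction; the
    simulation stops as soon as any searcher reaches `target`.
    All parameter values are illustrative."""
    x = np.zeros(N)
    direction = rng.choice([-1.0, 1.0], size=N)
    t = 0.0
    while t < t_max:
        tumbles = rng.random(N) < lam * dt      # each searcher tumbles at rate lam
        direction[tumbles] *= -1.0
        x += direction * v * dt                 # deterministic run between tumbles
        t += dt
        if np.any(x >= target):                 # first searcher to reach the target
            return t
    return np.inf                               # censored: no passage by t_max

samples = [extreme_fpt() for _ in range(200)]
print(f"Monte Carlo mean of the extreme FPT (N=100): {np.mean(samples):.3f}")
```

Simulating all $N$ searchers in parallel and stopping at the first hit is what makes the estimate cheap: only the fastest trajectory ever needs to be followed to completion, which mirrors why extreme FPTs are so much shorter than a single searcher's FPT.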
We consider a general piecewise deterministic Markov process (PDMP) $X=\{X_t\}_{t\geqslant 0}$ with measure-valued generator $\mathcal{A}$, for which the conditional distribution function of the inter-occurrence time is not necessarily absolutely continuous. A general form of the exponential martingales is presented as $$M^f_t=\frac{f(X_t)}{f(X_0)}\left[\mathrm{Sexp}\left(\int_{(0,t]}\frac{\mathrm{d}L(\mathcal{A}f)_s}{f(X_{s-})}\right)\right]^{-1}.$$ Using this exponential martingale as a likelihood ratio process, we define a new probability measure. It is shown that the original process remains a general PDMP under the new probability measure, and we identify the new measure-valued generator and its domain.
This paper describes the structure of solutions to Kolmogorov's equations for nonhomogeneous jump Markov processes and applications of these results to the control of jump stochastic systems. These equations were studied by Feller (1940), who clarified in 1945, in the errata to that paper, that some of its results covered only nonexplosive Markov processes. We present the results for possibly explosive Markov processes. The paper is based on the invited talk presented by the authors at the International Conference dedicated to the 200th anniversary of the birth of P. L. Chebyshev.
We construct a family of genealogy-valued Markov processes that are induced by a continuous-time Markov population process. We derive exact expressions for the likelihood of a given genealogy conditional on the history of the underlying population process. These lead to a version of the nonlinear filtering equation, which can be used to design efficient Monte Carlo inference algorithms. Existing full-information approaches for phylodynamic inference are special cases of the theory.