
Effective limit theorems for Markov chains with a spectral gap

Posted by Benoît Kloeckner
Publication date: 2017
Research language: English
Author: Benoît Kloeckner





Applying quantitative perturbation theory for linear operators, we prove non-asymptotic limit theorems for Markov chains whose transition kernel has a spectral gap in an arbitrary Banach algebra of functions X. The main results are concentration inequalities and Berry-Esseen bounds, obtained assuming neither reversibility nor a "warm start" hypothesis: the law of the first term of the chain can be arbitrary. The spectral gap hypothesis is basically a uniform X-ergodicity hypothesis, and when X consists of regular functions this is weaker than uniform ergodicity. We show on a few examples how the flexibility in the choice of function space can be used. The constants are completely explicit and reasonable enough to make the results usable in practice, notably in MCMC methods.

v2: Introduction rewritten; Section 3, which applies the main results to examples, improved (uniformly ergodic chains and Bernoulli convolutions have notably been added). Main results and their proofs are unchanged.
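As a rough finite-state illustration of the spectral gap hypothesis (a sketch only; the paper works in a general Banach algebra of functions, not the matrix setting below), the gap of a stochastic matrix can be read off its eigenvalues: it is one minus the second-largest eigenvalue modulus. The matrix `P` here is an arbitrary example, not taken from the paper.

```python
import numpy as np

def spectral_gap(P):
    """1 minus the second-largest eigenvalue modulus of a stochastic matrix P."""
    moduli = np.sort(np.abs(np.linalg.eigvals(P)))[::-1]
    return 1.0 - moduli[1]

# A simple two-state chain; eigenvalues are 1 and 0.7, so the gap is 0.3.
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
gap = spectral_gap(P)
```

A larger gap corresponds to faster mixing, which is what makes non-asymptotic bounds such as the concentration inequalities above quantitatively useful.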


Read also

We consider plain vanilla European options written on an underlying asset that follows a continuous-time semi-Markov multiplicative process. We derive a formula and a renewal-type equation for the martingale option price. In the case in which intertrade times follow the Mittag-Leffler distribution, under appropriate scaling, we prove that these option prices converge to the price of an option written on geometric Brownian motion time-changed with the inverse stable subordinator. For geometric Brownian motion time-changed with an inverse subordinator, in the more general case when the subordinator's Laplace exponent is a special Bernstein function, we derive a time-fractional generalization of the equation of Black and Scholes.
Our purpose is to prove a central limit theorem for countable nonhomogeneous Markov chains under the condition of uniform convergence of the transition probability matrices in the Cesàro sense. Furthermore, we obtain a corresponding moderate deviation theorem for countable nonhomogeneous Markov chains by the Gärtner-Ellis theorem and the exponential equivalence method.
We study the following learning problem with dependent data: observing a trajectory of length $n$ from a stationary Markov chain with $k$ states, the goal is to predict the next state. For $3 \leq k \leq O(\sqrt{n})$, using techniques from universal compression, the optimal prediction risk in Kullback-Leibler divergence is shown to be $\Theta(\frac{k^2}{n}\log \frac{n}{k^2})$, in contrast to the optimal rate of $\Theta(\frac{\log \log n}{n})$ for $k=2$ previously shown in Falahatgar et al., 2016. These rates, slower than the parametric rate of $O(\frac{k^2}{n})$, can be attributed to the memory in the data, as the spectral gap of the Markov chain can be arbitrarily small. To quantify the memory effect, we study irreducible reversible chains with a prescribed spectral gap. In addition to characterizing the optimal prediction risk for two states, we show that, as long as the spectral gap is not excessively small, the prediction risk in the Markov model is $O(\frac{k^2}{n})$, which coincides with that of an iid model with the same number of parameters.
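The prediction task in that abstract can be sketched very simply with an add-one (Laplace) smoothed estimate of the transition probabilities out of the last observed state. This is only an illustration of the problem setup, not the risk-optimal estimator analyzed in the abstract; the function name and example trajectory are ours.

```python
import numpy as np

def predict_next(trajectory, k):
    """Add-one-smoothed estimate of the next-state distribution of a k-state chain."""
    counts = np.ones((k, k))  # Laplace smoothing: no zero probabilities
    for s, t in zip(trajectory[:-1], trajectory[1:]):
        counts[s, t] += 1
    last = trajectory[-1]
    return counts[last] / counts[last].sum()

# Two observed transitions 0 -> 1 and none 0 -> 0, so after smoothing
# the predicted distribution from state 0 is [1/4, 3/4].
traj = [0, 1, 0, 1, 1, 0]
p = predict_next(traj, 2)
```

The risk of such plug-in estimators in KL divergence degrades when the chain mixes slowly, which is exactly the memory effect the spectral gap condition controls.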
This paper investigates tail asymptotics of stationary distributions and quasi-stationary distributions of continuous-time Markov chains on a subset of the non-negative integers. A new identity for stationary measures is established. In particular, for continuous-time Markov chains with asymptotic power-law transition rates, tail asymptotics for stationary distributions are classified into three types by three easily computable parameters: (i) Conway-Maxwell-Poisson distributions (light-tailed), (ii) exponential-tailed distributions, and (iii) heavy-tailed distributions. Similar results are derived for quasi-stationary distributions. The approach to establishing tail asymptotics is different from the classical semimartingale approach. We apply our results to biochemical reaction networks (modeled as continuous-time Markov chains), a general single-cell stochastic gene expression model, an extended class of branching processes, and stochastic population processes with bursty reproduction, none of which are birth-death processes.
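For readers unfamiliar with the objects involved, a stationary distribution $\pi$ of a continuous-time chain with generator $Q$ solves $\pi Q = 0$ with $\sum_i \pi_i = 1$. The following is a minimal finite-state sketch of that computation (the abstract concerns infinite state spaces, where the tail classification lives; the generator below is an arbitrary toy example).

```python
import numpy as np

def stationary(Q):
    """Solve pi Q = 0, sum(pi) = 1 for a finite generator matrix Q (rows sum to 0)."""
    n = Q.shape[0]
    A = np.vstack([Q.T, np.ones(n)])  # stationarity equations plus normalization
    b = np.zeros(n + 1)
    b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

# Toy 3-state generator; its stationary distribution is [0.2, 0.4, 0.4].
Q = np.array([[-1.0,  1.0,  0.0],
              [ 0.5, -1.5,  1.0],
              [ 0.0,  1.0, -1.0]])
pi = stationary(Q)
```

On an infinite state space, the question the paper addresses is how fast such a $\pi_i$ decays as $i \to \infty$, which the abstract classifies into the three tail types above.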
We consider a critical superprocess $\{X;\mathbf{P}_\mu\}$ with general spatial motion and spatially dependent stable branching mechanism with lowest stable index $\gamma_0 > 1$. We first show that, under some conditions, $\mathbf{P}_{\mu}(|X_t| \neq 0)$ converges to $0$ as $t\to \infty$ and is regularly varying with index $(\gamma_0-1)^{-1}$. Then we show that, for a large class of non-negative testing functions $f$, the distribution of $\{X_t(f);\mathbf{P}_\mu(\cdot\,|\,|X_t| \neq 0)\}$, after appropriate rescaling, converges weakly to a positive random variable $\mathbf{z}^{(\gamma_0-1)}$ with Laplace transform $E[e^{-u\mathbf{z}^{(\gamma_0-1)}}]=1-(1+u^{-(\gamma_0-1)})^{-1/(\gamma_0-1)}$.