
Concentration inequalities for Markov processes via coupling

Posted by: Chazottes
Publication date: 2010
Paper language: English





We obtain moment and Gaussian bounds for general Lipschitz functions evaluated along the sample path of a Markov chain. We treat Markov chains on general (possibly unbounded) state spaces via a coupling method. If the first moment of the coupling time exists, then we obtain a variance inequality. If a moment of order 1+epsilon of the coupling time exists, then depending on the behavior of the stationary distribution, we obtain higher moment bounds. This immediately implies polynomial concentration inequalities. In the case that a moment of order 1+epsilon is finite uniformly in the starting point of the coupling, we obtain a Gaussian bound. We illustrate the general results with house of cards processes, in which both uniform and non-uniform behavior of moments of the coupling time can occur.
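As a rough illustration of the kind of bound obtained (a generic sketch of the typical form such coupling-based results take, not the paper's exact statement or constants), for a function $f$ of the sample path that is Lipschitz in each coordinate with constants $\delta_1,\dots,\delta_n$, one typically gets

$$\mathrm{Var}\big(f(X_1,\dots,X_n)\big) \;\le\; C \sum_{i=1}^{n}\delta_i^2, \qquad \mathbb{P}\big(|f(X_1,\dots,X_n)-\mathbb{E}f| \ge t\big) \;\le\; 2\exp\!\Big(-\frac{t^2}{2C\sum_{i=1}^{n}\delta_i^2}\Big),$$

where the constant $C$ is governed by the assumed moments of the coupling time; the Gaussian form corresponds to the uniform moment condition described above.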




Read also

Information-theory based variational principles have proven effective at providing scalable uncertainty quantification (i.e. robustness) bounds for quantities of interest in the presence of nonparametric model-form uncertainty. In this work, we combine such variational formulas with functional inequalities (Poincaré, log-Sobolev, Liapunov functions) to derive explicit uncertainty quantification bounds for time-averaged observables, comparing a Markov process to a second (not necessarily Markov) process. These bounds are well-behaved in the infinite-time limit and apply to steady-states of both discrete and continuous-time Markov processes.
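For orientation (a standard formulation given here as background, not quoted from the paper), variational bounds of this type are built on the Gibbs/Donsker-Varadhan identity $\log \mathbb{E}_P[e^{f}] = \sup_{Q}\{\mathbb{E}_Q[f] - R(Q\|P)\}$, which gives, for an observable $f$ and any $c>0$,

$$\mathbb{E}_Q[f] - \mathbb{E}_P[f] \;\le\; \frac{1}{c}\,\log \mathbb{E}_P\!\left[e^{c\,(f-\mathbb{E}_P[f])}\right] + \frac{1}{c}\,R(Q\,\|\,P),$$

where $R(Q\|P)$ is the relative entropy; functional inequalities are then used to control the cumulant generating function on the right-hand side for time-averaged observables.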
Tobias Johnson, Erol Pekoz (2021)
Generalized gamma distributions arise as limits in many settings involving random graphs, walks, trees, and branching processes. Pekoz, Rollin, and Ross (2016, arXiv:1309.4183 [math.PR]) exploited characterizing distributional fixed point equations to obtain uniform error bounds for generalized gamma approximations using Stein's method. Here we show how monotone couplings arising with these fixed point equations can be used to obtain sharper tail bounds that, in many cases, outperform competing moment-based bounds and the uniform bounds obtainable with Stein's method. Applications are given to concentration inequalities for preferential attachment random graphs, branching processes, random walk local time statistics, and the size of random subtrees of uniformly random binary rooted plane trees.
Concentration properties of functionals of general Poisson processes are studied. Using a modified $\Phi$-Sobolev inequality, a recursion scheme for moments is established, which is of independent interest. This is applied to derive moment and concentration inequalities for functionals on abstract Poisson spaces. Applications of the general results in stochastic geometry, namely Poisson cylinder models and Poisson random polytopes, are presented as well.
Xinjia Chen (2013)
We derive simple concentration inequalities for bounded random vectors, which generalize Hoeffding's inequalities for bounded scalar random variables. As applications, we apply the general results to multinomial and Dirichlet distributions to obtain multivariate concentration inequalities.
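For reference (the classical scalar statement being generalized, not the paper's result), Hoeffding's inequality says that if $X_1,\dots,X_n$ are independent with $a_i \le X_i \le b_i$ and $S_n = \sum_{i=1}^{n} X_i$, then

$$\mathbb{P}\big(|S_n - \mathbb{E}[S_n]| \ge t\big) \;\le\; 2\exp\!\Big(-\frac{2t^2}{\sum_{i=1}^{n}(b_i-a_i)^2}\Big);$$

the multivariate results described above extend this type of bound to bounded random vectors.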
Consider $n$ complex random matrices $X_1,\ldots,X_n$ of size $d\times d$ sampled i.i.d. from a distribution with mean $E[X]=\mu$. While the concentration of averages of these matrices is well-studied, the concentration of other functions of such matrices is less clear. One function which arises in the context of stochastic iterative algorithms, like Oja's algorithm for Principal Component Analysis, is the normalized matrix product defined as $\prod\limits_{i=1}^{n}\left(I + \frac{X_i}{n}\right)$. Concentration properties of this normalized matrix product were recently studied by [HW19]. However, their result is suboptimal in terms of the dependence on the dimension of the matrices as well as the number of samples. In this paper, we present a stronger concentration result for such matrix products which is optimal in $n$ and $d$ up to constant factors. Our proof is based on considering a matrix Doob martingale, controlling the quadratic variation of that martingale, and applying the Matrix Freedman inequality of Tropp [TroppIntro15].
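As background on the main tool (a commonly stated form of the matrix Freedman inequality, not quoted from the paper above), consider a self-adjoint matrix martingale $Y_k = \sum_{j\le k} Z_j$ in dimension $d$ whose differences satisfy $\lambda_{\max}(Z_j) \le R$ almost surely, with predictable quadratic variation $W_k = \sum_{j\le k}\mathbb{E}_{j-1}[Z_j^2]$. Then for $t \ge 0$ and $\sigma^2 > 0$,

$$\mathbb{P}\big(\exists\, k:\ \lambda_{\max}(Y_k) \ge t \ \text{and}\ \|W_k\| \le \sigma^2\big) \;\le\; d\,\exp\!\Big(-\frac{t^2/2}{\sigma^2 + Rt/3}\Big),$$

so controlling the quadratic variation of the matrix Doob martingale is what yields the sharper dependence on $n$ and $d$.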