
Bernstein's inequality for general Markov chains

Published by Bai Jiang
Publication date: 2018
Research field: Mathematical Statistics
Paper language: English





We establish Bernstein inequalities for functions of general (general-state-space, not necessarily reversible) Markov chains. These inequalities achieve sharp variance proxies and recover the classical Bernstein's inequality under independence. The key step of the analysis is to upper bound the operator norm of a perturbed Markov transition kernel by the limiting operator norm of a sequence of finite-rank, perturbed Markov transition kernels. For each finite-rank perturbed Markov kernel, we bound its norm by the sum of two convex functions: one coincides with the function that delivers the classical Bernstein's inequality, and the other reflects the influence of the Markov dependence. A convex analysis of the conjugates of these two functions then yields our Bernstein inequalities.
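For reference, the classical Bernstein's inequality that is recovered under independence can be stated as follows; this is a standard textbook formulation under the assumption that each $f_i(X_i) - \mathbb{E} f_i(X_i)$ is bounded in absolute value by $M$, not a statement quoted from the paper:

\[
\mathbb{P}\left( \left| \sum_{i=1}^{n} \big( f_i(X_i) - \mathbb{E} f_i(X_i) \big) \right| > t \right)
\le 2 \exp\!\left( - \frac{t^2}{2\big( \sigma^2 + M t / 3 \big)} \right),
\qquad \sigma^2 = \sum_{i=1}^{n} \mathrm{Var}\big( f_i(X_i) \big),
\]

valid for independent $X_1, \dots, X_n$. The variance proxies in the paper generalize the role of $\sigma^2$ to the Markov-dependent setting.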




Read also

Markov chain models are used in various fields, such as behavioral sciences and econometrics. Although the goodness of fit of the model is usually assessed by a large-sample approximation, it is desirable to use conditional tests when the sample size is not large. We study Markov bases for performing conditional tests of the toric homogeneous Markov chain model, which is the envelope exponential family of the usual homogeneous Markov chain model. We give a complete description of a Markov basis for the following cases: i) two states and arbitrary length, ii) arbitrary finite state space and length three. The general case remains a conjecture. We also present a numerical example of conditional tests based on our Markov basis.
We extend Hoeffding's lemma to general-state-space and not necessarily reversible Markov chains. Let $\{X_i\}_{i \ge 1}$ be a stationary Markov chain with invariant measure $\pi$ and absolute spectral gap $1-\lambda$, where $\lambda$ is defined as the operator norm of the transition kernel acting on mean-zero, square-integrable functions with respect to $\pi$. Then, for any bounded functions $f_i: \mathcal{X} \mapsto [a_i, b_i]$, the sum of $f_i(X_i)$ is sub-Gaussian with variance proxy $\frac{1+\lambda}{1-\lambda} \cdot \sum_i \frac{(b_i-a_i)^2}{4}$. This result differs from the classical Hoeffding's lemma by the multiplicative coefficient $(1+\lambda)/(1-\lambda)$ and reduces to the latter when $\lambda = 0$. The counterpart of Hoeffding's inequality for Markov chains follows immediately. Our results assume neither a countable state space, nor reversibility, nor time-homogeneity of the Markov chain, and they cover time-dependent functions with varying ranges. We illustrate the utility of these results by applying them to six problems in statistics and machine learning.
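As a sketch of the consequence mentioned above (a direct application of the standard sub-Gaussian tail bound, not a statement quoted from the paper), a sum $S = \sum_i f_i(X_i)$ that is sub-Gaussian with variance proxy $\nu^2 = \frac{1+\lambda}{1-\lambda} \cdot \sum_i \frac{(b_i-a_i)^2}{4}$ satisfies

\[
\mathbb{P}\big( |S - \mathbb{E} S| \ge t \big)
\le 2 \exp\!\left( - \frac{t^2}{2\nu^2} \right)
= 2 \exp\!\left( - \frac{1-\lambda}{1+\lambda} \cdot \frac{2 t^2}{\sum_i (b_i - a_i)^2} \right),
\]

which recovers the classical Hoeffding inequality when $\lambda = 0$.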
We study the following learning problem with dependent data: observing a trajectory of length $n$ from a stationary Markov chain with $k$ states, the goal is to predict the next state. For $3 \leq k \leq O(\sqrt{n})$, using techniques from universal compression, the optimal prediction risk in Kullback-Leibler divergence is shown to be $\Theta(\frac{k^2}{n} \log \frac{n}{k^2})$, in contrast to the optimal rate of $\Theta(\frac{\log \log n}{n})$ for $k=2$ previously shown in Falahatgar et al., 2016. These rates, slower than the parametric rate of $O(\frac{k^2}{n})$, can be attributed to the memory in the data, as the spectral gap of the Markov chain can be arbitrarily small. To quantify the memory effect, we study irreducible reversible chains with a prescribed spectral gap. In addition to characterizing the optimal prediction risk for two states, we show that, as long as the spectral gap is not excessively small, the prediction risk in the Markov model is $O(\frac{k^2}{n})$, which coincides with that of an i.i.d. model with the same number of parameters.
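One common way to formalize the prediction risk described above is the expected Kullback-Leibler divergence between the true transition law from the current state and the estimated conditional distribution of the next state; the notation below is an assumed formulation given for illustration, not quoted from the abstract:

\[
\mathrm{Risk}_n(\hat{P}) = \mathbb{E}\left[ D_{\mathrm{KL}}\!\left( P(\cdot \mid X_n) \,\Big\|\, \hat{P}(\cdot \mid X_1, \dots, X_n) \right) \right],
\]

where $P(\cdot \mid X_n)$ is the chain's true transition distribution at the current state and $\hat{P}$ is the predictor built from the observed trajectory $X_1, \dots, X_n$.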
Tamas Erdelyi, 2019
We give a simple, elementary, and at least partially new proof of Arestov's famous extension of Bernstein's inequality in $L_p$ to all $p \geq 0$. Our crucial observation is that Boyd's approach to proving Mahler's inequality for algebraic polynomials $P_n \in \mathcal{P}_n^c$ can be extended to all trigonometric polynomials $T_n \in \mathcal{T}_n^c$.
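For context, the Bernstein inequality in question bounds the derivative of a trigonometric polynomial of degree at most $n$; the formulation below is a standard statement of the result, included for orientation rather than quoted from the abstract:

\[
\| T_n' \|_{L_p[0, 2\pi)} \le n \, \| T_n \|_{L_p[0, 2\pi)}, \qquad T_n \in \mathcal{T}_n^c, \quad p \ge 0,
\]

where the case $p \ge 1$ is classical and Arestov's contribution was to extend the inequality to all $0 \le p < 1$.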
J. G. Liao, Arthur Berg, 2017
This paper proposes a new sharpened version of Jensen's inequality. The proposed new bound is simple and insightful, is broadly applicable under minimal assumptions, and provides fairly accurate results in spite of its simple form. Applications to the moment generating function, power mean inequalities, and Rao-Blackwell estimation are presented. This presentation can be incorporated into any calculus-based statistical course.
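For reference, the classical Jensen's inequality that the paper sharpens states that for a convex function $\varphi$ and an integrable random variable $X$,

\[
\varphi\big( \mathbb{E}[X] \big) \le \mathbb{E}\big[ \varphi(X) \big];
\]

the proposed sharpening tightens the gap between the two sides.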