
Large Deviations Application to Billingsley's Example

Posted by: R. Liptser
Publication date: 2009
Language: English
Author: R. Liptser





We consider a classical model related to the empirical distribution function $F_n(t)=\frac{1}{n}\sum_{k=1}^n I_{\{\xi_k\le t\}}$ of $(\xi_k)_{k\ge 1}$, an i.i.d. sequence of random variables supported on the interval $[0,1]$ with continuous distribution function $F(t)=\mathsf{P}(\xi_1\le t)$. Applying "Stopping Time Techniques", we give a proof of Kolmogorov's exponential bound $$\mathsf{P}\Big(\sup_{t\in[0,1]}|F_n(t)-F(t)|\ge \varepsilon\Big)\le \mathrm{const.}\,e^{-n\delta_\varepsilon},$$ conjectured by Kolmogorov in 1943. Using this bound, we establish the best possible logarithmic asymptotics of $$\mathsf{P}\Big(\sup_{t\in[0,1]}n^\alpha|F_n(t)-F(t)|\ge \varepsilon\Big)$$ with rate $\frac{1}{n^{1-2\alpha}}$, slower than $\frac{1}{n}$, for any $\alpha\in\big(0,\tfrac{1}{2}\big)$.
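The exponential bound lends itself to a quick numerical check. Below is a minimal Monte Carlo sketch in Python (assuming NumPy) that estimates $\mathsf{P}(\sup_t|F_n(t)-F(t)|\ge\varepsilon)$ for uniform samples; for a concrete right-hand side it uses the Dvoretzky-Kiefer-Wolfowitz-Massart instance with $\mathrm{const.}=2$ and $\delta_\varepsilon=2\varepsilon^2$, not the constant and $\delta_\varepsilon$ produced by the paper's stopping-time argument.

```python
import numpy as np

rng = np.random.default_rng(0)

def sup_deviation(n):
    """Kolmogorov-Smirnov statistic sup_t |F_n(t) - F(t)| for n samples
    from Uniform(0,1); any continuous F reduces to this case via F(xi)."""
    x = np.sort(rng.uniform(size=n))
    k = np.arange(1, n + 1)
    # The supremum is attained at the jump points of F_n.
    return max(np.max(k / n - x), np.max(x - (k - 1) / n))

eps, trials = 0.05, 2000
for n in (200, 800, 3200):
    hits = sum(sup_deviation(n) >= eps for _ in range(trials))
    bound = 2 * np.exp(-2 * n * eps ** 2)  # DKW-Massart: const = 2, delta_eps = 2 eps^2
    print(f"n={n:5d}  P_hat={hits / trials:.4f}  bound={bound:.3e}")
```

The estimated probability drops exponentially in $n$, consistent with the bound.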




Read also

We prove that moderate deviations for empirical measures of countable nonhomogeneous Markov chains hold under the assumption that the transition probability matrices converge uniformly in the Cesàro sense.
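To make the objects in this statement concrete, here is a small sketch (an illustrative two-state chain, not the countable setting of the paper) of a nonhomogeneous Markov chain whose step matrices converge to a limit, hence also in the Cesàro sense, together with its empirical (occupation) measure; the moderate-deviation rates themselves are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(5)

# Limit transition matrix P of a two-state chain.
P_limit = np.array([[0.7, 0.3],
                    [0.4, 0.6]])

def step_matrix(k):
    """Step-k matrix P_k: a vanishing perturbation of P_limit, so the
    Cesaro averages (1/n) sum_k P_k converge to P_limit."""
    eps = 0.2 / (k + 1)
    return P_limit + eps * np.array([[-1.0, 1.0], [1.0, -1.0]])

def empirical_measure(n):
    """Occupation frequencies of the chain over its first n steps."""
    counts, state = np.zeros(2), 0
    for k in range(n):
        counts[state] += 1
        state = rng.choice(2, p=step_matrix(k)[state])
    return counts / n

pi = np.array([4 / 7, 3 / 7])  # stationary law of P_limit: pi P = pi
for n in (1000, 10000):
    print(f"n={n:6d}  empirical={empirical_measure(n)}  stationary={pi}")
```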
We study the rate of convergence of the Mallows distance between the empirical distribution of a sample and the underlying population. The surprising feature of our results is that the convergence rate is slower in the discrete case than in the absolutely continuous setting. We show how the hazard function plays a significant role in these calculations. As an application, we recall that the quantity studied provides an upper bound on the distance between the bootstrap distribution of a sample mean and its true sampling distribution. Moreover, the convenient properties of the Mallows metric yield a straightforward lower bound, and therefore a relatively precise description of the asymptotic performance of the bootstrap in this problem.
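A minimal sketch of the quantity studied, in the absolutely continuous case: the Mallows (Wasserstein) $p$-distance between an empirical sample and a Uniform(0,1) population, computed through the quantile coupling $W_p^p=\int_0^1|F_n^{-1}(u)-F^{-1}(u)|^p\,du$. The grid size and the choice of population are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(1)

def mallows_to_uniform(x, p=2, grid=10_000):
    """Mallows p-distance between the empirical law of x and Uniform(0,1),
    via W_p^p = integral_0^1 |F_n^{-1}(u) - u|^p du (quantile coupling)."""
    xs = np.sort(x)
    u = (np.arange(grid) + 0.5) / grid
    idx = np.minimum((u * len(xs)).astype(int), len(xs) - 1)  # F_n^{-1}(u)
    return np.mean(np.abs(xs[idx] - u) ** p) ** (1 / p)

for n in (100, 1000, 10_000):
    d = np.mean([mallows_to_uniform(rng.uniform(size=n)) for _ in range(50)])
    print(f"n={n:6d}  E[W_2] ~ {d:.4f}   n**-0.5 = {n ** -0.5:.4f}")
```

In this continuous example the distance shrinks at roughly the $n^{-1/2}$ scale.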
For $1\le p<\infty$, the Fréchet $p$-mean of a probability distribution $\mu$ on a metric space $(X,d)$ is the set $F_p(\mu) := \operatorname{arg\,min}_{x\in X}\int_{X}d^p(x,y)\,d\mu(y)$, which is taken to be empty if no minimizer exists. Given a sequence $(Y_i)_{i\in\mathbb{N}}$ of independent, identically distributed random samples from some probability measure $\mu$ on $X$, the Fréchet $p$-means of the empirical measures, $F_p\big(\frac{1}{n}\sum_{i=1}^{n}\delta_{Y_i}\big)$, form a sequence of random closed subsets of $X$. We investigate the senses in which this sequence of random closed sets and related objects converge almost surely as $n\to\infty$.
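The definition translates directly into a brute-force computation once $X$ is discretized. The sketch below is illustrative only (the circle with arc-length metric and a grid of candidate points are assumptions); it returns the set of empirical Fréchet $p$-mean minimizers, which, matching the set-valued definition, need not be a singleton.

```python
import numpy as np

rng = np.random.default_rng(2)

def frechet_p_mean(samples, candidates, dist, p=2):
    """Empirical Frechet p-mean: candidate points x minimizing
    (1/n) * sum_i d(x, Y_i)^p; all minimizers are returned."""
    costs = np.array([np.mean(dist(c, samples) ** p) for c in candidates])
    return candidates[np.isclose(costs, costs.min())]

def arc_dist(a, b):
    """Arc-length distance on the unit circle, angles in [0, 2*pi)."""
    d = np.abs(a - b) % (2 * np.pi)
    return np.minimum(d, 2 * np.pi - d)

theta = rng.normal(loc=np.pi / 2, scale=0.3, size=500) % (2 * np.pi)  # samples Y_i
grid = np.linspace(0, 2 * np.pi, 2000, endpoint=False)                # candidates x
print("Frechet 2-mean angle(s):", frechet_p_mean(theta, grid, arc_dist))
```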
Christian Houdré, Hua Xu (2007)
We derive concentration inequalities for functions of the empirical measure of large random matrices with infinitely divisible entries and, in particular, stable ones. We also give concentration results for some other functionals of these random matrices, such as the largest eigenvalue or the largest singular value.
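For a quick empirical look at the fluctuations such inequalities control, one can Monte Carlo the largest singular value of matrices with stable entries. The sketch below uses standard Cauchy entries (the symmetric $\alpha=1$ stable law) and reports median and interquartile range as a crude dispersion proxy; it illustrates the functional, not the inequalities themselves.

```python
import numpy as np

rng = np.random.default_rng(3)

def largest_singular_value(n):
    """Largest singular value of an n x n matrix with i.i.d. standard
    Cauchy entries (the symmetric alpha = 1 stable law)."""
    a = rng.standard_cauchy(size=(n, n))
    return np.linalg.norm(a, 2)  # spectral norm = largest singular value

for n in (50, 100, 200):
    vals = np.array([largest_singular_value(n) for _ in range(200)])
    q25, q75 = np.percentile(vals, [25, 75])
    print(f"n={n:4d}  median={np.median(vals):10.1f}  IQR={q75 - q25:10.1f}")
```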
Xiao Fang, Yuta Koike (2020)
We prove the large-dimensional Gaussian approximation of a sum of $n$ independent random vectors in $\mathbb{R}^d$ together with fourth-moment error bounds on convex sets and Euclidean balls. We show that, compared with classical third-moment bounds, our bounds have near-optimal dependence on $n$ and can achieve improved dependence on the dimension $d$. For centered balls, we obtain an additional error bound that has a sub-optimal dependence on $n$, but recovers the known result that the Gaussian approximation is valid if and only if $d=o(n)$. We discuss an application to the bootstrap. We prove our main results using Stein's method.
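As a pointer to the bootstrap application, here is a minimal sketch under assumed, illustrative choices (centered exponential vectors, $d=50$, the max-coordinate statistic): the empirical bootstrap approximates the distribution of the maximum coordinate of the normalized sum. Whether such an approximation is trustworthy in a given $(n,d)$ regime is exactly what bounds of this type quantify.

```python
import numpy as np

rng = np.random.default_rng(4)

n, d = 500, 50
x = rng.exponential(size=(n, d)) - 1.0       # centered i.i.d. vectors in R^d
t_obs = np.max(np.sqrt(n) * x.mean(axis=0))  # max coordinate of normalized sum

# Empirical bootstrap: resample rows with replacement, recenter at the sample mean.
mu_hat = x.mean(axis=0)
boot = np.empty(2000)
for b in range(boot.size):
    xb = x[rng.integers(n, size=n)]
    boot[b] = np.max(np.sqrt(n) * (xb.mean(axis=0) - mu_hat))

print("observed max statistic:", round(t_obs, 3))
print("bootstrap 95% quantile:", round(np.quantile(boot, 0.95), 3))
```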