
Weak convergence of the empirical process and the rescaled empirical distribution function in the Skorokhod product space

Published by: Daniel Vogel
Publication date: 2015
Paper language: English





We prove the asymptotic independence of the empirical process $\alpha_n = \sqrt{n}(F_n - F)$ and the rescaled empirical distribution function $\beta_n = n\bigl(F_n(\tau + \tfrac{\cdot}{n}) - F_n(\tau)\bigr)$, where $F$ is an arbitrary cdf, differentiable at some point $\tau$, and $F_n$ is the corresponding empirical cdf. This seems rather counterintuitive, since, for every $n \in \mathbb{N}$, there is a deterministic correspondence between $\alpha_n$ and $\beta_n$. Precisely, we show that the pair $(\alpha_n, \beta_n)$ converges in law to a limit having independent components, namely a time-transformed Brownian bridge and a two-sided Poisson process. Since these processes have jumps, in particular if $F$ itself has jumps, the Skorokhod product space $D(\mathbb{R}) \times D(\mathbb{R})$ is the adequate choice for modelling this convergence. We develop a short convergence theory for $D(\mathbb{R}) \times D(\mathbb{R})$ by establishing the classical principle, devised by Yu. V. Prokhorov, that finite-dimensional convergence and tightness imply weak convergence. Several tightness criteria are given. Finally, the convergence of the pair $(\alpha_n, \beta_n)$ implies the convergence of each of its components; thus, in passing, we provide a thorough proof of these known convergence results in a very general setting. In fact, the condition that $F$ be differentiable in at least one point is only required for $\beta_n$ to converge and can be further weakened.
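The two processes in the abstract can be illustrated numerically. The sketch below is not part of the paper; it assumes, purely for illustration, that $F$ is the uniform cdf on $[0,1]$ (so $F'(\tau) = 1$) and uses an arbitrary seed. It evaluates $\alpha_n$ on a coarse grid and $\beta_n$ in the shrinking window around $\tau = 0.5$:

```python
import numpy as np

rng = np.random.default_rng(0)  # arbitrary seed, illustrative only
n = 10_000
tau = 0.5                       # point of differentiability of F
sample = rng.uniform(size=n)    # sample from F = U[0,1]

def F_n(t):
    """Empirical cdf of the sample, evaluated at t."""
    return np.mean(sample <= t)

# alpha_n(t) = sqrt(n) * (F_n(t) - F(t)); for U[0,1], F(t) = t.
t_grid = np.linspace(0.0, 1.0, 11)
alpha = np.sqrt(n) * np.array([F_n(t) - t for t in t_grid])

# beta_n(s) = n * (F_n(tau + s/n) - F_n(tau)): this counts the sample
# points falling in the shrinking window (tau, tau + s/n].
s_grid = np.arange(0, 6)
beta = np.array([n * (F_n(tau + s / n) - F_n(tau)) for s in s_grid])
```

Because $\beta_n(s)$ is a count of sample points in a window of width $s/n$, it is a non-decreasing, integer-valued step function, which is why a Poisson process (with rate $F'(\tau)$) appears in the limit.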




Read also

We study the rate of convergence of the Mallows distance between the empirical distribution of a sample and the underlying population. The surprising feature of our results is that the convergence rate is slower in the discrete case than in the absolutely continuous setting. We show how the hazard function plays a significant role in these calculations. As an application, we recall that the quantity studied provides an upper bound on the distance between the bootstrap distribution of a sample mean and its true sampling distribution. Moreover, the convenient properties of the Mallows metric yield a straightforward lower bound, and therefore a relatively precise description of the asymptotic performance of the bootstrap in this problem.
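As a rough numerical illustration of the quantity studied (not taken from the paper; the uniform population, sample size, and grid resolution are all illustrative assumptions), the Mallows ($L_1$-Wasserstein) distance between an empirical distribution and a $U[0,1]$ population equals $\int_0^1 |F_n(t) - t|\,dt$ and can be approximated on a fine grid:

```python
import numpy as np

rng = np.random.default_rng(1)  # arbitrary seed, illustrative only
n = 2_000
x = np.sort(rng.uniform(size=n))  # sorted sample from U[0,1]

# Evaluate F_n on a fine grid via searchsorted: F_n(t) is the fraction
# of sample points <= t.
grid = np.linspace(0.0, 1.0, 100_001)
F_n = np.searchsorted(x, grid, side="right") / n

# Riemann approximation of the integral of |F_n(t) - t| over [0,1],
# i.e. the Mallows / 1-Wasserstein distance to the population.
w1 = np.mean(np.abs(F_n - grid))
```

In the absolutely continuous setting this distance decays at the classical $n^{-1/2}$ rate; the abstract's point is that discreteness can slow this down.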
We consider a sequence of independent, identically distributed random samples from an absolutely continuous probability measure in one dimension with unbounded density. We establish a new rate of convergence of the $\infty$-Wasserstein distance between the empirical measure of the samples and the true distribution, which extends the previous convergence result by Trillos and Slepčev to the case that the true distribution has an unbounded density.
Consider an $N \times n$ random matrix $Z_n = (Z^n_{j_1 j_2})$ where the individual entries are a realization of a properly rescaled stationary Gaussian random field. The purpose of this article is to study the limiting empirical distribution of the eigenvalues of Gram random matrices such as $Z_n Z_n^*$ and $(Z_n + A_n)(Z_n + A_n)^*$, where $A_n$ is a deterministic matrix with appropriate assumptions, in the case where $n \to \infty$ and $\frac{N}{n} \to c \in (0, \infty)$. The proof relies on related results for matrices with independent but not identically distributed entries and substantially differs from related works in the literature (Boutet de Monvel et al., Girko, etc.).
Consider an $N \times n$ random matrix $Y_n = (Y_{ij}^{n})$ where the entries are given by $Y_{ij}^{n} = \frac{\sigma(i/N, j/n)}{\sqrt{n}} X_{ij}^{n}$, the $X_{ij}^{n}$ being centered i.i.d. and $\sigma : [0,1]^2 \to (0,\infty)$ being a continuous function called a variance profile. Consider now a deterministic $N \times n$ matrix $\Lambda_n = (\Lambda_{ij}^{n})$ whose off-diagonal elements are zero. Denote by $\Sigma_n$ the non-centered matrix $Y_n + \Lambda_n$. Then, under the assumptions that $\lim_{n \to \infty} \frac{N}{n} = c > 0$ and $$\frac{1}{N} \sum_{i=1}^{N} \delta_{(\frac{i}{N},\, (\Lambda_{ii}^n)^2)} \xrightarrow[n \to \infty]{} H(dx, d\lambda),$$ where $H$ is a probability measure, it is proven that the empirical distribution of the eigenvalues of $\Sigma_n \Sigma_n^T$ converges almost surely in distribution to a non-random probability measure. This measure is characterized in terms of its Stieltjes transform, which is obtained with the help of an auxiliary system of equations. This kind of result is of interest in the field of wireless communication.
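A minimal simulation of such a limiting empirical spectral distribution, assuming the simplest special case of i.i.d. Gaussian entries with a flat variance profile and no deterministic perturbation (an illustrative choice, not the stationary-field or variance-profile settings of the abstracts above), where the limit is the classical Marchenko-Pastur law:

```python
import numpy as np

rng = np.random.default_rng(2)  # arbitrary seed, illustrative only
N, n = 200, 400                 # aspect ratio c = N/n = 1/2
# i.i.d. entries with variance 1/n (flat variance profile)
Z = rng.normal(size=(N, n)) / np.sqrt(n)

# Empirical spectral distribution: the eigenvalues of the Gram matrix Z Z^T.
eig = np.linalg.eigvalsh(Z @ Z.T)

# Marchenko-Pastur support for ratio c: [(1 - sqrt(c))^2, (1 + sqrt(c))^2].
c = N / n
lo, hi = (1.0 - np.sqrt(c)) ** 2, (1.0 + np.sqrt(c)) ** 2
```

For moderate $N$ the eigenvalues already concentrate on the predicted support; a variance profile or a deterministic diagonal $\Lambda_n$ deforms this limit as described above.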
Consider the empirical measure, $\hat{\mathbb{P}}_N$, associated to $N$ i.i.d. samples of a given probability distribution $\mathbb{P}$ on the unit interval. For fixed $\mathbb{P}$, the Wasserstein distance between $\hat{\mathbb{P}}_N$ and $\mathbb{P}$ is a random variable on the sample space $[0,1]^N$. Our main result is that its normalised quantiles are asymptotically maximised when $\mathbb{P}$ is a convex combination of the uniform distribution supported on the two points $\{0, 1\}$ and the uniform distribution on the unit interval $[0,1]$. This allows us to obtain explicit asymptotic confidence regions for the underlying measure $\mathbb{P}$. We also suggest extensions to higher dimensions with numerical evidence.