
Limiting behavior of largest entry of random tensor constructed by high-dimensional data

Posted by Junshan Xie
Publication date: 2019
Language: English





Let $X_{k}=(x_{k1}, \cdots, x_{kp})$, $k=1,\cdots,n$, be a random sample of size $n$ drawn from a $p$-dimensional population. For a fixed integer $m\geq 2$, consider the hypercubic random tensor $\mathbf{T}$ of $m$-th order and rank $n$ given by
\begin{eqnarray*}
\mathbf{T}= \sum_{k=1}^{n}\underbrace{X_{k}\otimes\cdots\otimes X_{k}}_{m~\text{times}}=\Big(\sum_{k=1}^{n} x_{ki_{1}}x_{ki_{2}}\cdots x_{ki_{m}}\Big)_{1\leq i_{1},\cdots, i_{m}\leq p}.
\end{eqnarray*}
Let $W_n$ be the largest off-diagonal entry of $\mathbf{T}$. We derive the asymptotic distribution of $W_n$ under a suitable normalization in two regimes: the ultra-high-dimension case with $p\to\infty$ and $\log p=o(n^{\beta})$, and the high-dimension case with $p\to\infty$ and $p=O(n^{\alpha})$, where $\alpha,\beta>0$. The normalizing constants of $W_n$ depend on $m$, and the limiting distribution of $W_n$ is a Gumbel-type distribution involving the parameter $m$.
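To make the object concrete, here is a minimal simulation sketch (not from the paper; the sample size n, dimension p, order m and the Gaussian population are all assumptions) that builds the tensor $\mathbf{T}$ by summing $m$-fold outer products of the rows $X_k$ and reads off its largest off-diagonal entry $W_n$, with "off-diagonal" taken here as index tuples whose coordinates are not all equal:

```python
import numpy as np
from itertools import product

# Hypothetical sizes (assumptions, not from the paper): n samples, p dimensions, order m = 3.
n, p, m = 50, 10, 3
rng = np.random.default_rng(0)
X = rng.standard_normal((n, p))  # rows X_k are the sample vectors

# Build the m-th order tensor T = sum_k X_k (x) ... (x) X_k (m factors).
T = np.zeros((p,) * m)
for k in range(n):
    outer = X[k]
    for _ in range(m - 1):
        outer = np.tensordot(outer, X[k], axes=0)  # repeated outer product
    T += outer

# W_n: largest entry over index tuples (i_1, ..., i_m) whose coordinates are not all equal
# (one reading of "off-diagonal"; the paper's exact definition should be checked).
W_n = max(T[idx] for idx in product(range(p), repeat=m) if len(set(idx)) > 1)
print(W_n)
```

The brute-force scan over all $p^m$ index tuples is only feasible for small $p$ and $m$; it is meant to illustrate the definition of $W_n$, not its asymptotics.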




Read also

127 - Xiao Fang, Yuta Koike 2020
We obtain explicit error bounds for the $d$-dimensional normal approximation on hyperrectangles for a random vector that has a Stein kernel, or admits an exchangeable pair coupling, or is a non-linear statistic of independent random variables or a sum of $n$ locally dependent random vectors. We assume the approximating normal distribution has a non-singular covariance matrix. The error bounds vanish even when the dimension $d$ is much larger than the sample size $n$. We prove our main results using the approach of G\"otze (1991) in Stein's method, together with modifications of an estimate of Anderson, Hall and Titterington (1998) and a smoothing inequality of Bhattacharya and Rao (1976). For sums of $n$ independent and identically distributed isotropic random vectors having a log-concave density, we obtain an error bound that is optimal up to a $\log n$ factor. We also discuss an application to multiple Wiener-It\^{o} integrals.
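The quantity being bounded is the discrepancy $|P(W \in R) - P(Z \in R)|$ over hyperrectangles $R$. A rough Monte Carlo sketch of one such comparison, under simplifying assumptions not taken from the paper (iid standardized exponential coordinates, identity covariance, one-sided rectangles $(-\infty, t]^d$ with $d$ larger than $n$):

```python
import numpy as np
from scipy.stats import norm

# Assumed toy setting: dimension d exceeds sample size n; coordinates are iid standardized exponentials.
n, d, t, reps = 50, 200, 2.5, 2000
rng = np.random.default_rng(1)

hits = 0
for _ in range(reps):
    X = rng.exponential(size=(n, d)) - 1.0   # mean-0, variance-1 coordinates
    S = X.sum(axis=0) / np.sqrt(n)           # standardized sum
    hits += np.all(S <= t)                   # S lies in the rectangle (-inf, t]^d

empirical = hits / reps
gaussian = norm.cdf(t) ** d                  # exact value for N(0, I_d)
print(empirical, gaussian, abs(empirical - gaussian))
```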
Consider a $p$-dimensional population $\mathbf{x} \in\mathbb{R}^p$ with iid coordinates in the domain of attraction of a stable distribution with index $\alpha\in (0,2)$. Since the variance of $\mathbf{x}$ is infinite, the sample covariance matrix $\mathbf{S}_n=n^{-1}\sum_{i=1}^n \mathbf{x}_i\mathbf{x}_i'$ based on a sample $\mathbf{x}_1,\ldots,\mathbf{x}_n$ from the population is not well behaved, and it is of interest to use instead the sample correlation matrix $\mathbf{R}_n= \{\operatorname{diag}(\mathbf{S}_n)\}^{-1/2}\, \mathbf{S}_n\, \{\operatorname{diag}(\mathbf{S}_n)\}^{-1/2}$. This paper finds the limiting distributions of the eigenvalues of $\mathbf{R}_n$ when both the dimension $p$ and the sample size $n$ grow to infinity such that $p/n\to \gamma \in (0,\infty)$. The family of limiting distributions $\{H_{\alpha,\gamma}\}$ is new and depends on the two parameters $\alpha$ and $\gamma$. The moments of $H_{\alpha,\gamma}$ are fully identified as the sum of two contributions: the first from the classical Mar\v{c}enko-Pastur law and a second due to heavy tails. Moreover, the family $\{H_{\alpha,\gamma}\}$ has continuous extensions at the boundaries $\alpha=2$ and $\alpha=0$ leading to the Mar\v{c}enko-Pastur law and a modified Poisson distribution, respectively. Our proofs use the method of moments, the path-shortening algorithm developed in [18] and some novel graph-counting combinatorics. As a consequence, the moments of $H_{\alpha,\gamma}$ are expressed in terms of combinatorial objects such as Stirling numbers of the second kind. A simulation study on these limiting distributions $H_{\alpha,\gamma}$ is also provided for comparison with the Mar\v{c}enko-Pastur law.
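A small simulation sketch of the object under study, with assumed values of $\alpha$ and $\gamma$ and scipy's symmetric stable generator standing in for a generic distribution in the stable domain of attraction; it only produces the empirical spectrum of $\mathbf{R}_n$, not the limit $H_{\alpha,\gamma}$:

```python
import numpy as np
from scipy.stats import levy_stable

# Assumed parameters: stable index alpha in (0, 2) and aspect ratio gamma = p/n.
alpha, gamma, n = 1.5, 0.5, 400
p = int(gamma * n)
rng = np.random.default_rng(2)

# iid symmetric alpha-stable coordinates (infinite variance when alpha < 2).
X = levy_stable.rvs(alpha, 0.0, size=(n, p), random_state=rng)

S = X.T @ X / n                           # sample covariance S_n
d = 1.0 / np.sqrt(np.diag(S))
R = d[:, None] * S * d[None, :]           # sample correlation R_n
eigenvalues = np.linalg.eigvalsh(R)       # empirical spectrum, to compare with H_{alpha,gamma}
print(eigenvalues[-5:])                   # a few of the largest eigenvalues
```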
186 - Kai Wang, Yanling Zhu 2018
We study the M-estimation method for the high-dimensional linear regression model and discuss the properties of the M-estimator when the penalty term is the local linear approximation. M-estimation is a general framework that covers least absolute deviation, quantile regression, least squares regression and Huber regression. We show that the proposed estimator possesses good properties under certain assumptions. In the numerical simulations, we select an appropriate algorithm to demonstrate the robustness of this method.
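As a loose illustration of penalized M-estimation in the $p>n$ regime (not the authors' estimator or their local linear approximation penalty), the sketch below fits an $\ell_1$-penalized Huber regression by plain subgradient descent; the loss, penalty weight and step size are all assumptions:

```python
import numpy as np

def huber_grad(r, delta=1.0):
    # Gradient of the Huber loss with respect to the residual r.
    return np.where(np.abs(r) <= delta, r, delta * np.sign(r))

def penalized_m_estimator(X, y, lam=0.1, lr=0.01, iters=2000):
    # L1-penalized Huber regression via subgradient descent (illustrative only).
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(iters):
        r = X @ beta - y
        grad = X.T @ huber_grad(r) / n + lam * np.sign(beta)
        beta -= lr * grad
    return beta

rng = np.random.default_rng(3)
n, p = 100, 200                                    # high-dimensional: p > n
X = rng.standard_normal((n, p))
beta_true = np.zeros(p)
beta_true[:5] = 2.0                                # sparse signal
y = X @ beta_true + rng.standard_t(df=3, size=n)   # heavy-tailed noise
print(penalized_m_estimator(X, y)[:8])
```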
77 - Fan Yang 2021
Consider two high-dimensional random vectors $\widetilde{\mathbf x}\in\mathbb R^p$ and $\widetilde{\mathbf y}\in\mathbb R^q$ with finite rank correlations. More precisely, suppose that $\widetilde{\mathbf x}=\mathbf x+A\mathbf z$ and $\widetilde{\mathbf y}=\mathbf y+B\mathbf z$, for independent random vectors $\mathbf x\in\mathbb R^p$, $\mathbf y\in\mathbb R^q$ and $\mathbf z\in\mathbb R^r$ with iid entries of mean 0 and variance 1, and two deterministic matrices $A\in\mathbb R^{p\times r}$ and $B\in\mathbb R^{q\times r}$. With $n$ iid observations of $(\widetilde{\mathbf x},\widetilde{\mathbf y})$, we study the sample canonical correlations between them. In this paper, we focus on the high-dimensional setting with a rank-$r$ correlation. Let $t_1\ge\cdots\ge t_r$ be the squares of the population canonical correlation coefficients (CCC) between $\widetilde{\mathbf x}$ and $\widetilde{\mathbf y}$, and $\widetilde\lambda_1\ge\cdots\ge\widetilde\lambda_r$ be the squares of the largest $r$ sample CCC. Under certain moment assumptions on the entries of $\mathbf x$, $\mathbf y$ and $\mathbf z$, we show that there exists a threshold $t_c\in(0, 1)$ such that if $t_i>t_c$, then $\sqrt{n}(\widetilde\lambda_i-\theta_i)$ converges in law to a centered normal distribution, where $\theta_i>\lambda_+$ is a fixed outlier location determined by $t_i$. Our results extend the ones in [4] for Gaussian vectors. Moreover, we find that the variance of the limiting distribution of $\sqrt{n}(\widetilde\lambda_i-\theta_i)$ also depends on the fourth cumulants of the entries of $\mathbf x$, $\mathbf y$ and $\mathbf z$, a phenomenon that cannot be observed in the Gaussian case.
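The following sketch (assumed Gaussian entries and hypothetical sizes $n, p, q, r$) generates data from the model $\widetilde{\mathbf x}=\mathbf x+A\mathbf z$, $\widetilde{\mathbf y}=\mathbf y+B\mathbf z$ and computes the squared sample canonical correlation coefficients as the squared singular values of the whitened cross-covariance; the limiting theory in the abstract is not reproduced:

```python
import numpy as np

# Assumed sizes: rank-r correlation between a p- and a q-dimensional vector, n samples.
n, p, q, r = 2000, 50, 40, 2
rng = np.random.default_rng(4)

A = 0.8 * rng.standard_normal((p, r))
B = 0.8 * rng.standard_normal((q, r))
Z = rng.standard_normal((n, r))
Xt = rng.standard_normal((n, p)) + Z @ A.T        # observations of x-tilde
Yt = rng.standard_normal((n, q)) + Z @ B.T        # observations of y-tilde

Sxx = Xt.T @ Xt / n
Syy = Yt.T @ Yt / n
Sxy = Xt.T @ Yt / n

def inv_sqrt(S):
    # Symmetric inverse square root via the eigendecomposition.
    w, V = np.linalg.eigh(S)
    return V @ np.diag(1.0 / np.sqrt(w)) @ V.T

# Squared sample CCCs: squared singular values of Sxx^{-1/2} Sxy Syy^{-1/2}.
svals = np.linalg.svd(inv_sqrt(Sxx) @ Sxy @ inv_sqrt(Syy), compute_uv=False)
print(svals[:r] ** 2)                             # largest r squared sample CCCs
```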
We establish a quantitative version of the Tracy--Widom law for the largest eigenvalue of high-dimensional sample covariance matrices. To be precise, we show that the fluctuations of the largest eigenvalue of a sample covariance matrix $X^*X$ converge to its Tracy--Widom limit at a rate nearly $N^{-1/3}$, where $X$ is an $M \times N$ random matrix whose entries are independent real or complex random variables, assuming that both $M$ and $N$ tend to infinity at a constant rate. This result improves the previous estimate $N^{-2/9}$ obtained by Wang [73]. Our proof relies on a Green function comparison method [27] using iterative cumulant expansions, the local laws for the Green function and asymptotic properties of the correlation kernel of the white Wishart ensemble.
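To illustrate the statistic, here is a minimal simulation sketch (assumed real Gaussian entries and an assumed aspect ratio) that rescales the largest eigenvalue of $X X^*/N$ with the standard Marchenko-Pastur soft-edge centering and scaling, so that its histogram approaches the Tracy--Widom law; the convergence rate itself is an analytical result and is not simulated here:

```python
import numpy as np

# Assumed sizes: M x N real Gaussian matrix with M/N bounded away from 0 and infinity.
M, N, reps = 200, 400, 500
rng = np.random.default_rng(5)

# Standard soft-edge centering and scaling for the largest eigenvalue of X X^T / N.
gamma = np.sqrt(M / N)
mu = (1 + gamma) ** 2
sigma = (1 + gamma) * (1 / gamma + 1) ** (1 / 3) * N ** (-2 / 3)

rescaled = []
for _ in range(reps):
    X = rng.standard_normal((M, N))
    lam_max = np.linalg.eigvalsh(X @ X.T / N)[-1]  # largest eigenvalue (same as for X^T X / N)
    rescaled.append((lam_max - mu) / sigma)

# The histogram of `rescaled` should be close to the Tracy-Widom (beta = 1) distribution.
print(np.mean(rescaled), np.std(rescaled))
```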