Consider a random vector $\mathbf{y}=\mathbf{\Sigma}^{1/2}\mathbf{x}$, where the $p$ elements of the vector $\mathbf{x}$ are i.i.d. real-valued random variables with zero mean and finite fourth moment, and $\mathbf{\Sigma}^{1/2}$ is a deterministic $p\times p$ matrix such that the spectral norm of the population correlation matrix $\mathbf{R}$ of $\mathbf{y}$ is uniformly bounded. In this paper, we find that the log determinant of the sample correlation matrix $\hat{\mathbf{R}}$ based on a sample of size $n$ from the distribution of $\mathbf{y}$ satisfies a central limit theorem (CLT) for $p/n\to \gamma\in (0, 1]$ and $p\leq n$. Explicit formulas for the asymptotic mean and variance are provided. In the case where the mean of $\mathbf{y}$ is unknown, we show that after recentering by the empirical mean the CLT still holds, with a shift in the asymptotic mean. This result is of independent interest in both large-dimensional random matrix theory and the high-dimensional statistical literature on large sample correlation matrices for non-normal data. Finally, the findings are applied to testing the uncorrelatedness of $p$ random variables. Surprisingly, in the null case $\mathbf{R}=\mathbf{I}$ the test statistic becomes completely pivotal, and extensive simulations show that the CLT also holds when moments of order four do not exist at all, which suggests a promising and robust test statistic for heavy-tailed high-dimensional data.
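The abstract does not reproduce the explicit asymptotic mean and variance formulas, so the following is only a minimal Monte Carlo sketch of the quantity under study: it draws independent (hence uncorrelated, $\mathbf{R}=\mathbf{I}$) coordinates, computes $\log\det\hat{\mathbf{R}}$ after recentering by the empirical mean, and compares light- and heavy-tailed draws. The concrete choices of $n$, $p$, the $t(3)$ distribution, and the helper name are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch under the null R = I; the paper's explicit asymptotic
# centering/scaling constants are NOT reproduced here (assumption: purely
# illustrative parameters and distributions).
import numpy as np

def log_det_sample_correlation(Y):
    """Log determinant of the sample correlation matrix of an (n, p) data matrix Y."""
    R_hat = np.corrcoef(Y, rowvar=False)      # columns = variables; recenters by the empirical mean
    sign, logdet = np.linalg.slogdet(R_hat)   # numerically stable log|det|
    return logdet

rng = np.random.default_rng(0)
n, p, reps = 400, 200, 500                    # p / n = 0.5, inside (0, 1]

stats_normal = [log_det_sample_correlation(rng.standard_normal((n, p)))
                for _ in range(reps)]
stats_heavy = [log_det_sample_correlation(rng.standard_t(df=3, size=(n, p)))
               for _ in range(reps)]          # t(3): no finite fourth moment

# Under R = I the statistic is claimed to be asymptotically pivotal, so the two
# empirical distributions should roughly agree after the same (here: empirical)
# standardisation.
for name, s in [("normal", stats_normal), ("t(3)", stats_heavy)]:
    s = np.asarray(s)
    print(f"{name:7s} mean={s.mean():.3f}  sd={s.std(ddof=1):.3f}")
```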
Nonparametric latent structure models provide flexible inference on distinct, yet related, groups of observations. Each component of a vector of $d \ge 2$ random measures models the distribution of a group of exchangeable observations, while their dep
Consider a normal vector $\mathbf{z}=(\mathbf{x},\mathbf{y})$, consisting of two sub-vectors $\mathbf{x}$ and $\mathbf{y}$ with dimensions $p$ and $q$ respectively. With $n$ independent observations of $\mathbf{z}$ at hand, we study the correlation between
Consider a standard white Wishart matrix with parameters $n$ and $p$. Motivated by applications in high-dimensional statistics and signal processing, we perform asymptotic analysis on the maxima and minima of the eigenvalues of all the $m \times m$ pr
We study the law of the iterated logarithm (LIL) for the maximum likelihood estimation of the parameters (as a convex optimization problem) in the generalized linear models with independent or weakly dependent ($\rho$-mixing, $m$-dependent) responses
Recovering low-rank structures via eigenvector perturbation analysis is a common problem in statistical machine learning, such as in factor analysis, community detection, ranking, matrix completion, among others. While a large variety of bounds are a