We consider a $p$-dimensional time series where the dimension $p$ increases with the sample size $n$. The resulting data matrix $X$ follows a stochastic volatility model: each entry consists of a positive random volatility term multiplied by an independent noise term. The volatility multipliers introduce dependence within and across the rows. We study the asymptotic behavior of the eigenvalues and eigenvectors of the sample covariance matrix $XX^\top$ under a regular variation assumption on the noise. In particular, we prove Poisson convergence for the point process of the centered and normalized eigenvalues and derive limit theory for functionals acting on them, such as the trace. We prove related results for stochastic volatility models with additional linear dependence structure and for stochastic volatility models where the time-varying volatility terms are extinguished with high probability as $n$ increases. We provide explicit approximations of the eigenvectors, which have a strikingly simple structure. The main tools for proving these results are large deviation theorems for heavy-tailed time series, advocating a unified approach to the study of the eigenstructure of heavy-tailed random matrices.
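As a rough illustration of this model class (not the paper's exact specification), the sketch below generates a stochastic volatility data matrix whose entries are a log-normal volatility field, made dependent within and across rows through a shared common factor, times symmetrized Pareto noise with tail index $\alpha < 2$, and then computes the spectrum of $XX^\top$. All parameter choices are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 1000, 200          # sample size and dimension
alpha = 1.5               # tail index of the regularly varying noise

# Symmetrized Pareto noise: regularly varying with index alpha.
Z = rng.pareto(alpha, size=(p, n)) * rng.choice([-1.0, 1.0], size=(p, n))

# Log-normal volatility field; the shared common factor induces
# dependence within each row and across rows (illustrative choice).
common = rng.standard_normal(n)
sigma = np.exp(0.5 * (rng.standard_normal((p, n)) + common))

X = sigma * Z
eig = np.sort(np.linalg.eigvalsh(X @ X.T))[::-1]  # spectrum of X X^T, descending
```

Under heavy tails, the largest eigenvalues are dominated by the largest squared entries of $X$, which is the mechanism behind the Poisson limits described above.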
We establish a quantitative version of the Tracy--Widom law for the largest eigenvalue of high-dimensional sample covariance matrices. To be precise, we show that the fluctuations of the largest eigenvalue of a sample covariance matrix $X^*X$ converge to their Tracy--Widom limit at a rate nearly $N^{-1/3}$, where $X$ is an $M \times N$ random matrix whose entries are independent real or complex random variables, assuming that both $M$ and $N$ tend to infinity proportionally. This result improves the previous estimate $N^{-2/9}$ obtained by Wang [73]. Our proof relies on a Green function comparison method [27] using iterative cumulant expansions, the local laws for the Green function, and asymptotic properties of the correlation kernel of the white Wishart ensemble.
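The Tracy--Widom fluctuations in question can be simulated directly. The sketch below uses the standard centering $(\sqrt{M}+\sqrt{N})^2$ and scaling $(\sqrt{M}+\sqrt{N})(1/\sqrt{M}+1/\sqrt{N})^{1/3}$ for the largest eigenvalue of $X^*X$ with real Gaussian entries; the dimensions and replicate count are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
M, N, reps = 200, 100, 200

# Centering and scaling for the largest eigenvalue of X^T X
# (real white Wishart case).
mu = (np.sqrt(M) + np.sqrt(N)) ** 2
sigma = (np.sqrt(M) + np.sqrt(N)) * (1 / np.sqrt(M) + 1 / np.sqrt(N)) ** (1 / 3)

stats = []
for _ in range(reps):
    X = rng.standard_normal((M, N))
    lam1 = np.linalg.eigvalsh(X.T @ X)[-1]   # largest eigenvalue of X^* X
    stats.append((lam1 - mu) / sigma)
stats = np.array(stats)
```

The standardized values should cluster around the Tracy--Widom (beta = 1) mean of roughly -1.21 with spread around 1.27; the quantitative result above controls how fast this approximation becomes accurate as $N$ grows.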
Consider a $p$-dimensional population ${\mathbf x} \in \mathbb{R}^p$ with iid coordinates in the domain of attraction of a stable distribution with index $\alpha \in (0,2)$. Since the variance of ${\mathbf x}$ is infinite, the sample covariance matrix ${\mathbf S}_n = n^{-1}\sum_{i=1}^n {\mathbf x}_i {\mathbf x}_i^\top$ based on a sample ${\mathbf x}_1,\ldots,{\mathbf x}_n$ from the population is not well behaved, and it is of interest to use instead the sample correlation matrix ${\mathbf R}_n = \{\operatorname{diag}({\mathbf S}_n)\}^{-1/2}\, {\mathbf S}_n\, \{\operatorname{diag}({\mathbf S}_n)\}^{-1/2}$. This paper finds the limiting distributions of the eigenvalues of ${\mathbf R}_n$ when both the dimension $p$ and the sample size $n$ grow to infinity with $p/n \to \gamma \in (0,\infty)$. The family of limiting distributions $\{H_{\alpha,\gamma}\}$ is new and depends on the two parameters $\alpha$ and $\gamma$. The moments of $H_{\alpha,\gamma}$ are fully identified as the sum of two contributions: one from the classical Mar\v{c}enko--Pastur law and another due to heavy tails. Moreover, the family $\{H_{\alpha,\gamma}\}$ has continuous extensions at the boundaries $\alpha = 2$ and $\alpha = 0$, leading to the Mar\v{c}enko--Pastur law and a modified Poisson distribution, respectively. Our proofs use the method of moments, the path-shortening algorithm developed in [18], and some novel graph-counting combinatorics. As a consequence, the moments of $H_{\alpha,\gamma}$ are expressed in terms of combinatorial objects such as Stirling numbers of the second kind. A simulation study of these limiting distributions $H_{\alpha,\gamma}$ is also provided for comparison with the Mar\v{c}enko--Pastur law.
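The self-normalization that makes ${\mathbf R}_n$ well behaved despite infinite variance is easy to reproduce numerically. The following sketch (with illustrative choices $\alpha = 1$ and $\gamma = 1/2$) builds ${\mathbf R}_n$ from symmetrized Pareto data and exploits two exact structural facts: the diagonal of ${\mathbf R}_n$ is identically one, so its eigenvalues sum to $p$, and ${\mathbf R}_n$ is positive semidefinite.

```python
import numpy as np

rng = np.random.default_rng(2)
alpha, n = 1.0, 2000
p = n // 2                          # gamma = p/n = 0.5

# iid symmetrized Pareto coordinates: infinite variance for alpha < 2.
X = rng.pareto(alpha, size=(p, n)) * rng.choice([-1.0, 1.0], size=(p, n))

S = X @ X.T / n                     # sample covariance (ill-behaved here)
d = 1.0 / np.sqrt(np.diag(S))
R = d[:, None] * S * d[None, :]     # sample correlation matrix R_n

eig = np.linalg.eigvalsh(R)         # empirical spectrum, approximating H_{alpha,gamma}
```

A histogram of `eig` approximates the limiting law $H_{\alpha,\gamma}$ discussed above and can be compared against the Mar\v{c}enko--Pastur density.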
Sample correlation matrices are employed ubiquitously in statistics. However, quite surprisingly, little is known about their asymptotic spectral properties for high-dimensional data, particularly beyond the case of null models in which the data are assumed independent. Here, considering the popular class of spiked models, we apply random matrix theory to derive asymptotic first-order and distributional results for both the leading eigenvalues and eigenvectors of sample correlation matrices. These results are obtained in high-dimensional settings where the number of samples $n$ and the number of variables $p$ approach infinity, with $p/n$ tending to a constant. To first order, the spectral properties of sample correlation matrices are seen to coincide with those of sample covariance matrices; however, their asymptotic distributions can differ significantly, with fluctuations of both the sample eigenvalues and eigenvectors often being remarkably smaller than those of their sample covariance counterparts.
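The claimed first-order agreement between correlation and covariance spectra in spiked models can be seen in a small simulation. The sketch below (all parameters illustrative) plants a single delocalized rank-one spike of size `ell` in the population covariance, computes the top eigenvalue of both the sample covariance and the sample correlation matrix, and compares the former with the classical first-order (BBP-type) prediction $\ell(1 + \gamma/(\ell - 1))$ for a supercritical spike.

```python
import numpy as np

rng = np.random.default_rng(3)
n, p, ell = 4000, 400, 10.0          # gamma = p/n = 0.1, one spike of size ell

# Delocalized spike direction, so the population variances stay near one
# and the correlation normalization barely perturbs the spike.
v = np.full(p, 1.0 / np.sqrt(p))
Sigma_sqrt = np.eye(p) + (np.sqrt(ell) - 1.0) * np.outer(v, v)

X = Sigma_sqrt @ rng.standard_normal((p, n))
S = X @ X.T / n
d = 1.0 / np.sqrt(np.diag(S))
R = d[:, None] * S * d[None, :]

lam_cov = np.linalg.eigvalsh(S)[-1]  # leading sample covariance eigenvalue
lam_cor = np.linalg.eigvalsh(R)[-1]  # leading sample correlation eigenvalue

gamma = p / n
pred = ell * (1 + gamma / (ell - 1))  # first-order prediction for lam_cov
```

Both `lam_cov` and `lam_cor` land near `pred`, illustrating the first-order coincidence; the distributional differences described in the abstract only appear at the level of fluctuations.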
We propose a Bayesian methodology for estimating spiked covariance matrices with jointly sparse structure in high dimensions. The spiked covariance matrix is reparametrized in terms of the latent factor model, where the loading matrix is equipped with a novel matrix spike-and-slab LASSO prior, which is a continuous shrinkage prior for modeling jointly sparse matrices. We establish the rate-optimal posterior contraction for the covariance matrix with respect to the operator norm as well as that for the principal subspace with respect to the projection operator norm loss. We also study the posterior contraction rate of the principal subspace with respect to the two-to-infinity norm loss, a novel loss function measuring the distance between subspaces that is able to capture element-wise eigenvector perturbations. We show that the posterior contraction rate with respect to the two-to-infinity norm loss is tighter than that with respect to the routinely used projection operator norm loss under certain low-rank and bounded coherence conditions. In addition, a point estimator for the principal subspace is proposed, with a rate-optimal risk bound with respect to the projection operator norm loss. These results are based on a collection of concentration and large deviation inequalities for the matrix spike-and-slab LASSO prior. The numerical performance of the proposed methodology is assessed through synthetic examples and the analysis of a real-world face dataset.
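The univariate building block of a spike-and-slab LASSO prior is a two-component Laplace mixture: a sharply peaked "spike" Laplace that shrinks null entries toward zero and a flat "slab" Laplace that leaves signal entries nearly untouched. The sketch below evaluates this density; the hyperparameters `theta`, `lam_spike`, and `lam_slab` are illustrative, not the paper's choices, and the matrix prior of the abstract applies such a mixture entrywise to the loading matrix.

```python
import numpy as np

def laplace_pdf(b, lam):
    """Laplace (double-exponential) density with rate parameter lam."""
    return 0.5 * lam * np.exp(-lam * np.abs(b))

def ssl_pdf(b, theta=0.2, lam_spike=20.0, lam_slab=1.0):
    """Spike-and-slab LASSO density: mixture of two Laplace components.

    With probability 1 - theta an entry is drawn from the spike
    (large rate, strong shrinkage), with probability theta from the
    slab (small rate, weak shrinkage). Hyperparameters illustrative.
    """
    return ((1 - theta) * laplace_pdf(b, lam_spike)
            + theta * laplace_pdf(b, lam_slab))

# Sanity check: the mixture density integrates to one (numerically).
grid = np.linspace(-10.0, 10.0, 200001)
h = grid[1] - grid[0]
mass = np.sum(ssl_pdf(grid)) * h
```

The continuous mixture form is what makes the prior a "continuous shrinkage" prior: unlike point-mass spike-and-slab priors, the posterior remains absolutely continuous, which is convenient for both computation and the contraction-rate analysis.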
In this paper we propose two schemes for recovering the spectrum of a covariance matrix from the empirical covariance matrix, in the case where the dimension of the matrix is a fixed fraction (smaller than one) of the number of observations. We test, compare and analyze these schemes on simulated data as well as on data from the stock market.
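The distortion that any such recovery scheme must undo is easy to exhibit. In the sketch below (illustrative dimensions with $p/n = 1/4 < 1$), the population spectrum is identically one, yet the empirical eigenvalues spread over the Mar\v{c}enko--Pastur support $[(1-\sqrt{\gamma})^2,\,(1+\sqrt{\gamma})^2]$; a spectrum-recovery scheme inverts this spreading.

```python
import numpy as np

rng = np.random.default_rng(4)
n, p = 4000, 1000                  # p/n = 0.25 < 1, as in the abstract
gamma = p / n

X = rng.standard_normal((p, n))    # true population spectrum: all ones
eig = np.linalg.eigvalsh(X @ X.T / n)

# Marchenko-Pastur support edges for identity population covariance:
lo, hi = (1 - np.sqrt(gamma)) ** 2, (1 + np.sqrt(gamma)) ** 2
```

Here the empirical eigenvalues fill roughly $[0.25, 2.25]$ even though every population eigenvalue equals one; only the average eigenvalue (the normalized trace) is preserved.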