
Limiting distributions for eigenvalues of sample correlation matrices from heavy-tailed populations

 Added by Johannes Heiny
 Publication date 2020
Language: English





Consider a $p$-dimensional population $\mathbf{x} \in \mathbb{R}^p$ with iid coordinates in the domain of attraction of a stable distribution with index $\alpha \in (0,2)$. Since the variance of $\mathbf{x}$ is infinite, the sample covariance matrix $\mathbf{S}_n = n^{-1} \sum_{i=1}^n \mathbf{x}_i \mathbf{x}_i'$ based on a sample $\mathbf{x}_1, \ldots, \mathbf{x}_n$ from the population is not well behaved, and it is of interest to use instead the sample correlation matrix $\mathbf{R}_n = \{\operatorname{diag}(\mathbf{S}_n)\}^{-1/2} \, \mathbf{S}_n \, \{\operatorname{diag}(\mathbf{S}_n)\}^{-1/2}$. This paper finds the limiting distributions of the eigenvalues of $\mathbf{R}_n$ when both the dimension $p$ and the sample size $n$ grow to infinity such that $p/n \to \gamma \in (0,\infty)$. The family of limiting distributions $\{H_{\alpha,\gamma}\}$ is new and depends on the two parameters $\alpha$ and $\gamma$. The moments of $H_{\alpha,\gamma}$ are fully identified as the sum of two contributions: the first from the classical Marčenko-Pastur law and a second due to heavy tails. Moreover, the family $\{H_{\alpha,\gamma}\}$ has continuous extensions at the boundaries $\alpha=2$ and $\alpha=0$, leading to the Marčenko-Pastur law and a modified Poisson distribution, respectively. Our proofs use the method of moments, the path-shortening algorithm developed in [18] and some novel graph-counting combinatorics. As a consequence, the moments of $H_{\alpha,\gamma}$ are expressed in terms of combinatorial objects such as Stirling numbers of the second kind. A simulation study on these limiting distributions $H_{\alpha,\gamma}$ is also provided for comparison with the Marčenko-Pastur law.
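The objects in the abstract are straightforward to simulate. The sketch below, in the spirit of the paper's simulation study but not its actual code, draws iid heavy-tailed entries (symmetrized Pareto with tail index $\alpha < 2$, so the variance is infinite), forms $\mathbf{S}_n$ and $\mathbf{R}_n$ as defined above, and returns the eigenvalues of $\mathbf{R}_n$; all names and the choice of Pareto tails are illustrative assumptions.

```python
import numpy as np

def correlation_spectrum(n, p, alpha, rng):
    """Eigenvalues of the sample correlation matrix R_n for a sample of
    n iid p-dimensional vectors with symmetrized Pareto coordinates
    (tail index alpha < 2, hence infinite variance)."""
    # n x p data matrix: heavy-tailed magnitudes with random signs.
    X = rng.pareto(alpha, size=(n, p)) * rng.choice([-1.0, 1.0], size=(n, p))
    S = X.T @ X / n                    # sample covariance S_n (p x p)
    d = 1.0 / np.sqrt(np.diag(S))      # entries of diag(S_n)^{-1/2}
    R = d[:, None] * S * d[None, :]    # sample correlation R_n
    return np.linalg.eigvalsh(R)

rng = np.random.default_rng(0)
# p/n = 1/2, i.e. gamma = 0.5 in the notation of the abstract.
ev = correlation_spectrum(n=1000, p=500, alpha=1.5, rng=rng)
```

A histogram of `ev` can then be compared against the Marčenko-Pastur density, as in the paper's simulation study. Note that since every diagonal entry of $\mathbf{R}_n$ equals one, the eigenvalues always sum to $p$, whatever the tail index.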




Antonio Auffinger, Si Tang (2015)
We study the statistics of the largest eigenvalues of $p \times p$ sample covariance matrices $\Sigma_{p,n} = M_{p,n} M_{p,n}^{*}$ when the entries of the $p \times n$ matrix $M_{p,n}$ are sparse and have a distribution with tail $t^{-\alpha}$, $\alpha > 0$. On average the number of nonzero entries of $M_{p,n}$ is of order $n^{\mu+1}$, $0 \leq \mu \leq 1$. We prove that in the large $n$ limit, the largest eigenvalues are Poissonian if $\alpha < 2(1+\mu^{-1})$ and converge to a constant in the case $\alpha > 2(1+\mu^{-1})$. We also extend the results of Benaych-Georges and Péché [7] in the Hermitian case, removing restrictions on the number of nonzero entries of the matrix.
We offer a survey of recent results on covariance estimation for heavy-tailed distributions. By unifying ideas scattered in the literature, we propose user-friendly methods that facilitate practical implementation. Specifically, we introduce element-wise and spectrum-wise truncation operators, as well as their $M$-estimator counterparts, to robustify the sample covariance matrix. Different from the classical notion of robustness that is characterized by the breakdown property, we focus on tail robustness, which is evidenced by the connection between nonasymptotic deviation and confidence level. The key observation is that the estimators need to adapt to the sample size, the dimensionality of the data and the noise level to achieve an optimal tradeoff between bias and robustness. Furthermore, to facilitate their practical use, we propose data-driven procedures that automatically calibrate the tuning parameters. We demonstrate their applications to a series of structured models in high dimensions, including bandable and low-rank covariance matrices and sparse precision matrices. Numerical studies lend strong support to the proposed methods.
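The element-wise truncation idea mentioned in this survey can be sketched in a few lines: each cross product $X_{ki} X_{kj}$ is clipped to a bounded interval before averaging, which limits the influence of heavy-tailed outliers on the covariance estimate. The function name and the fixed truncation level `tau` below are illustrative assumptions; the survey's actual estimators and their data-driven calibration of `tau` differ in detail.

```python
import numpy as np

def truncated_covariance(X, tau):
    """Element-wise truncated second-moment estimator: each cross
    product X[k, i] * X[k, j] is clipped to [-tau, tau] before
    averaging over the n samples (rows of X)."""
    prods = np.einsum('ki,kj->kij', X, X)       # (n, p, p) pairwise products
    return np.clip(prods, -tau, tau).mean(axis=0)

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5))
Sigma_hat = truncated_covariance(X, tau=2.0)
```

With `tau` large enough that no product is clipped, the estimator reduces to the usual sample second-moment matrix, which makes the bias-robustness tradeoff explicit: smaller `tau` means more bias but heavier-tailed data can be tolerated.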
We consider a $p$-dimensional time series where the dimension $p$ increases with the sample size $n$. The resulting data matrix $X$ follows a stochastic volatility model: each entry consists of a positive random volatility term multiplied by an independent noise term. The volatility multipliers introduce dependence in each row and across the rows. We study the asymptotic behavior of the eigenvalues and eigenvectors of the sample covariance matrix $XX'$ under a regular variation assumption on the noise. In particular, we prove Poisson convergence for the point process of the centered and normalized eigenvalues and derive limit theory for functionals acting on them, such as the trace. We prove related results for stochastic volatility models with additional linear dependence structure and for stochastic volatility models where the time-varying volatility terms are extinguished with high probability when $n$ increases. We provide explicit approximations of the eigenvectors which are of a strikingly simple structure. The main tools for proving these results are large deviation theorems for heavy-tailed time series, advocating a unified approach to the study of the eigenstructure of heavy-tailed random matrices.
The Marcinkiewicz strong law of large numbers, $n^{-\frac{1}{p}} \sum_{k=1}^{n} (d_k - d) \rightarrow 0$ almost surely with $p \in (1,2)$, is developed for products $d_k = \prod_{r=1}^s x_k^{(r)}$, where the $x_k^{(r)} = \sum_{l=-\infty}^{\infty} c_{k-l}^{(r)} \xi_l^{(r)}$ are two-sided linear processes with coefficients $\{c_l^{(r)}\}_{l \in \mathbb{Z}}$ and i.i.d. zero-mean innovations $\{\xi_l^{(r)}\}_{l \in \mathbb{Z}}$. The decay of the coefficients $c_l^{(r)}$ as $|l| \to \infty$ can be slow enough for $\{x_k^{(r)}\}$ to have long memory, while $\{d_k\}$ can have heavy tails. The long-range dependence and heavy tails for $\{d_k\}$ are handled simultaneously, and a decoupling property shows that the convergence rate is dictated by the worst of long-range dependence and heavy tails, but not their combination. The results provide a means to estimate how much (if any) long-range dependence and heavy tails a sequential data set possesses, which is done for real financial data. All of the stocks we considered had some degree of heavy tails. The majority also had long-range dependence. The Marcinkiewicz strong law of large numbers is also extended to the multivariate linear process case.
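The scaling in the displayed law can be seen in a much simpler special case than the linear-process setting of this abstract: for iid mean-zero $x_k$ with $E|x_k|^p < \infty$, the classical Marcinkiewicz-Zygmund law gives $n^{-1/p} \sum_{k \le n} x_k \to 0$ almost surely for $p \in (1,2)$. The sketch below illustrates only this assumption-light iid case (standard normal innovations, $p = 1.5$), not the paper's long-memory products.

```python
import numpy as np

# Illustration of the classical Marcinkiewicz-Zygmund scaling: for iid
# mean-zero x_k with finite p-th moment, n^{-1/p} * (x_1 + ... + x_n) -> 0
# almost surely when p is in (1, 2).  Here x_k ~ N(0, 1) and p = 1.5, so
# the scaled partial sums shrink like n^{1/2 - 1/p} = n^{-1/6}.
rng = np.random.default_rng(1)
p_mz = 1.5
x = rng.standard_normal(1_000_000)
scaled = np.cumsum(x) / np.arange(1, x.size + 1) ** (1 / p_mz)
```

Plotting `scaled` against `n` shows the drift toward zero; the paper's point is that the same kind of rate survives, with the exponent dictated by whichever of long-range dependence or heavy tails is worse.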
We establish a quantitative version of the Tracy-Widom law for the largest eigenvalue of high-dimensional sample covariance matrices. To be precise, we show that the fluctuations of the largest eigenvalue of a sample covariance matrix $X^*X$ converge to its Tracy-Widom limit at a rate nearly $N^{-1/3}$, where $X$ is an $M \times N$ random matrix whose entries are independent real or complex random variables, assuming that both $M$ and $N$ tend to infinity at a constant rate. This result improves the previous estimate $N^{-2/9}$ obtained by Wang [73]. Our proof relies on a Green function comparison method [27] using iterative cumulant expansions, the local laws for the Green function and asymptotic properties of the correlation kernel of the white Wishart ensemble.
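The first-order picture behind this fluctuation result is easy to reproduce numerically: for an $M \times N$ matrix with iid $N(0, 1/N)$ entries, the largest eigenvalue of $XX^*$ concentrates at the right edge $(1+\sqrt{M/N})^2$ of the Marčenko-Pastur support, and the $O(N^{-2/3})$ fluctuations around that edge are what the quantitative Tracy-Widom result controls. The sketch below, with illustrative dimensions, checks only this edge concentration; it does not estimate the convergence rate itself.

```python
import numpy as np

# Largest eigenvalue of a white Wishart matrix X X^* with X an M x N
# matrix of iid N(0, 1/N) entries.  It concentrates at the right edge
# (1 + sqrt(M/N))^2 of the Marchenko-Pastur law, with Tracy-Widom
# fluctuations on the scale N^{-2/3} around that edge.
rng = np.random.default_rng(2)
M, N = 200, 400
X = rng.standard_normal((M, N)) / np.sqrt(N)
lam_max = np.linalg.eigvalsh(X @ X.T).max()
edge = (1 + np.sqrt(M / N)) ** 2   # about 2.914 for M/N = 0.5
```

Repeating the draw many times and rescaling `lam_max - edge` by the appropriate $N^{2/3}$ factor would produce an empirical approximation of the Tracy-Widom distribution itself.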