
Recovery of spectrum from estimated covariance matrices and statistical kernels for machine learning and big data

Published by: Ionel Popescu
Publication date: 2018
Paper language: English





In this paper we propose two schemes for the recovery of the spectrum of a covariance matrix from the empirical covariance matrix, in the case where the dimension of the matrix is a subunitary multiple of the number of observations, i.e., the ratio of dimension to sample size is a fixed constant less than one. We test, compare, and analyze these schemes on simulated data as well as on data from the stock market.
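As a quick illustration of why spectrum recovery is needed in this regime, the following sketch (an illustrative aside, not the paper's schemes; it assumes NumPy, an identity population covariance, and an aspect ratio p/n = 1/4) shows how the eigenvalues of the empirical covariance matrix spread over the Marchenko-Pastur bulk rather than concentrating at the true spectrum:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 1000, 250          # p/n = 1/4: dimension is a subunitary multiple of n
# True covariance: identity, so the population spectrum is all ones.
X = rng.standard_normal((n, p))
S = X.T @ X / n           # empirical covariance matrix
eigs = np.linalg.eigvalsh(S)

# Marchenko-Pastur: for c = p/n, the empirical eigenvalues spread over
# [(1 - sqrt(c))^2, (1 + sqrt(c))^2] even though every true eigenvalue is 1.
c = p / n
lo, hi = (1 - np.sqrt(c)) ** 2, (1 + np.sqrt(c)) ** 2
print(eigs.min(), eigs.max(), lo, hi)
```

Recovering the flat population spectrum from this spread-out empirical spectrum is exactly the inverse problem the paper addresses.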




Read also

We establish a quantitative version of the Tracy--Widom law for the largest eigenvalue of high dimensional sample covariance matrices. To be precise, we show that the fluctuations of the largest eigenvalue of a sample covariance matrix $X^*X$ converge to its Tracy--Widom limit at a rate nearly $N^{-1/3}$, where $X$ is an $M \times N$ random matrix whose entries are independent real or complex random variables, assuming that both $M$ and $N$ tend to infinity at a constant rate. This result improves the previous estimate $N^{-2/9}$ obtained by Wang [73]. Our proof relies on a Green function comparison method [27] using iterative cumulant expansions, the local laws for the Green function and asymptotic properties of the correlation kernel of the white Wishart ensemble.
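A minimal numerical sketch of this setting (assuming NumPy and illustrative dimensions $M = 200$, $N = 400$; it demonstrates the phenomenon, not the paper's proof technique) checks that the largest eigenvalue of $XX^*/N$ sits near the soft edge of the Marchenko-Pastur law, around which the Tracy--Widom fluctuations of order $N^{-2/3}$ occur:

```python
import numpy as np

rng = np.random.default_rng(1)
M, N = 200, 400                      # M/N fixed as both tend to infinity
X = rng.standard_normal((M, N))
lam_max = np.linalg.eigvalsh(X @ X.T / N).max()

c = M / N
edge = (1 + np.sqrt(c)) ** 2         # soft edge of the Marchenko-Pastur law
# Tracy-Widom regime: lam_max - edge is O(N^{-2/3}), tiny at this scale.
print(lam_max, edge)
```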
We consider a $p$-dimensional time series where the dimension $p$ increases with the sample size $n$. The resulting data matrix $X$ follows a stochastic volatility model: each entry consists of a positive random volatility term multiplied by an independent noise term. The volatility multipliers introduce dependence in each row and across the rows. We study the asymptotic behavior of the eigenvalues and eigenvectors of the sample covariance matrix $XX^\top$ under a regular variation assumption on the noise. In particular, we prove Poisson convergence for the point process of the centered and normalized eigenvalues and derive limit theory for functionals acting on them, such as the trace. We prove related results for stochastic volatility models with additional linear dependence structure and for stochastic volatility models where the time-varying volatility terms are extinguished with high probability when $n$ increases. We provide explicit approximations of the eigenvectors which are of a strikingly simple structure. The main tools for proving these results are large deviation theorems for heavy-tailed time series, advocating a unified approach to the study of the eigenstructure of heavy-tailed random matrices.
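A hedged sketch of the heavy-tailed mechanism at work here (assuming NumPy; the log-normal volatilities and Pareto noise with tail index 1 are illustrative choices, not the paper's exact model): under regular variation, the top eigenvalues of the sample covariance matrix are driven by the largest squared entries of the data matrix, and the largest eigenvalue always dominates the largest squared entry since it bounds every diagonal entry of $X^\top X$:

```python
import numpy as np

rng = np.random.default_rng(4)
n, p = 2000, 200
# Stochastic volatility model: entry = positive volatility * independent noise
vol = np.exp(0.5 * rng.standard_normal((n, p)))      # log-normal volatilities
noise = rng.pareto(1.0, size=(n, p)) * rng.choice([-1.0, 1.0], size=(n, p))
X = vol * noise                                      # regularly varying entries

lam = np.linalg.eigvalsh(X.T @ X)
# lam[-1] >= max diagonal entry of X'X >= largest single squared entry,
# and for tail index < 2 that single entry tends to dominate the eigenvalue.
big = (X ** 2).max()
print(lam[-1], big)
```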
Consider a $p$-dimensional population $\mathbf{x}\in\mathbb{R}^p$ with iid coordinates in the domain of attraction of a stable distribution with index $\alpha\in(0,2)$. Since the variance of $\mathbf{x}$ is infinite, the sample covariance matrix $\mathbf{S}_n = n^{-1}\sum_{i=1}^n \mathbf{x}_i\mathbf{x}_i^\top$ based on a sample $\mathbf{x}_1,\ldots,\mathbf{x}_n$ from the population is not well behaved and it is of interest to use instead the sample correlation matrix $\mathbf{R}_n = \{\operatorname{diag}(\mathbf{S}_n)\}^{-1/2}\,\mathbf{S}_n\,\{\operatorname{diag}(\mathbf{S}_n)\}^{-1/2}$. This paper finds the limiting distributions of the eigenvalues of $\mathbf{R}_n$ when both the dimension $p$ and the sample size $n$ grow to infinity such that $p/n\to\gamma\in(0,\infty)$. The family of limiting distributions $\{H_{\alpha,\gamma}\}$ is new and depends on the two parameters $\alpha$ and $\gamma$. The moments of $H_{\alpha,\gamma}$ are fully identified as the sum of two contributions: the first from the classical Mar\v{c}enko-Pastur law and a second due to heavy tails. Moreover, the family $\{H_{\alpha,\gamma}\}$ has continuous extensions at the boundaries $\alpha=2$ and $\alpha=0$ leading to the Mar\v{c}enko-Pastur law and a modified Poisson distribution, respectively. Our proofs use the method of moments, the path-shortening algorithm developed in [18] and some novel graph counting combinatorics. As a consequence, the moments of $H_{\alpha,\gamma}$ are expressed in terms of combinatorial objects such as Stirling numbers of the second kind. A simulation study on these limiting distributions $H_{\alpha,\gamma}$ is also provided for comparison with the Mar\v{c}enko-Pastur law.
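The following sketch shows why the correlation matrix $\mathbf{R}_n$ stays well behaved where $\mathbf{S}_n$ does not (assuming NumPy; symmetrized Pareto coordinates with tail index 1.5 stand in for the stable domain of attraction, and the dimensions are illustrative): after the diagonal normalization, $\mathbf{R}_n$ has unit diagonal and its eigenvalues sum to exactly $p$, regardless of how heavy the tails are:

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 800, 200                          # p/n -> gamma = 0.25
alpha = 1.5                              # tail index in (0, 2): infinite variance
# Symmetrized Pareto entries, in the domain of attraction of an alpha-stable law
x = rng.pareto(alpha, size=(n, p)) * rng.choice([-1.0, 1.0], size=(n, p))

S = x.T @ x / n                          # sample covariance (ill-behaved here)
d = 1.0 / np.sqrt(np.diag(S))
R = d[:, None] * S * d[None, :]          # sample correlation matrix R_n

eigs = np.linalg.eigvalsh(R)
# trace(R_n) = p, so the eigenvalues of R_n average to 1 despite heavy tails.
print(eigs.sum(), p)
```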
We consider Gaussian measures $\mu, \tilde{\mu}$ on a separable Hilbert space, with fractional-order covariance operators $A^{-2\beta}$ resp. $\tilde{A}^{-2\tilde{\beta}}$, and derive necessary and sufficient conditions on $A, \tilde{A}$ and $\beta, \tilde{\beta} > 0$ for I. equivalence of the measures $\mu$ and $\tilde{\mu}$, and II. uniform asymptotic optimality of linear predictions for $\mu$ based on the misspecified measure $\tilde{\mu}$. These results hold, e.g., for Gaussian processes on compact metric spaces. As an important special case, we consider the class of generalized Whittle-Matérn Gaussian random fields, where $A$ and $\tilde{A}$ are elliptic second-order differential operators, formulated on a bounded Euclidean domain $\mathcal{D}\subset\mathbb{R}^d$ and augmented with homogeneous Dirichlet boundary conditions. Our outcomes explain why the predictive performances of stationary and non-stationary models in spatial statistics often are comparable, and provide a crucial first step in deriving consistency results for parameter estimation of generalized Whittle-Matérn fields.
Nikita Zhivotovskiy, 2021
We consider the deviation inequalities for the sums of independent $d$ by $d$ random matrices, as well as rank one random tensors. Our focus is on the non-isotropic case and the bounds that do not depend explicitly on the dimension $d$, but rather on the effective rank. In a rather elementary and unified way, we show the following results: 1) A deviation bound for the sums of independent positive semi-definite matrices of any rank. This result generalizes the dimension-free bound of Koltchinskii and Lounici [Bernoulli, 23(1): 110-133, 2017] on the sample covariance matrix in the sub-Gaussian case. 2) Dimension-free bounds for the operator norm of the sums of random tensors of rank one formed either by sub-Gaussian or log-concave random vectors. This extends the result of Guedon and Rudelson [Adv. in Math., 208: 798-823, 2007]. 3) A non-isotropic version of the result of Alesker [Geom. Asp. of Funct. Anal., 77: 1--4, 1995] on the concentration of the norm of sub-exponential random vectors. 4) A dimension-free lower tail bound for sums of positive semi-definite matrices with heavy-tailed entries, sharpening the bound of Oliveira [Prob. Th. and Rel. Fields, 166: 1175-1194, 2016]. Our approach is based on the duality formula between entropy and moment generating functions. In contrast to the known proofs of dimension-free bounds, we avoid Talagrand's majorizing measure theorem, as well as generic chaining bounds for empirical processes. Some of our tools were pioneered by O. Catoni and co-authors in the context of robust statistical estimation.
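A small sketch of the effective-rank phenomenon behind these bounds (assuming NumPy; the diagonal covariance with spectrum $1/k^2$ and the sample sizes are illustrative, and the Koltchinskii-Lounici bound is checked only up to an unspecified absolute constant, here generously taken as 10): when the spectrum decays fast, the effective rank $r(\Sigma) = \operatorname{tr}(\Sigma)/\|\Sigma\|$ is far smaller than the ambient dimension, and the sample covariance error scales with $r(\Sigma)$, not with $d$:

```python
import numpy as np

rng = np.random.default_rng(3)
n, d = 5000, 100
# Population covariance with fast-decaying spectrum: effective rank << d
sigma = np.diag(1.0 / np.arange(1, d + 1) ** 2)
r_eff = np.trace(sigma) / np.linalg.norm(sigma, 2)   # tr(Sigma)/||Sigma||, ~1.6

X = rng.standard_normal((n, d)) @ np.sqrt(sigma)     # Gaussian sample, cov Sigma
S_hat = X.T @ X / n

err = np.linalg.norm(S_hat - sigma, 2)               # operator-norm error
# Koltchinskii-Lounici-style rate (constant omitted): dimension-free in d.
rate = np.linalg.norm(sigma, 2) * (np.sqrt(r_eff / n) + r_eff / n)
print(r_eff, err, rate)
```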