
On the eigenvalues of the spatial sign covariance matrix in more than two dimensions

Posted by Daniel Vogel
Publication date: 2015
Research field: Mathematical Statistics
Language: English





We gather several results on the eigenvalues of the spatial sign covariance matrix of an elliptical distribution. It is shown that the eigenvalues are a one-to-one function of the eigenvalues of the shape matrix and that they are closer together than the latter. We further provide a one-dimensional integral representation of the eigenvalues, which facilitates their numerical computation.
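To make the one-dimensional integral representation concrete, here is a minimal numerical sketch in Python. It assumes the representation $\delta_i = (\lambda_i/2)\int_0^\infty (1+x\lambda_i)^{-3/2}\prod_{j\ne i}(1+x\lambda_j)^{-1/2}\,dx$ for the SSCM eigenvalues $\delta_i$ in terms of the shape eigenvalues $\lambda_i$; this form can be derived in the Gaussian case (which suffices, since the SSCM of an elliptical distribution depends only on the shape matrix) and may differ from the paper's exact parametrization.

```python
import numpy as np
from scipy.integrate import quad

def sscm_eigenvalues(lam):
    """SSCM eigenvalues delta_i from the shape eigenvalues lam via a
    one-dimensional integral (assumed form, derived for the Gaussian case):
        delta_i = (lam_i/2) * int_0^inf (1 + x lam_i)^(-3/2)
                              * prod_{j != i} (1 + x lam_j)^(-1/2) dx
    """
    lam = np.asarray(lam, dtype=float)
    deltas = []
    for i, li in enumerate(lam):
        others = np.delete(lam, i)
        f = lambda x: (1 + x * li) ** -1.5 * np.prod((1 + x * others) ** -0.5)
        val, _ = quad(f, 0, np.inf)
        deltas.append(0.5 * li * val)
    return np.array(deltas)

print(sscm_eigenvalues([1.0, 1.0, 1.0]))  # equal shape eigenvalues -> 1/3 each
d = sscm_eigenvalues([4.0, 2.0, 1.0])
print(d, d.sum())  # the deltas sum to 1 and, per the paper, are closer
                   # together than the normalized shape eigenvalues
```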


Read also

The consistency and asymptotic normality of the spatial sign covariance matrix with unknown location are shown. Simulations illustrate the differing asymptotic behavior when the sample mean and the spatial median are used as location estimators.
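As an illustration of the plug-in estimation discussed above, the following sketch (function names and the stopping rule are my own choices) computes the empirical SSCM about either the sample mean or the spatial median, the latter via a basic Weiszfeld iteration.

```python
import numpy as np

def spatial_median(X, n_iter=200, tol=1e-8):
    """Spatial (geometric) median via a plain Weiszfeld iteration."""
    mu = X.mean(axis=0)
    for _ in range(n_iter):
        d = np.maximum(np.linalg.norm(X - mu, axis=1), 1e-12)  # avoid /0
        w = 1.0 / d
        mu, prev = (w[:, None] * X).sum(axis=0) / w.sum(), mu
        if np.linalg.norm(mu - prev) < tol:
            break
    return mu

def sscm(X, location):
    """Empirical spatial sign covariance matrix about a given location."""
    Z = X - location
    r = np.linalg.norm(Z, axis=1)
    U = Z[r > 0] / r[r > 0][:, None]   # spatial signs of nonzero rows
    return U.T @ U / len(U)

rng = np.random.default_rng(0)
X = rng.multivariate_normal(np.zeros(3), np.diag([4.0, 2.0, 1.0]), size=500)
print(sscm(X, X.mean(axis=0)))     # location = sample mean
print(sscm(X, spatial_median(X)))  # location = spatial median
```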
In an extension of Kendall's $\tau$, Bergsma and Dassios (2014) introduced a covariance measure $\tau^*$ for two ordinal random variables that vanishes if and only if the two variables are independent. For a sample of size $n$, a direct computation of $t^*$, the empirical version of $\tau^*$, requires $O(n^4)$ operations. We derive an algorithm that computes the statistic using only $O(n^2 \log n)$ operations.
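For orientation, a direct $O(n^4)$ evaluation of $t^*$ looks roughly as follows; the kernel is written from the Bergsma and Dassios definition as recalled here, so treat it as a sketch rather than a reference implementation. The point of the paper is precisely to replace this quartic loop with an $O(n^2 \log n)$ algorithm.

```python
import numpy as np
from itertools import combinations, permutations

def a_kernel(z1, z2, z3, z4):
    """Bergsma-Dassios sign kernel (as recalled; verify against the paper).
    I(y1, y2 < y3, y4) equals 1 when max(y1, y2) < min(y3, y4)."""
    I = lambda y1, y2, y3, y4: 1.0 if max(y1, y2) < min(y3, y4) else 0.0
    return (I(z1, z3, z2, z4) + I(z2, z4, z1, z3)
            - I(z1, z4, z2, z3) - I(z2, z3, z1, z4))

def tau_star_bruteforce(x, y):
    """Empirical tau* by direct enumeration: all 4-subsets, averaging the
    product kernel over every ordering within each subset (O(n^4))."""
    total, count = 0.0, 0
    for idx in combinations(range(len(x)), 4):
        for p in permutations(idx):
            total += a_kernel(*(x[i] for i in p)) * a_kernel(*(y[i] for i in p))
            count += 1
    return total / count

rng = np.random.default_rng(1)
x = rng.normal(size=12)
print(tau_star_bruteforce(x, x))                    # positive under dependence
print(tau_star_bruteforce(x, rng.normal(size=12)))  # near zero under independence
```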
We propose a Bayesian methodology for estimating spiked covariance matrices with jointly sparse structure in high dimensions. The spiked covariance matrix is reparametrized in terms of the latent factor model, where the loading matrix is equipped with a novel matrix spike-and-slab LASSO prior, which is a continuous shrinkage prior for modeling jointly sparse matrices. We establish the rate-optimal posterior contraction for the covariance matrix with respect to the operator norm as well as that for the principal subspace with respect to the projection operator norm loss. We also study the posterior contraction rate of the principal subspace with respect to the two-to-infinity norm loss, a novel loss function measuring the distance between subspaces that is able to capture element-wise eigenvector perturbations. We show that the posterior contraction rate with respect to the two-to-infinity norm loss is tighter than that with respect to the routinely used projection operator norm loss under certain low-rank and bounded coherence conditions. In addition, a point estimator for the principal subspace is proposed with the rate-optimal risk bound with respect to the projection operator norm loss. These results are based on a collection of concentration and large deviation inequalities for the matrix spike-and-slab LASSO prior. The numerical performance of the proposed methodology is assessed through synthetic examples and the analysis of a real-world face data example.
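The two subspace losses being compared can be made concrete. The sketch below (an illustration, not the paper's code) computes the projection operator norm loss $\|\hat U \hat U^T - U U^T\|$ and the two-to-infinity loss $\|\hat U W - U\|_{2\to\infty}$, aligning the bases with the orthogonal Procrustes solution as a standard proxy for the loss-minimizing rotation $W$.

```python
import numpy as np

def proj_op_loss(U, U_hat):
    """Projection operator norm loss: spectral norm of the difference of
    the orthogonal projections onto the two subspaces."""
    return np.linalg.norm(U @ U.T - U_hat @ U_hat.T, 2)

def two_to_inf_loss(U, U_hat):
    """Two-to-infinity loss: largest row-wise Euclidean norm of
    U_hat @ W - U, with W the orthogonal Procrustes alignment (used here
    as a standard proxy for the loss-minimizing rotation)."""
    A, _, Bt = np.linalg.svd(U_hat.T @ U)
    W = A @ Bt
    return np.linalg.norm(U_hat @ W - U, axis=1).max()

rng = np.random.default_rng(2)
U, _ = np.linalg.qr(rng.normal(size=(200, 3)))
U_hat, _ = np.linalg.qr(U + 0.1 * rng.normal(size=(200, 3)))
# For perturbations spread evenly over rows, the two-to-infinity loss is
# typically much smaller than the operator norm loss.
print(proj_op_loss(U, U_hat), two_to_inf_loss(U, U_hat))
```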
Consider an $N\times n$ random matrix $Y_n=(Y_{ij}^{n})$ with entries $Y_{ij}^{n}=\frac{\sigma(i/N,j/n)}{\sqrt{n}} X_{ij}^{n}$, where the $X_{ij}^{n}$ are centered i.i.d. and $\sigma:[0,1]^2 \to (0,\infty)$ is a continuous function called a variance profile. Consider now a deterministic $N\times n$ matrix $\Lambda_n=(\Lambda_{ij}^{n})$ whose off-diagonal elements are zero. Denote by $\Sigma_n$ the non-centered matrix $Y_n + \Lambda_n$. Then, under the assumptions that $\lim_{n\to\infty} \frac{N}{n} = c > 0$ and $$\frac{1}{N} \sum_{i=1}^{N} \delta_{\left(\frac{i}{N},\,(\Lambda_{ii}^n)^2\right)} \xrightarrow[n\to\infty]{} H(dx,d\lambda),$$ where $H$ is a probability measure, it is proven that the empirical distribution of the eigenvalues of $\Sigma_n \Sigma_n^T$ converges almost surely in distribution to a non-random probability measure. This measure is characterized in terms of its Stieltjes transform, which is obtained with the help of an auxiliary system of equations. Results of this kind are of interest in the field of wireless communication.
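A quick Monte Carlo sketch (names and the specific profile are illustrative) makes the statement tangible: build $Y_n$ from a variance profile $\sigma$, add a pseudo-diagonal $\Lambda_n$, and inspect the spectrum of $\Sigma_n \Sigma_n^T$; as $n$ grows with $N/n$ fixed, the eigenvalue histogram stabilizes toward the deterministic limit.

```python
import numpy as np

def simulate_spectrum(N, n, sigma, lam_diag, rng):
    """Eigenvalues of Sigma_n Sigma_n^T for the variance-profile model
    Y_ij = sigma(i/N, j/n) X_ij / sqrt(n) plus a pseudo-diagonal Lambda_n."""
    i = (np.arange(1, N + 1) / N)[:, None]
    j = (np.arange(1, n + 1) / n)[None, :]
    Y = sigma(i, j) * rng.normal(size=(N, n)) / np.sqrt(n)
    Lam = np.zeros((N, n))
    k = np.arange(min(N, n))
    Lam[k, k] = lam_diag((k + 1) / N)      # Lambda_ii as a function of i/N
    S = Y + Lam
    return np.linalg.eigvalsh(S @ S.T)

rng = np.random.default_rng(3)
profile = lambda x, y: 1.0 + 0.5 * np.sin(2 * np.pi * x) * np.cos(2 * np.pi * y)
ev = simulate_spectrum(300, 600, profile, lambda x: 2.0 * x, rng)
# A histogram of ev approximates the limiting spectral measure (here c = 1/2);
# rerunning with larger N, n at fixed N/n shows the histogram stabilizing.
```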
Portfolio managers faced with limited sample sizes must use factor models to estimate the covariance matrix of a high-dimensional returns vector. For the simplest one-factor market model, success rests on the quality of the estimated leading eigenvector beta. When only the returns themselves are observed, the practitioner has available the PCA estimate equal to the leading eigenvector of the sample covariance matrix. This estimator performs poorly in various ways. To address this problem in the high-dimension, limited sample size asymptotic regime and in the context of estimating the minimum variance portfolio, Goldberg, Papanicolaou, and Shkolnik developed a shrinkage method (the GPS estimator) that improves the PCA estimator of beta by shrinking it toward a constant target unit vector. In this paper we continue their work to develop a more general framework of shrinkage targets that allows the practitioner to make use of further information to improve the estimator. Examples include sector separation of stock betas, and recent information from prior estimates. We prove some precise statements and illustrate the resulting improvements over the GPS estimator with some numerical experiments.
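The shrinkage idea can be sketched in a few lines (illustrative only: the GPS estimator prescribes a specific data-driven shrinkage weight, which is replaced here by a free parameter `rho`): take the leading eigenvector of the sample covariance matrix and shrink it toward a chosen unit-norm target, such as the constant vector or a sector-wise constant vector.

```python
import numpy as np

def pca_beta(returns):
    """PCA estimate of beta: leading eigenvector of the sample covariance."""
    w, V = np.linalg.eigh(np.cov(returns, rowvar=False))
    b = V[:, -1]                       # eigh sorts ascending; take the last
    return b if b.sum() >= 0 else -b   # fix the overall sign

def shrink_beta(b_pca, target, rho):
    """Shrink the PCA eigenvector toward a unit-norm target vector.
    rho in [0, 1] is a placeholder; the GPS construction prescribes a
    specific data-driven weight, which is not reproduced here."""
    t = target / np.linalg.norm(target)
    b = (1.0 - rho) * b_pca + rho * t
    return b / np.linalg.norm(b)

rng = np.random.default_rng(4)
p, n = 100, 60                                   # more assets than observations
beta_true = 1.0 + 0.3 * rng.normal(size=p)
R = np.outer(rng.normal(size=n), beta_true) + rng.normal(size=(n, p))
b_hat = shrink_beta(pca_beta(R), np.ones(p), rho=0.3)  # constant-vector target
```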