
The spatial sign covariance matrix with unknown location

Posted by: Daniel Vogel
Publication date: 2013
Research field: Mathematical Statistics
Paper language: English





The consistency and asymptotic normality of the spatial sign covariance matrix with unknown location are shown. Simulations illustrate the different asymptotic behavior when using the mean and the spatial median as location estimator.
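The abstract contrasts the mean and the spatial median as location estimators for the spatial sign covariance matrix (SSCM). A minimal NumPy sketch of both variants (illustrative only, not the paper's code; the Weiszfeld iteration for the spatial median and all names here are our choices):

```python
import numpy as np

def spatial_signs(x, center):
    """Center the observations and project them onto the unit sphere."""
    d = x - center
    norms = np.linalg.norm(d, axis=1, keepdims=True)
    norms[norms == 0] = 1.0          # zero vectors map to zero signs
    return d / norms

def sscm(x, center):
    """Spatial sign covariance matrix about a given location estimate."""
    s = spatial_signs(x, center)
    return s.T @ s / len(x)

def spatial_median(x, tol=1e-8, max_iter=500):
    """Weiszfeld iteration for the spatial (L1) median."""
    m = x.mean(axis=0)
    for _ in range(max_iter):
        d = np.linalg.norm(x - m, axis=1)
        d[d == 0] = tol              # avoid division by zero at data points
        w = 1.0 / d
        m_new = (w[:, None] * x).sum(axis=0) / w.sum()
        if np.linalg.norm(m_new - m) < tol:
            break
        m = m_new
    return m

rng = np.random.default_rng(0)
x = rng.standard_normal((2000, 3))   # spherical data, dimension 3
S_mean = sscm(x, x.mean(axis=0))
S_med = sscm(x, spatial_median(x))
```

By construction any SSCM has trace one; for spherical data both location choices give a matrix close to $I/3$, while the paper's point is that their asymptotic fluctuations around the limit differ.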




Read also

In this paper we study covariance estimation with missing data. We consider missing data mechanisms that can be independent of the data or have a time-varying dependency. Additionally, observed variables may have arbitrary (non-uniform) and dependent observation probabilities. For each mechanism, we construct an unbiased estimator and obtain bounds for the expected value of its estimation error in operator norm. Our bounds are equivalent, up to constant and logarithmic factors, to state-of-the-art bounds for complete and uniform missing observations. Furthermore, for the more general non-uniform and dependent cases, the proposed bounds are new or improve upon previous results. Our error estimates depend on quantities we call the scaled effective rank, which generalizes the effective rank to account for missing observations. All the estimators studied in this work have the same asymptotic convergence rate (up to logarithmic factors).
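As an illustration of the inverse-probability idea behind such unbiased estimators, here is a sketch under the simplest setting the abstract mentions: masks independent of the data, independent Bernoulli observations per coordinate, and a known zero mean. This is not the paper's estimator; the probabilities `p` and all names are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 5000, 4
p = np.array([0.9, 0.7, 0.8, 0.6])    # per-coordinate observation probabilities
x = rng.standard_normal((n, d))        # zero-mean data with true covariance I
mask = rng.random((n, d)) < p          # independent Bernoulli observation pattern
y = np.where(mask, x, 0.0)             # missing entries zero-filled

# Entry (j, k) of y.T @ y / n has expectation p_j * p_k * Sigma_jk off the
# diagonal and p_j * Sigma_jj on it, so dividing by these factors removes the bias.
q = np.outer(p, p)
np.fill_diagonal(q, p)
sigma_hat = (y.T @ y / n) / q          # unbiased under this missingness model
```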
We introduce a new random matrix model called distance covariance matrix in this paper, whose normalized trace is equivalent to the distance covariance. We first derive a deterministic limit for the eigenvalue distribution of the distance covariance matrix when the dimensions of the vectors and the sample size tend to infinity simultaneously. This limit is valid when the vectors are independent or weakly dependent through a finite-rank perturbation. It is also universal and independent of the details of the distributions of the vectors. Furthermore, the top eigenvalues of this distance covariance matrix are shown to obey an exact phase transition when the dependence of the vectors is of finite rank. This finding enables the construction of a new detector for such weak dependence where classical methods based on large sample covariance matrices or sample canonical correlations may fail in the considered high-dimensional framework.
Clifford Lam, Jianqing Fan (2009)
This paper studies the sparsistency and rates of convergence for estimating sparse covariance and precision matrices based on penalized likelihood with nonconvex penalty functions. Here, sparsistency refers to the property that all parameters that are zero are actually estimated as zero with probability tending to one. Depending on the application, sparsity may be present in the covariance matrix, its inverse or its Cholesky decomposition. We study these three sparsity exploration problems under a unified framework with a general penalty function. We show that the rates of convergence for these problems under the Frobenius norm are of order $(s_n\log p_n/n)^{1/2}$, where $s_n$ is the number of nonzero elements, $p_n$ is the size of the covariance matrix and $n$ is the sample size. This explicitly spells out that the contribution of high-dimensionality is merely a logarithmic factor. The conditions on the rate with which the tuning parameter $\lambda_n$ goes to 0 have been made explicit and compared under different penalties. As a result, for the $L_1$-penalty, to guarantee sparsistency and the optimal rate of convergence, the number of nonzero elements should be small: $s_n=O(p_n)$ at most, among $O(p_n^2)$ parameters, for estimating a sparse covariance or correlation matrix, sparse precision or inverse correlation matrix, or sparse Cholesky factor, where $s_n$ is the number of nonzero elements on the off-diagonal entries. On the other hand, using the SCAD or hard-thresholding penalty functions, there is no such restriction.
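A much simpler relative of the hard-thresholding penalty mentioned above is entrywise hard thresholding of the sample covariance at a level of order $(\log p_n/n)^{1/2}$. The sketch below (our constants and test matrix; this is plain thresholding, not the penalized-likelihood estimator analyzed in the paper) shows how such a threshold separates true zeros from a genuine off-diagonal entry:

```python
import numpy as np

def hard_threshold_cov(x, c=1.0):
    """Sample covariance with off-diagonal hard thresholding at c*sqrt(log p / n)."""
    n, p = x.shape
    s = np.cov(x, rowvar=False)
    lam = c * np.sqrt(np.log(p) / n)
    s_thr = np.where(np.abs(s) >= lam, s, 0.0)   # kill small entries
    np.fill_diagonal(s_thr, np.diag(s))          # keep the diagonal untouched
    return s_thr

rng = np.random.default_rng(4)
n, p = 400, 20
sigma = np.eye(p)
sigma[0, 1] = sigma[1, 0] = 0.6                  # one strong off-diagonal entry
x = rng.multivariate_normal(np.zeros(p), sigma, size=n)
s_hat = hard_threshold_cov(x, c=2.0)
```

The strong entry $\sigma_{01}=0.6$ is far above the threshold and survives, while almost all of the $O(p^2)$ true zeros are set exactly to zero.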
Chen Wang, Baisuo Jin, Z. D. Bai (2013)
The auto-cross covariance matrix is defined as \[\mathbf{M}_n=\frac{1}{2T}\sum_{j=1}^T\bigl(\mathbf{e}_j\mathbf{e}_{j+\tau}^*+\mathbf{e}_{j+\tau}\mathbf{e}_j^*\bigr),\] where the $\mathbf{e}_j$ are $n$-dimensional vectors of independent standard complex components with common mean 0, variance $\sigma^2$, and uniformly bounded $(2+\eta)$th moments, and $\tau$ is the lag. Jin et al. [Ann. Appl. Probab. 24 (2014) 1199-1225] proved that the LSD of $\mathbf{M}_n$ exists uniquely and nonrandomly, and is independent of $\tau$ for all $\tau\ge 1$; in addition, they gave an analytic expression for the LSD. As a continuation of Jin et al. [Ann. Appl. Probab. 24 (2014) 1199-1225], this paper proves that under the condition of uniformly bounded fourth moments, in any closed interval outside the support of the LSD, with probability 1 there are no eigenvalues of $\mathbf{M}_n$ for all large $n$. As a consequence of the main theorem, the limits of the largest and smallest eigenvalues of $\mathbf{M}_n$ are also obtained.
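The matrix $\mathbf{M}_n$ is straightforward to simulate; a sketch (dimensions and the complex-Gaussian entries are our choices for illustration) that also confirms $\mathbf{M}_n$ is Hermitian, so its spectrum is real:

```python
import numpy as np

def auto_cross_cov(e, tau):
    """M_n = (1/2T) * sum_j (e_j e_{j+tau}^* + e_{j+tau} e_j^*); e has shape (n, T+tau)."""
    n, total = e.shape
    T = total - tau
    a = e[:, :T]                 # columns e_1 .. e_T
    b = e[:, tau:tau + T]        # columns e_{1+tau} .. e_{T+tau}
    return (a @ b.conj().T + b @ a.conj().T) / (2 * T)

rng = np.random.default_rng(2)
n, T, tau = 200, 400, 1
# standard complex entries: mean 0, variance 1
e = (rng.standard_normal((n, T + tau))
     + 1j * rng.standard_normal((n, T + tau))) / np.sqrt(2)
M = auto_cross_cov(e, tau)
eig = np.linalg.eigvalsh(M)      # Hermitian, so all eigenvalues are real
```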
We gather several results on the eigenvalues of the spatial sign covariance matrix of an elliptical distribution. It is shown that the eigenvalues are a one-to-one function of the eigenvalues of the shape matrix and that they are closer together than the latter. We further provide a one-dimensional integral representation of the eigenvalues, which facilitates their numerical computation.
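The claim that the SSCM eigenvalues are a one-to-one but contracted function of the shape eigenvalues can be checked by Monte Carlo (a sketch with a diagonal Gaussian shape matrix of our choosing, not the paper's one-dimensional integral representation):

```python
import numpy as np

rng = np.random.default_rng(3)
lam = np.array([4.0, 1.0, 0.25])   # eigenvalues of the shape matrix
z = rng.standard_normal((200000, 3))
x = z * np.sqrt(lam)               # elliptical (Gaussian) sample, diagonal shape
s = x / np.linalg.norm(x, axis=1, keepdims=True)
delta = (s ** 2).mean(axis=0)      # Monte Carlo estimate of the SSCM eigenvalues

# delta sums to one, keeps the ordering of lam, and is less spread out:
# delta[0] / delta[2] comes out well below lam[0] / lam[2] = 16.
```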