
The Double-Constant Matrix, Centering Matrix and Equicorrelation Matrix: Theory and Applications

Posted by: Ben O'Neill
Published: 2021
Research field: Mathematical statistics
Paper language: English
Author: Ben O'Neill





This paper examines the properties of real symmetric matrices that have one constant value for all main-diagonal elements and another constant value for all off-diagonal elements. This matrix form is a simple subclass of the circulant matrices, which are in turn a subclass of the Toeplitz matrices. It encompasses other useful matrices that arise in statistical applications, such as the centering matrix and the equicorrelation matrix. We examine the general form of this class of matrices and derive its eigendecomposition and other important properties. We then use these results to examine the properties of the centering matrix and the equicorrelation matrix, and of various statistics that use these matrices.
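Since the eigendecomposition of this matrix class is the paper's central object, a small numerical sketch may help fix ideas. The construction below (names and parameter values are illustrative, not taken from the paper) builds a double-constant matrix with $a$ on the diagonal and $b$ off the diagonal, and checks the standard facts that its eigenvalues are $a+(n-1)b$ (with the ones vector as eigenvector) and $a-b$ (with multiplicity $n-1$); the centering and equicorrelation matrices appear as special cases in the comments.

```python
import numpy as np

def double_constant(n, a, b):
    """Double-constant matrix: a on the diagonal, b off the diagonal,
    i.e. (a - b) * I + b * J, where J is the all-ones matrix."""
    return (a - b) * np.eye(n) + b * np.ones((n, n))

n, a, b = 5, 2.0, 0.5
A = double_constant(n, a, b)

# Standard eigenstructure: a + (n-1)b on the span of the ones vector,
# a - b (multiplicity n-1) on its orthogonal complement.
eigvals = np.linalg.eigvalsh(A)           # sorted ascending
assert np.isclose(eigvals.max(), a + (n - 1) * b)
assert np.allclose(eigvals[:-1], a - b)

# Special cases arising in statistics:
C = double_constant(n, 1 - 1 / n, -1 / n)   # centering matrix I - J/n
rho = 0.3
E = double_constant(n, 1.0, rho)            # equicorrelation matrix
assert np.allclose(C, np.eye(n) - np.ones((n, n)) / n)
```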


Read also

This paper considers the problem of recovery of a low-rank matrix in the situation when most of its entries are not observed and a fraction of the observed entries are corrupted. The observations are noisy realizations of the sum of a low-rank matrix, which we wish to recover, and a second matrix having a complementary sparse structure, such as element-wise or column-wise sparsity. We analyze a class of estimators obtained by solving a constrained convex optimization problem that combines the nuclear norm and a convex relaxation of a sparse constraint. Our results are obtained for the simultaneous presence of random and deterministic patterns in the sampling scheme. We provide guarantees for the recovery of the low-rank and sparse components from partial and corrupted observations in the presence of noise, and show that the obtained rates of convergence are minimax optimal.
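The abstract's estimator solves a constrained convex program; as a rough illustration of the same idea in penalized form, the sketch below alternates a singular-value-thresholding step (the proximal operator of the nuclear norm) with an elementwise soft-thresholding step (the proximal operator of the $\ell_1$ norm) over the observed entries. The function names, penalty weights, and step size are made-up illustrations; this is not the paper's estimator or its tuning.

```python
import numpy as np

def svt(M, tau):
    # Singular value thresholding: prox of tau * (nuclear norm).
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0)) @ Vt

def soft(M, tau):
    # Elementwise soft thresholding: prox of tau * (l1 norm).
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0)

def lowrank_plus_sparse(Y, mask, lam_L=1.0, lam_S=0.1, step=1.0, iters=200):
    """Proximal alternating minimization of
    0.5 * ||P_Omega(Y - L - S)||_F^2 + lam_L * ||L||_* + lam_S * ||S||_1."""
    L = np.zeros_like(Y)
    S = np.zeros_like(Y)
    for _ in range(iters):
        R = mask * (Y - L - S)              # residual on observed entries
        L = svt(L + step * R, step * lam_L)
        R = mask * (Y - L - S)
        S = soft(S + step * R, step * lam_S)
    return L, S

rng = np.random.default_rng(0)
Y = rng.standard_normal((50, 40))           # toy data, not a structured example
mask = rng.random(Y.shape) < 0.7            # 70% of entries observed
L_hat, S_hat = lowrank_plus_sparse(Y, mask)
```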
Let $\mathbf{Z}_{M_1\times N}=\mathbf{T}^{1/2}\mathbf{X}$, where $(\mathbf{T}^{1/2})^2=\mathbf{T}$ is a positive definite matrix and $\mathbf{X}$ consists of independent random variables with mean zero and variance one. This paper proposes the unified matrix model $$\boldsymbol{\Omega}=(\mathbf{Z}\mathbf{U}_2\mathbf{U}_2^T\mathbf{Z}^T)^{-1}\mathbf{Z}\mathbf{U}_1\mathbf{U}_1^T\mathbf{Z}^T,$$ where $\mathbf{U}_1$ and $\mathbf{U}_2$ are isometric with dimensions $N\times N_1$ and $N\times (N-N_2)$ respectively, such that $\mathbf{U}_1^T\mathbf{U}_1=\mathbf{I}_{N_1}$, $\mathbf{U}_2^T\mathbf{U}_2=\mathbf{I}_{N-N_2}$ and $\mathbf{U}_1^T\mathbf{U}_2=0$. Moreover, $\mathbf{U}_1$ and $\mathbf{U}_2$ (random or non-random) are independent of $\mathbf{Z}_{M_1\times N}$, and with probability tending to one, $\mathrm{rank}(\mathbf{U}_1)=N_1$ and $\mathrm{rank}(\mathbf{U}_2)=N-N_2$. We establish the asymptotic Tracy-Widom distribution for its largest eigenvalue under moment assumptions on $\mathbf{X}$ when $N_1$, $N_2$ and $M_1$ are comparable. By selecting appropriate matrices $\mathbf{U}_1$ and $\mathbf{U}_2$, the asymptotic distributions of the maximum eigenvalues of the matrices used in Canonical Correlation Analysis (CCA) and of F matrices (including centered and non-centered versions) can be obtained.
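To make the model concrete, here is a small simulation that builds orthonormal $\mathbf{U}_1$ and $\mathbf{U}_2$ with $\mathbf{U}_1^T\mathbf{U}_2=0$ by splitting a random orthogonal matrix, then computes the largest eigenvalue of $\boldsymbol{\Omega}$. The dimensions and the identity choice of $\mathbf{T}$ are arbitrary assumptions for illustration, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
M1, N, N1, N2 = 30, 200, 60, 80     # requires N1 <= N2 and M1 <= N - N2

Z = rng.standard_normal((M1, N))    # taking T = I, so Z = X

# Orthonormal columns with U1^T U2 = 0: split a random orthogonal matrix.
Q, _ = np.linalg.qr(rng.standard_normal((N, N)))
U1 = Q[:, :N1]                      # N x N1
U2 = Q[:, N1:N1 + (N - N2)]         # N x (N - N2)

A = Z @ U2 @ U2.T @ Z.T             # invertible a.s. since M1 <= N - N2
B = Z @ U1 @ U1.T @ Z.T
Omega = np.linalg.solve(A, B)       # (Z U2 U2^T Z^T)^{-1} Z U1 U1^T Z^T

# Omega is similar to a symmetric PSD matrix, so eigenvalues are real.
lam_max = np.linalg.eigvals(Omega).real.max()
print(f"largest eigenvalue of Omega: {lam_max:.3f}")
```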
Clifford Lam, Jianqing Fan (2009)
This paper studies the sparsistency and rates of convergence for estimating sparse covariance and precision matrices based on penalized likelihood with nonconvex penalty functions. Here, sparsistency refers to the property that all parameters that are zero are actually estimated as zero with probability tending to one. Depending on the application, a sparsity prior may be placed on the covariance matrix, its inverse, or its Cholesky decomposition. We study these three sparsity exploration problems under a unified framework with a general penalty function. We show that the rates of convergence for these problems under the Frobenius norm are of order $(s_n\log p_n/n)^{1/2}$, where $s_n$ is the number of nonzero elements, $p_n$ is the size of the covariance matrix and $n$ is the sample size. This spells out explicitly that the contribution of high dimensionality is merely a logarithmic factor. The conditions on the rate at which the tuning parameter $\lambda_n$ goes to 0 are made explicit and compared under different penalties. As a result, for the $L_1$ penalty, to guarantee sparsistency and the optimal rate of convergence, the number of nonzero elements must be small: $s_n=O(p_n)$ at most, among $O(p_n^2)$ parameters, for estimating a sparse covariance or correlation matrix, a sparse precision or inverse correlation matrix, or a sparse Cholesky factor, where $s_n$ is the number of nonzero off-diagonal entries. On the other hand, with the SCAD or hard-thresholding penalty functions, there is no such restriction.
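For the precision-matrix case with the $L_1$ penalty, scikit-learn's GraphicalLasso gives a readily available penalized-likelihood estimator. Note that scikit-learn implements only the $L_1$ penalty, not the SCAD or hard-thresholding penalties discussed above, and the regularization value below is an arbitrary choice for illustration.

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)

# True precision matrix with a sparse (tridiagonal) structure.
p, n = 10, 500
Theta = np.eye(p) + 0.4 * (np.eye(p, k=1) + np.eye(p, k=-1))
Sigma = np.linalg.inv(Theta)
X = rng.multivariate_normal(np.zeros(p), Sigma, size=n)

model = GraphicalLasso(alpha=0.1).fit(X)   # alpha is the L1 penalty weight
Theta_hat = model.precision_

# Sparsistency in action: count off-diagonal entries estimated as (near) zero.
off_diag = ~np.eye(p, dtype=bool)
print("estimated zeros:", np.sum(np.abs(Theta_hat[off_diag]) < 1e-6))
```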
The consistency and asymptotic normality of the spatial sign covariance matrix with unknown location are shown. Simulations illustrate the different asymptotic behavior when using the mean and the spatial median as location estimator.
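As a reference point for the quantities in this abstract, the sketch below computes the spatial sign covariance matrix with location estimated either by the mean or by the spatial median, the latter via a plain Weiszfeld iteration. The tolerances and data are illustrative assumptions, not from the paper.

```python
import numpy as np

def spatial_median(X, iters=100, eps=1e-8):
    """Weiszfeld iteration for the spatial median (multivariate L1 median)."""
    mu = X.mean(axis=0)
    for _ in range(iters):
        d = np.linalg.norm(X - mu, axis=1)
        w = 1.0 / np.maximum(d, eps)
        mu_new = (w[:, None] * X).sum(axis=0) / w.sum()
        if np.linalg.norm(mu_new - mu) < eps:
            break
        mu = mu_new
    return mu

def spatial_sign_cov(X, center):
    """Average of outer products of the unit-normalized, centered rows."""
    U = X - center
    norms = np.linalg.norm(U, axis=1, keepdims=True)
    U = U / np.maximum(norms, 1e-12)
    return U.T @ U / len(X)

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 3))
S_mean = spatial_sign_cov(X, X.mean(axis=0))    # mean as location estimator
S_med = spatial_sign_cov(X, spatial_median(X))  # spatial median as estimator
```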
Consider estimating the n by p matrix of means of an n by p matrix of independent normally distributed observations with constant variance, where the performance of an estimator is judged using a p by p matrix quadratic error loss function. A matrix version of the James-Stein estimator is proposed, depending on a tuning constant. It is shown to dominate the usual maximum likelihood estimator for some choices of the tuning constant when n is greater than or equal to 3. This result also extends to other shrinkage estimators and settings.
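The abstract does not spell out the estimator's form, but the classical matrix extension of James-Stein shrinkage due to Efron and Morris has the flavor described: it shrinks X through $(X^T X)^{-1}$ with a tuning constant c. The sketch below assumes that classical form with c = n - p - 1; the paper's own estimator and tuning constant may differ.

```python
import numpy as np

def efron_morris(X):
    """Matrix shrinkage estimate X (I - c (X^T X)^{-1}) with c = n - p - 1.
    This classical Efron-Morris form is an assumption here, not necessarily
    the estimator proposed in the paper above."""
    n, p = X.shape
    c = n - p - 1
    return X @ (np.eye(p) - c * np.linalg.inv(X.T @ X))

rng = np.random.default_rng(0)
Theta = np.zeros((50, 4))                   # true means (all zero here)
X = Theta + rng.standard_normal((50, 4))    # X_ij ~ N(theta_ij, 1)
Theta_hat = efron_morris(X)

# With all means zero, shrinkage should reduce the Frobenius-norm error.
print(np.linalg.norm(X - Theta), np.linalg.norm(Theta_hat - Theta))
```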