
Efficient Estimation of Linear Functionals of Principal Components

Posted by Matthias Löffler
Publication date: 2017
Research field: Mathematical Statistics
Paper language: English





We study principal component analysis (PCA) for mean zero i.i.d. Gaussian observations $X_1,\dots,X_n$ in a separable Hilbert space $\mathbb{H}$ with unknown covariance operator $\Sigma$. The complexity of the problem is characterized by its effective rank $\mathbf{r}(\Sigma) := \frac{\mathrm{tr}(\Sigma)}{\|\Sigma\|}$, where $\mathrm{tr}(\Sigma)$ denotes the trace of $\Sigma$ and $\|\Sigma\|$ denotes its operator norm. We develop a method of bias reduction in the problem of estimation of linear functionals of eigenvectors of $\Sigma$. Under the assumption that $\mathbf{r}(\Sigma) = o(n)$, we establish the asymptotic normality and asymptotic properties of the risk of the resulting estimators and prove matching minimax lower bounds, showing their semi-parametric optimality.
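The two quantities the abstract builds on, the effective rank $\mathbf{r}(\Sigma)$ and the naive plug-in estimate of a linear functional $\langle \hat\theta_1, u\rangle$ of the top eigenvector, are straightforward to compute. Below is a minimal numerical sketch (the dimensions, the spiked diagonal covariance, and the direction $u$ are illustrative assumptions; the paper's actual bias-reduction step is not reproduced here):

```python
import numpy as np

def effective_rank(S):
    """Effective rank r(Sigma) = tr(Sigma) / operator norm of Sigma."""
    eigvals = np.linalg.eigvalsh(S)
    return eigvals.sum() / eigvals.max()

rng = np.random.default_rng(0)
d, n = 50, 1000
# Illustrative population covariance with a well-separated top eigenvalue.
Sigma = np.diag([5.0] + [1.0] * (d - 1))
X = rng.multivariate_normal(np.zeros(d), Sigma, size=n)

Sigma_hat = X.T @ X / n                          # sample covariance (mean-zero model)
theta_hat = np.linalg.eigh(Sigma_hat)[1][:, -1]  # top sample eigenvector
u = np.zeros(d); u[0] = 1.0                      # direction of the linear functional

print("effective rank:", effective_rank(Sigma_hat))
print("plug-in <theta_hat, u>:", abs(theta_hat @ u), "(true value 1.0 here)")
```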




Read also

Principal component analysis is an important pattern recognition and dimensionality reduction tool in many applications. Principal components are computed as eigenvectors of a maximum likelihood covariance $\widehat{\Sigma}$ that approximates a population covariance $\Sigma$, and these eigenvectors are often used to extract structural information about the variables (or attributes) of the studied population. Since PCA is based on the eigendecomposition of the proxy covariance $\widehat{\Sigma}$ rather than the ground-truth $\Sigma$, it is important to understand the approximation error in each individual eigenvector as a function of the number of available samples. The recent results of Koltchinskii and Lounici yield such bounds. In the present paper we sharpen these bounds and show that eigenvectors can often be reconstructed to a required accuracy from a sample whose size is of strictly smaller order.
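As a sanity check on what these bounds control, one can watch the distance between the top sample eigenvector and its population counterpart shrink with the sample size. A minimal simulation sketch (the diagonal spiked covariance and the sample sizes are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
d = 100
Sigma = np.diag([4.0] + [1.0] * (d - 1))   # assumed spiked population covariance
theta = np.zeros(d); theta[0] = 1.0        # true top eigenvector

for n in (200, 800, 3200):
    X = rng.multivariate_normal(np.zeros(d), Sigma, size=n)
    v = np.linalg.eigh(X.T @ X / n)[1][:, -1]  # top sample eigenvector
    v *= np.sign(v @ theta)                    # resolve the sign ambiguity
    print(n, np.linalg.norm(v - theta))        # the error decreases as n grows
```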
Let $X$ be a centered Gaussian random variable in a separable Hilbert space $\mathbb{H}$ with covariance operator $\Sigma$. We study a problem of estimation of a smooth functional of $\Sigma$ based on a sample $X_1,\dots,X_n$ of $n$ independent observations of $X$. More specifically, we are interested in functionals of the form $\langle f(\Sigma), B\rangle$, where $f:\mathbb{R}\mapsto \mathbb{R}$ is a smooth function and $B$ is a nuclear operator in $\mathbb{H}$. We prove concentration and normal approximation bounds for the plug-in estimator $\langle f(\hat\Sigma), B\rangle$, $\hat\Sigma := n^{-1}\sum_{j=1}^n X_j \otimes X_j$ being the sample covariance based on $X_1,\dots,X_n$. These bounds show that $\langle f(\hat\Sigma), B\rangle$ is an asymptotically normal estimator of its expectation $\mathbb{E}_{\Sigma}\langle f(\hat\Sigma), B\rangle$ (rather than of the parameter of interest $\langle f(\Sigma), B\rangle$) with a parametric convergence rate $O(n^{-1/2})$, provided that the effective rank $\mathbf{r}(\Sigma) := \frac{\mathrm{tr}(\Sigma)}{\|\Sigma\|}$ ($\mathrm{tr}(\Sigma)$ being the trace and $\|\Sigma\|$ being the operator norm of $\Sigma$) satisfies the assumption $\mathbf{r}(\Sigma) = o(n)$. At the same time, we show that the bias of this estimator is typically as large as $\frac{\mathbf{r}(\Sigma)}{n}$ (which is larger than $n^{-1/2}$ if $\mathbf{r}(\Sigma) \geq n^{1/2}$). In the case when $\mathbb{H}$ is a finite-dimensional space of dimension $d = o(n)$, we develop a method of bias reduction and construct an estimator $\langle h(\hat\Sigma), B\rangle$ of $\langle f(\Sigma), B\rangle$ that is asymptotically normal with convergence rate $O(n^{-1/2})$. Moreover, we study asymptotic properties of the risk of this estimator and prove minimax lower bounds for arbitrary estimators, showing the asymptotic efficiency of $\langle h(\hat\Sigma), B\rangle$ in a semi-parametric sense.
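In finite dimension the plug-in quantity $\langle f(\hat\Sigma), B\rangle = \mathrm{tr}(f(\hat\Sigma) B)$ is computed by applying $f$ to the eigenvalues of the sample covariance. A minimal sketch (the choices $f = \log$, the diagonal $\Sigma$, and $B = I/d$ are illustrative assumptions, not the paper's examples):

```python
import numpy as np

def apply_f(S, f):
    """Apply a scalar function f to a symmetric matrix via its eigendecomposition."""
    vals, vecs = np.linalg.eigh(S)
    return vecs @ np.diag(f(vals)) @ vecs.T

rng = np.random.default_rng(2)
d, n = 30, 500
Sigma = np.diag(np.linspace(2.0, 0.5, d))
X = rng.multivariate_normal(np.zeros(d), Sigma, size=n)
Sigma_hat = X.T @ X / n

f = np.log            # assumed smooth f (well defined: the eigenvalues are positive here)
B = np.eye(d) / d     # assumed nuclear operator B
plug_in = np.trace(apply_f(Sigma_hat, f) @ B)
truth = np.trace(apply_f(Sigma, f) @ B)
print(plug_in, truth) # the gap is the bias, typically of order r(Sigma)/n
```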
We study a problem of estimation of smooth functionals of the parameter $\theta$ of the Gaussian shift model $$X = \theta + \xi, \quad \theta \in E,$$ where $E$ is a separable Banach space and $X$ is an observation of the unknown vector $\theta$ in Gaussian noise $\xi$ with zero mean and known covariance operator $\Sigma$. In particular, we develop estimators $T(X)$ of $f(\theta)$ for functionals $f: E \mapsto \mathbb{R}$ of Hölder smoothness $s > 0$ such that $$\sup_{\|\theta\| \leq 1} \mathbb{E}_{\theta}(T(X) - f(\theta))^2 \lesssim \Bigl(\|\Sigma\| \vee (\mathbb{E}\|\xi\|^2)^s\Bigr) \wedge 1,$$ where $\|\Sigma\|$ is the operator norm of $\Sigma$, and show that this mean squared error rate is minimax optimal at least in the case of the standard Gaussian shift model ($E = \mathbb{R}^d$ equipped with the canonical Euclidean norm, $\xi = \sigma Z$, $Z \sim \mathcal{N}(0; I_d)$). Moreover, we determine a sharp threshold on the smoothness $s$ of the functional $f$ such that, for all $s$ above the threshold, $f(\theta)$ can be estimated efficiently with a mean squared error rate of the order $\|\Sigma\|$ in a small noise setting (that is, when $\mathbb{E}\|\xi\|^2$ is small). The construction of efficient estimators is crucially based on a bootstrap chain method of bias reduction. The results could be applied to a variety of special high-dimensional and infinite-dimensional Gaussian models (for vector, matrix and functional data).
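The idea behind bootstrap-style bias reduction is that the plug-in estimate $f(X)$ has bias $\mathbb{E}_{\theta} f(X) - f(\theta)$, which can itself be estimated by redrawing the noise (whose covariance is known) and subtracted, and the correction can be iterated. A minimal Monte Carlo sketch of the first step only (the functional $f$, the dimension, and the noise level are illustrative assumptions; the paper's bootstrap chain construction iterates this to higher order):

```python
import numpy as np

rng = np.random.default_rng(3)
d, sigma = 20, 0.3
theta = rng.normal(size=d)
theta /= np.linalg.norm(theta)              # assumed true parameter, |theta| = 1
f = lambda t: np.sin(np.linalg.norm(t))     # an assumed smooth functional f

X = theta + sigma * rng.normal(size=d)      # one observation of the shift model

# First-order bias correction: replace the plug-in f(X) by
# 2 f(X) - E[f(X + xi') | X], approximating the conditional expectation
# by Monte Carlo over fresh noise draws xi' ~ N(0, sigma^2 I_d),
# which is possible because the noise covariance is known.
xi_prime = sigma * rng.normal(size=(5000, d))
Tf_at_X = np.mean([f(X + e) for e in xi_prime])
print(f(X), 2 * f(X) - Tf_at_X, f(theta))   # plug-in, corrected, truth
```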
In this paper, we study the asymptotic behavior of the extreme eigenvalues and eigenvectors of high dimensional spiked sample covariance matrices, in the supercritical case when a reliable detection of spikes is possible. In particular, we derive the joint distribution of the extreme eigenvalues and the generalized components of the associated eigenvectors, i.e., the projections of the eigenvectors onto an arbitrary given direction, assuming that the dimension and sample size are comparably large. In general, the joint distribution is given in terms of linear combinations of finitely many Gaussian and Chi-square variables, with parameters depending on the projection direction and the spikes. Our assumption on the spikes is fully general. First, the strengths of the spikes are only required to be slightly above the critical threshold, and no upper bound on the strengths is needed. Second, multiple spikes, i.e., spikes with the same strength, are allowed. Third, no structural assumption is imposed on the spikes. Thanks to this general setting, we can then apply the results to various high dimensional statistical hypothesis testing problems involving both the eigenvalues and eigenvectors. Specifically, we propose accurate and powerful statistics for hypothesis testing on the principal components. These statistics are data-dependent and adaptive to the underlying true spikes. Numerical simulations confirm the accuracy and power of our proposed statistics and show significantly better performance compared to existing methods in the literature. In particular, our methods remain accurate and powerful even when the spikes are small or the dimension is large.
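The supercritical regime is easy to visualize numerically: above the critical threshold $1 + \sqrt{\gamma}$ (with $\gamma = d/n$), the top sample eigenvalue separates from the bulk and the top eigenvector correlates with the spike direction. A minimal sketch with a single assumed spike (the sizes and spike strength are illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)
d, n = 400, 800                      # comparably large dimension and sample size
gamma = d / n
spike = 3.0                          # spike strength, above the threshold 1 + sqrt(gamma)
Sigma = np.eye(d)
Sigma[0, 0] = spike

X = rng.multivariate_normal(np.zeros(d), Sigma, size=n)
vals, vecs = np.linalg.eigh(X.T @ X / n)

# Above the threshold, the top sample eigenvalue concentrates near
# spike * (1 + gamma / (spike - 1)) and the top eigenvector keeps a
# non-vanishing projection onto the spike direction e_1.
print(vals[-1], spike * (1 + gamma / (spike - 1)))
print(abs(vecs[0, -1]))              # generalized component along e_1
```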
Two existing approaches to functional principal components analysis (FPCA) are due to Rice and Silverman (1991) and Silverman (1996), both based on maximizing variance but introducing penalization in different ways. In this article we propose an alternative approach to FPCA using penalized rank-one approximation to the data matrix. Our contributions are four-fold: (1) by considering invariance under scale transformations of the measurements, the new formulation sheds light on how regularization should be performed for FPCA and suggests an efficient power algorithm for computation; (2) it naturally incorporates spline smoothing of discretized functional data; (3) the connection with smoothing splines also facilitates construction of cross-validation or generalized cross-validation criteria for smoothing parameter selection that allow efficient computation; (4) different smoothing parameters are permitted for different FPCs. The methodology is illustrated with a real data example and a simulation.
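The abstract does not spell out the power algorithm, but a penalized rank-one approximation of a data matrix $Y \approx u v^{\top}$ with a roughness penalty on the discretized FPC $v$ can be fitted by alternating power-type updates. A hedged sketch (the second-difference penalty, the update scheme, and the fixed smoothing step are plausible assumptions, not necessarily the authors' exact algorithm):

```python
import numpy as np

def penalized_rank_one(Y, lam, n_iter=100):
    """Fit a penalized rank-one approximation Y ~ u v^T, with a roughness
    penalty lam * v' Omega v on the discretized functional PC v, by an
    alternating power-type algorithm."""
    n, p = Y.shape
    D = np.diff(np.eye(p), n=2, axis=0)              # second-difference operator
    Omega = D.T @ D                                  # roughness penalty matrix
    smooth = np.linalg.inv(np.eye(p) + lam * Omega)  # smoothing step, precomputed
    v = np.linalg.svd(Y)[2][0]                       # start from the leading right singular vector
    for _ in range(n_iter):
        u = Y @ v                                    # update the scores, given the current PC
        v = smooth @ (Y.T @ u)                       # smoothed power update of the PC
        v /= np.linalg.norm(v)
    return u, v

# Example: noisy discretized curves; a larger lam gives a smoother leading FPC.
rng = np.random.default_rng(5)
t = np.linspace(0, 1, 80)
Y = np.outer(rng.normal(size=60), np.sin(2 * np.pi * t)) + 0.5 * rng.normal(size=(60, 80))
u, v = penalized_rank_one(Y, lam=1.0)
```

Repeating the fit on the deflated residual Y - np.outer(u, v), possibly with a different lam, would give subsequent FPCs with their own smoothing parameters, in line with contribution (4).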
