
Estimation of Low-Rank Covariance Function

Added by Karim Lounici
Publication date: 2015
Language: English





We consider the problem of estimating a low-rank covariance function $K(t,u)$ of a Gaussian process $S(t)$, $t \in [0,1]$, based on $n$ i.i.d. copies of $S$ observed in white noise. We suggest a new estimation procedure that adapts simultaneously to the low-rank structure and to the smoothness of the covariance function. The new procedure is based on nuclear norm penalization and outperforms the sample covariance function by a polynomial factor in the sample size $n$. Other results include a minimax lower bound for estimation of low-rank covariance functions, showing that our procedure is optimal, as well as a scheme to estimate the unknown noise variance of the Gaussian process.
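The paper's procedure is defined for the continuous-time model; as a rough discretized illustration, adding a nuclear norm penalty to a quadratic data-fit term reduces to soft-thresholding the eigenvalues of the sample covariance. Below is a minimal Python sketch under that simplification (the grid size, the penalty level `lam`, and all other names are illustrative assumptions, not the paper's tuning):

```python
# Minimal sketch: nuclear-norm-penalized covariance estimation on a grid.
# The minimizer of ||K - K_hat||_F^2 / 2 + lam * ||K||_* over symmetric
# matrices is obtained by soft-thresholding the eigenvalues of K_hat.
import numpy as np

def lowrank_covariance(samples, lam):
    """samples: (n, T) array of n discretized copies of S; lam: penalty level."""
    K_hat = np.cov(samples, rowvar=False, bias=True)  # sample covariance
    w, V = np.linalg.eigh(K_hat)                      # spectral decomposition
    w = np.maximum(w - lam, 0.0)                      # shrink small eigenvalues to zero
    return (V * w) @ V.T                              # low-rank estimate

# Toy usage: a rank-2 covariance observed through noisy copies.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 50)
K = np.outer(np.sin(np.pi * t), np.sin(np.pi * t)) + np.outer(t, t)
S = rng.multivariate_normal(np.zeros(50), K, size=200)
K_est = lowrank_covariance(S + 0.1 * rng.standard_normal(S.shape), lam=0.05)
print(np.linalg.matrix_rank(K_est, tol=1e-8))         # typically 2
```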



Related research

We study a panel data model with general heterogeneous effects, where slopes are allowed to vary both across individuals and over time. The key dimension-reduction assumption we employ is that the heterogeneous slopes have a factor structure, so that the high-dimensional slope matrix is low-rank and can thus be estimated using low-rank regularized regression. We provide a simple multi-step estimation procedure for the heterogeneous effects. The procedure makes use of sample-splitting and orthogonalization to accommodate inference following the use of penalized low-rank estimation. We formally verify that the resulting estimator is asymptotically normal, allowing simple construction of inferential statements for the individual-time-specific effects and for cross-sectional averages of these effects. We illustrate the proposed method in simulation experiments and by estimating the effect of the minimum wage on employment.
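The authors' full procedure involves several steps beyond the regression itself; purely as a sketch of the core low-rank regularized regression step, the snippet below fits an N x T matrix of heterogeneous slopes by nuclear-norm-penalized least squares via proximal gradient descent, assuming a single scalar regressor (all names and tuning choices are illustrative):

```python
# Minimal sketch: nuclear-norm-regularized panel regression, scalar regressor.
import numpy as np

def svt(M, tau):
    """Singular-value soft-thresholding: the prox of tau * nuclear norm."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def lowrank_panel(y, x, lam, n_iter=500):
    """Fit y[i, t] ~ x[i, t] * theta[i, t] with theta encouraged to be low-rank.
    y, x: (N, T) outcome and regressor panels; lam: nuclear-norm penalty level."""
    step = 1.0 / np.max(x**2)                 # step size <= 1 / Lipschitz constant
    theta = np.zeros_like(y)
    for _ in range(n_iter):
        grad = -x * (y - x * theta)           # gradient of 0.5 * sum of squared residuals
        theta = svt(theta - step * grad, step * lam)  # proximal gradient update
    return theta
```

A factor structure in the slopes then shows up as a small number of nonzero singular values surviving the thresholding.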
We consider the problem of estimating the covariance matrix of a random signal observed through unknown translations (modeled by cyclic shifts) and corrupted by noise. Solving this problem makes it possible to discover low-rank structures masked by the existence of translations (which act as nuisance parameters), with direct application to Principal Components Analysis (PCA). We assume that the underlying signal is of length $L$ and follows a standard factor model with mean zero and $r$ normally-distributed factors. To recover the covariance matrix in this case, we propose to employ the second- and fourth-order shift-invariant moments of the signal, known as the \textit{power spectrum} and the \textit{trispectrum}. We prove that they are sufficient for recovering the covariance matrix (under a certain technical condition) when $r < \sqrt{L}$. Correspondingly, we provide a polynomial-time procedure for estimating the covariance matrix from many (translated and noisy) observations, where no explicit knowledge of $r$ is required, and prove the procedure's statistical consistency. While our results establish that covariance estimation is possible from the power spectrum and the trispectrum for low-rank covariance matrices, we prove that this is not the case for full-rank covariance matrices. We conduct numerical experiments that corroborate our theoretical findings, and demonstrate the favorable performance of our algorithms in various settings, including in high levels of noise.
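As a small self-contained illustration of the shift-invariance the method exploits (second-order statistic only; the trispectrum and the per-observation factor-model draws are omitted), the sketch below recovers the power spectrum of a signal from cyclically shifted, noisy copies and removes the white-noise bias; all names and parameters are illustrative:

```python
# Minimal sketch: the power spectrum |FFT(x)|^2 is identical for a signal and
# any cyclic shift of it, so it can be averaged over observations whose
# translations are unknown.
import numpy as np

rng = np.random.default_rng(1)
L, n, sigma = 32, 1000, 0.1
signal = rng.standard_normal(L)              # one fixed underlying signal

obs = np.empty((n, L))
for i in range(n):
    shift = rng.integers(L)                  # unknown cyclic translation
    obs[i] = np.roll(signal, shift) + sigma * rng.standard_normal(L)

ps_est = np.mean(np.abs(np.fft.fft(obs, axis=1))**2, axis=0)
ps_est -= L * sigma**2                       # E|FFT(noise)|^2 = L * sigma^2 per frequency
print(np.max(np.abs(ps_est - np.abs(np.fft.fft(signal))**2)))  # estimation error
```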
We propose and analyze a new estimator of the covariance matrix that admits strong theoretical guarantees under weak assumptions on the underlying distribution, such as the existence of moments of only low order. While estimation of covariance matrices corresponding to sub-Gaussian distributions is well understood, much less is known in the case of heavy-tailed data. As K. Balasubramanian and M. Yuan write, "data from real-world experiments oftentimes tend to be corrupted with outliers and/or exhibit heavy tails. In such cases, it is not clear that those covariance matrix estimators ... remain optimal" and "what are the other possible strategies to deal with heavy tailed distributions warrant further studies." We make a step towards answering this question and prove tight deviation inequalities for the proposed estimator that depend only on the parameters controlling the intrinsic dimension associated to the covariance matrix (as opposed to the dimension of the ambient space); in particular, our results are applicable in the case of high-dimensional observations.
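The paper's estimator and its tuning are specific to its analysis; purely as an illustration of the general robustification theme, the sketch below uses norm truncation, one standard device for covariance estimation under heavy tails (the threshold `tau` is an assumed tuning input):

```python
# Minimal sketch: covariance estimation with norm-truncated observations.
import numpy as np

def truncated_covariance(X, tau):
    """X: (n, d) centered observations; tau: truncation radius.
    Each observation is shrunk onto the ball of radius tau before averaging,
    bounding the influence any single heavy-tailed point can exert."""
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    X_trunc = X * np.minimum(1.0, tau / np.maximum(norms, 1e-12))
    return X_trunc.T @ X_trunc / X.shape[0]
```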
Let $X$ be a centered Gaussian random variable in a separable Hilbert space $\mathbb{H}$ with covariance operator $\Sigma$. We study a problem of estimation of a smooth functional of $\Sigma$ based on a sample $X_1, \dots, X_n$ of $n$ independent observations of $X$. More specifically, we are interested in functionals of the form $\langle f(\Sigma), B\rangle$, where $f: \mathbb{R} \mapsto \mathbb{R}$ is a smooth function and $B$ is a nuclear operator in $\mathbb{H}$. We prove concentration and normal approximation bounds for the plug-in estimator $\langle f(\hat\Sigma), B\rangle$, $\hat\Sigma := n^{-1}\sum_{j=1}^n X_j \otimes X_j$ being the sample covariance based on $X_1, \dots, X_n$. These bounds show that $\langle f(\hat\Sigma), B\rangle$ is an asymptotically normal estimator of its expectation $\mathbb{E}_{\Sigma}\langle f(\hat\Sigma), B\rangle$ (rather than of the parameter of interest $\langle f(\Sigma), B\rangle$) with a parametric convergence rate $O(n^{-1/2})$, provided that the effective rank $\mathbf{r}(\Sigma) := \frac{\mathrm{tr}(\Sigma)}{\|\Sigma\|}$ ($\mathrm{tr}(\Sigma)$ being the trace and $\|\Sigma\|$ being the operator norm of $\Sigma$) satisfies the assumption $\mathbf{r}(\Sigma) = o(n)$. At the same time, we show that the bias of this estimator is typically as large as $\frac{\mathbf{r}(\Sigma)}{n}$ (which is larger than $n^{-1/2}$ if $\mathbf{r}(\Sigma) \geq n^{1/2}$). In the case when $\mathbb{H}$ is a finite-dimensional space of dimension $d = o(n)$, we develop a method of bias reduction and construct an estimator $\langle h(\hat\Sigma), B\rangle$ of $\langle f(\Sigma), B\rangle$ that is asymptotically normal with convergence rate $O(n^{-1/2})$. Moreover, we study asymptotic properties of the risk of this estimator and prove minimax lower bounds for arbitrary estimators, showing the asymptotic efficiency of $\langle h(\hat\Sigma), B\rangle$ in a semi-parametric sense.
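In the finite-dimensional case the plug-in estimator $\langle f(\hat\Sigma), B\rangle$ is straightforward to compute: apply $f$ to the eigenvalues of the sample covariance and pair the result with $B$ via the trace inner product. A minimal sketch, with illustrative choices of $f$ and $B$:

```python
# Minimal sketch: the plug-in estimator <f(Sigma_hat), B> in dimension d.
import numpy as np

def plug_in(X, f, B):
    """X: (n, d) centered sample; f: scalar function; B: (d, d) matrix."""
    Sigma_hat = X.T @ X / X.shape[0]        # sample covariance
    w, V = np.linalg.eigh(Sigma_hat)
    f_Sigma = (V * f(w)) @ V.T              # spectral calculus: f(Sigma_hat)
    return np.trace(f_Sigma @ B)            # trace inner product <f(Sigma_hat), B>

# Toy usage: estimate <log(I + Sigma), B> for a diagonal Sigma.
rng = np.random.default_rng(2)
d, n = 5, 2000
X = rng.standard_normal((n, d)) * np.sqrt([5.0, 3.0, 1.0, 1.0, 1.0])
print(plug_in(X, np.log1p, np.eye(d) / d))  # compare np.mean(np.log1p([5, 3, 1, 1, 1]))
```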
Fan et al. [\textit{Annals of Statistics} \textbf{47}(6) (2019) 3009-3031] proposed a distributed principal component analysis (PCA) algorithm to significantly reduce the communication cost between multiple servers. In this paper, we robustify their distributed algorithm by using the robust covariance matrix estimators proposed, respectively, by Minsker [\textit{Annals of Statistics} \textbf{46}(6A) (2018) 2871-2903] and Ke et al. [\textit{Statistical Science} \textbf{34}(3) (2019) 454-471] instead of the sample covariance matrix. We extend the deviation bound of robust covariance estimators with bounded fourth moments to the case of heavy-tailed distributions under only a bounded $2+\epsilon$ moment assumption. The theoretical results show that, after the shrinkage or truncation treatment of the sample covariance matrix, the statistical error rate of the final estimator produced by the robust algorithm matches the sub-Gaussian rate when $\epsilon \geq 2$ and the sampling distribution has symmetric innovations, while for $0 < \epsilon < 2$ the rate with respect to the sample size of each server is slower than under the bounded fourth moment assumption. Extensive numerical results support the theoretical analysis and indicate that the algorithm performs better than the original distributed algorithm and is robust to heavy-tailed data and outliers.
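A minimal sketch of a distributed eigenspace-aggregation skeleton in the spirit of Fan et al., with the local covariance estimator left pluggable so that a robust (shrinkage or truncation) estimator can replace the sample covariance; averaging local eigenspace projectors is one common aggregation choice, and all names are illustrative:

```python
# Minimal sketch: distributed PCA with a pluggable local covariance estimator.
import numpy as np

def distributed_pca(shards, K, cov_estimator):
    """shards: list of (n_l, d) local datasets; K: number of components."""
    d = shards[0].shape[1]
    P_avg = np.zeros((d, d))
    for X in shards:                          # computed locally on each server
        w, V = np.linalg.eigh(cov_estimator(X))
        V_k = V[:, -K:]                       # local top-K eigenvectors
        P_avg += (V_k @ V_k.T) / len(shards)  # center averages the projectors
    w, V = np.linalg.eigh(P_avg)
    return V[:, -K:]                          # aggregated top-K directions

# Toy usage with the plain sample covariance as the local estimator:
rng = np.random.default_rng(3)
shards = [rng.standard_normal((500, 20)) for _ in range(5)]
V = distributed_pca(shards, K=3, cov_estimator=lambda X: X.T @ X / len(X))
```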