
Let $X,X_1,\dots,X_n$ be i.i.d. Gaussian random variables in a separable Hilbert space ${\mathbb H}$ with zero mean and covariance operator $\Sigma={\mathbb E}(X\otimes X)$, and let $\hat\Sigma:=n^{-1}\sum_{j=1}^n (X_j\otimes X_j)$ be the sample (empirical) covariance operator based on $(X_1,\dots,X_n)$. Denote by $P_r$ the spectral projector of $\Sigma$ corresponding to its $r$-th eigenvalue $\mu_r$ and by $\hat P_r$ the empirical counterpart of $P_r$. The main goal of the paper is to obtain tight bounds on $$\sup_{x\in{\mathbb R}}\left|{\mathbb P}\left\{\frac{\|\hat P_r-P_r\|_2^2-{\mathbb E}\|\hat P_r-P_r\|_2^2}{{\rm Var}^{1/2}(\|\hat P_r-P_r\|_2^2)}\leq x\right\}-\Phi(x)\right|,$$ where $\|\cdot\|_2$ denotes the Hilbert--Schmidt norm and $\Phi$ is the standard normal distribution function. The accuracy of this normal approximation of the distribution of the squared Hilbert--Schmidt error is characterized in terms of the so-called effective rank of $\Sigma$, defined as ${\bf r}(\Sigma)=\frac{{\rm tr}(\Sigma)}{\|\Sigma\|_{\infty}}$, where ${\rm tr}(\Sigma)$ is the trace of $\Sigma$ and $\|\Sigma\|_{\infty}$ is its operator norm, as well as another parameter characterizing the size of ${\rm Var}(\|\hat P_r-P_r\|_2^2)$. Other results include non-asymptotic bounds and asymptotic representations for the mean squared Hilbert--Schmidt norm error ${\mathbb E}\|\hat P_r-P_r\|_2^2$ and the variance ${\rm Var}(\|\hat P_r-P_r\|_2^2)$, and concentration inequalities for $\|\hat P_r-P_r\|_2^2$ around its expectation.
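As a finite-dimensional illustration of the quantities in this abstract (the empirical spectral projector $\hat P_r$, the squared Hilbert--Schmidt error $\|\hat P_r-P_r\|_2^2$, and the effective rank ${\bf r}(\Sigma)$), here is a hedged NumPy sketch; the dimension, sample size, and spectrum are hypothetical choices for illustration, not taken from the paper.

```python
import numpy as np

# Hypothetical finite-dimensional setup: Gaussian data in R^d with a
# diagonal covariance whose top eigenvalue is well separated.
rng = np.random.default_rng(0)
d, n, r = 20, 5000, 0                      # r = 0 indexes the top eigenvalue here
eigvals = np.array([4.0, 2.0] + [0.5] * (d - 2))
Sigma = np.diag(eigvals)

X = rng.multivariate_normal(np.zeros(d), Sigma, size=n)
Sigma_hat = X.T @ X / n                    # sample covariance operator

def projector(M, r):
    """Rank-one spectral projector onto the r-th largest eigenvalue of M."""
    w, V = np.linalg.eigh(M)
    v = V[:, np.argsort(w)[::-1][r]]
    return np.outer(v, v)

P_r, P_r_hat = projector(Sigma, r), projector(Sigma_hat, r)
hs_err_sq = np.sum((P_r_hat - P_r) ** 2)   # squared Hilbert-Schmidt norm error

# Effective rank r(Sigma) = tr(Sigma) / ||Sigma||_inf (operator norm).
eff_rank = np.trace(Sigma) / np.linalg.norm(Sigma, 2)
```

With a large spectral gap and $n$ much bigger than $d$, the squared Hilbert--Schmidt error is small; here the effective rank evaluates to $15/4 = 3.75$, well below the ambient dimension $d = 20$.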
We consider the problem of estimating a low-rank covariance function $K(t,u)$ of a Gaussian process $S(t), t\in[0,1]$, based on $n$ i.i.d. copies of $S$ observed in white noise. We suggest a new estimation procedure adapting simultaneously to the low-rank structure and the smoothness of the covariance function. The new procedure is based on nuclear norm penalization and exhibits superior performance as compared to the sample covariance function by a polynomial factor in the sample size $n$. Other results include a minimax lower bound for estimation of low-rank covariance functions showing that our procedure is optimal, as well as a scheme to estimate the unknown noise variance of the Gaussian process.
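The computational core of nuclear norm penalization is the proximal operator of the nuclear norm, which soft-thresholds singular values and thereby promotes low rank. The sketch below is a generic illustration of that building block under assumed dimensions and noise level, not the paper's exact estimation procedure.

```python
import numpy as np

def nuclear_prox(A, lam):
    """Proximal operator of lam * nuclear norm: soft-threshold singular values."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    s_thr = np.maximum(s - lam, 0.0)       # shrink, killing small singular values
    return U @ np.diag(s_thr) @ Vt

# Hypothetical example: a rank-2 "covariance-like" matrix observed in noise.
rng = np.random.default_rng(1)
L = rng.standard_normal((30, 2)) @ rng.standard_normal((2, 30))  # rank-2 signal
K_noisy = L + 0.1 * rng.standard_normal((30, 30))
K_hat = nuclear_prox(K_noisy, lam=1.5)     # noise singular values fall below lam
```

Thresholding removes the small singular values contributed by the noise while keeping the two large signal directions, so `K_hat` has much lower rank than the raw noisy observation.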
Karim Lounici, 2008
We propose a generalized version of the Dantzig selector. We show that it satisfies sparsity oracle inequalities in prediction and estimation. We then consider the particular case of model selection in high-dimensional linear regression with the Huber loss function. In this case we derive the sup-norm convergence rate and the sign concentration property of the Dantzig estimators under a mutual coherence assumption on the dictionary.
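For context, the classical Dantzig selector minimizes $\|\beta\|_1$ subject to $\|X^\top(y - X\beta)\|_\infty \le \lambda$, which is a linear program after splitting $\beta = u - v$ with $u, v \ge 0$. The sketch below shows this standard formulation on simulated data with assumed dimensions and noise level; it is a generic illustration, not the paper's generalized estimator or its Huber-loss variant.

```python
import numpy as np
from scipy.optimize import linprog

def dantzig_selector(X, y, lam):
    """Classical Dantzig selector via LP: min ||beta||_1 s.t. ||X'(y-X beta)||_inf <= lam."""
    n, p = X.shape
    G, c = X.T @ X, X.T @ y
    # Variables z = (u, v), beta = u - v, u, v >= 0; objective sum(u) + sum(v).
    A_ub = np.block([[G, -G], [-G, G]])    # encodes -lam <= G beta - c <= lam
    b_ub = np.concatenate([lam + c, lam - c])
    res = linprog(np.ones(2 * p), A_ub=A_ub, b_ub=b_ub, bounds=(0, None))
    return res.x[:p] - res.x[p:]

# Hypothetical sparse regression instance.
rng = np.random.default_rng(2)
n, p = 100, 10
X = rng.standard_normal((n, p))
beta_true = np.zeros(p)
beta_true[:2] = [3.0, -2.0]                # 2-sparse signal
y = X @ beta_true + 0.1 * rng.standard_normal(n)
beta_hat = dantzig_selector(X, y, lam=5.0) # lam of order sigma * sqrt(2 n log p)
```

With well-conditioned Gaussian designs of this size the recovered coefficients track the sparse truth closely; the sup-norm rate and sign-recovery guarantees in the abstract formalize this behavior under mutual coherence conditions.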
