In quantum optics, the quantum state of a light beam is represented through the Wigner function, a density on $\mathbb{R}^2$ which may take negative values but must respect intrinsic positivity constraints imposed by quantum physics. In the framework of noisy quantum homodyne tomography with efficiency parameter $1/2 < \eta \leq 1$, we study the theoretical performance of a kernel estimator of the Wigner function. We prove that it is minimax efficient, up to a logarithmic factor in the sample size, for the $\mathbb{L}_\infty$-risk over a class of infinitely differentiable functions. We also compute the lower bound for the $\mathbb{L}_2$-risk. We construct an adaptive estimator, i.e., one that does not depend on the smoothness parameters, and prove that it attains the minimax rates for the corresponding classes of smooth functions. The finite-sample behaviour of our adaptive procedure is explored through numerical experiments.
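As an illustration of the kind of procedure studied here, the following sketch implements a generic deconvolution-kernel estimator of the Wigner function from noisy homodyne data $(Y_j, \Phi_j)$. The kernel's Fourier transform $|t|\exp(\gamma t^2)$ with $\gamma = (1-\eta)/(4\eta)$, the bandwidth $h$, all normalizing constants, and the grid sizes are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

def kernel_Kh(x, h, eta, n_t=2001):
    """Deconvolution kernel K_h evaluated at points x, computed by
    numerical inversion of its Fourier transform |t| * exp(gamma t^2)
    supported on |t| <= 1/h (the exponential factor compensates for
    the Gaussian detection noise of efficiency eta)."""
    gamma = (1.0 - eta) / (4.0 * eta)
    t = np.linspace(-1.0 / h, 1.0 / h, n_t)
    ft = np.abs(t) * np.exp(gamma * t ** 2)
    dt = t[1] - t[0]
    # real part of (1 / 4 pi) * integral of ft(t) * exp(-i t x) dt
    return (np.cos(np.outer(x, t)) @ ft) * dt / (4.0 * np.pi)

def wigner_estimate(q, p, Y, Phi, h, eta):
    """Kernel estimate of W(q, p) from n noisy homodyne samples (Y, Phi):
    average of K_h evaluated at [(q, p), Phi_j] - Y_j."""
    proj = q * np.cos(Phi) + p * np.sin(Phi)  # [(q, p), Phi_j]
    return kernel_Kh(proj - Y, h, eta).mean()
```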
Let $X, X_1, \dots, X_n$ be i.i.d. Gaussian random variables in a separable Hilbert space $\mathbb{H}$ with zero mean and covariance operator $\Sigma = \mathbb{E}(X \otimes X)$, and let $\hat\Sigma := n^{-1}\sum_{j=1}^n (X_j \otimes X_j)$ be the sample (empirical) covariance operator based on $(X_1, \dots, X_n)$. Denote by $P_r$ the spectral projector of $\Sigma$ corresponding to its $r$-th eigenvalue $\mu_r$ and by $\hat P_r$ the empirical counterpart of $P_r$. The main goal of the paper is to obtain tight bounds on $$ \sup_{x \in \mathbb{R}} \left| \mathbb{P}\left\{ \frac{\|\hat P_r - P_r\|_2^2 - \mathbb{E}\|\hat P_r - P_r\|_2^2}{\mathrm{Var}^{1/2}(\|\hat P_r - P_r\|_2^2)} \leq x \right\} - \Phi(x) \right|, $$ where $\|\cdot\|_2$ denotes the Hilbert--Schmidt norm and $\Phi$ is the standard normal distribution function. The accuracy of this normal approximation of the distribution of the squared Hilbert--Schmidt error is characterized in terms of the so-called effective rank of $\Sigma$, defined as $\mathbf{r}(\Sigma) = \frac{\mathrm{tr}(\Sigma)}{\|\Sigma\|_\infty}$, where $\mathrm{tr}(\Sigma)$ is the trace of $\Sigma$ and $\|\Sigma\|_\infty$ is its operator norm, as well as another parameter characterizing the size of $\mathrm{Var}(\|\hat P_r - P_r\|_2^2)$. Other results include non-asymptotic bounds and asymptotic representations for the mean squared Hilbert--Schmidt norm error $\mathbb{E}\|\hat P_r - P_r\|_2^2$ and the variance $\mathrm{Var}(\|\hat P_r - P_r\|_2^2)$, and concentration inequalities for $\|\hat P_r - P_r\|_2^2$ around its expectation.
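The following simulation sketch illustrates the quantities involved, the effective rank $\mathbf{r}(\Sigma)$ and the squared Hilbert--Schmidt error $\|\hat P_r - P_r\|_2^2$, in a finite-dimensional setting; the dimension, spectrum, and sample size below are arbitrary illustrative choices.

```python
import numpy as np

def projector(cov, r):
    """Orthogonal projector onto the eigenspace of the r-th largest
    eigenvalue of a symmetric matrix cov."""
    vals, vecs = np.linalg.eigh(cov)          # ascending eigenvalues
    v = vecs[:, ::-1][:, r - 1]               # r-th in decreasing order
    return np.outer(v, v)

def effective_rank(cov):
    """r(Sigma) = tr(Sigma) / ||Sigma||_inf (operator norm)."""
    return np.trace(cov) / np.linalg.eigvalsh(cov)[-1]

rng = np.random.default_rng(0)
d, n, r = 50, 2000, 1
Sigma = np.diag(1.0 / np.arange(1, d + 1))    # distinct eigenvalues
X = rng.multivariate_normal(np.zeros(d), Sigma, size=n)
Sigma_hat = X.T @ X / n                       # sample covariance
P, P_hat = projector(Sigma, r), projector(Sigma_hat, r)
hs_err_sq = np.sum((P_hat - P) ** 2)          # ||P_hat - P||_2^2
print(effective_rank(Sigma), hs_err_sq)
```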
We consider the problem of estimating a low-rank covariance function $K(t,u)$ of a Gaussian process $S(t)$, $t \in [0,1]$, based on $n$ i.i.d. copies of $S$ observed in white noise. We suggest a new estimation procedure adapting simultaneously to the low-rank structure and the smoothness of the covariance function. The new procedure is based on nuclear norm penalization and outperforms the sample covariance function by a polynomial factor in the sample size $n$. Other results include a minimax lower bound for the estimation of low-rank covariance functions, showing that our procedure is optimal, as well as a scheme to estimate the unknown noise variance of the Gaussian process.
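On a discretized grid, nuclear norm penalization of a symmetric sample covariance matrix admits a closed-form solution by soft-thresholding its spectrum; the sketch below shows that proximal step. The discretization, the penalty level $\lambda$, and the toy covariance are illustrative assumptions, not the paper's estimator.

```python
import numpy as np

def nuclear_norm_shrink(K_hat, lam):
    """Solve min_K ||K - K_hat||_F^2 + lam * ||K||_* for symmetric K_hat
    by soft-thresholding its eigenvalues at level lam / 2."""
    vals, vecs = np.linalg.eigh(K_hat)
    shrunk = np.sign(vals) * np.maximum(np.abs(vals) - lam / 2.0, 0.0)
    return (vecs * shrunk) @ vecs.T

# usage: denoise a rank-2 covariance observed on a 100-point grid
t = np.linspace(0, 1, 100)
K = np.outer(np.sin(np.pi * t), np.sin(np.pi * t)) + np.outer(t, t)
rng = np.random.default_rng(1)
K_hat = K + 0.1 * rng.standard_normal((100, 100))
K_hat = (K_hat + K_hat.T) / 2                 # symmetrize the noise
K_tilde = nuclear_norm_shrink(K_hat, lam=1.0)
```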
This paper considers the problem of recovery of a low-rank matrix in the situation when most of its entries are not observed and a fraction of the observed entries are corrupted. The observations are noisy realizations of the sum of a low-rank matrix, which we wish to recover, and a second matrix having a complementary sparse structure, such as element-wise or column-wise sparsity. We analyze a class of estimators obtained by solving a constrained convex optimization problem that combines the nuclear norm and a convex relaxation of a sparsity constraint. Our results are obtained for the simultaneous presence of random and deterministic patterns in the sampling scheme. We provide guarantees for the recovery of the low-rank and sparse components from partial and corrupted observations in the presence of noise and show that the obtained rates of convergence are minimax optimal.
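A common way to compute estimators of this type is proximal gradient descent on the penalized form $\frac12\|P_\Omega(L + S - Y)\|_F^2 + \lambda_1\|L\|_* + \lambda_2\|S\|_1$ with element-wise sparsity; the sketch below implements that generic scheme. The step size, penalty levels, and the penalized (rather than constrained) formulation are illustrative assumptions, not the paper's exact estimator.

```python
import numpy as np

def svd_shrink(A, tau):
    """Singular value soft-thresholding (prox of tau * nuclear norm)."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def soft(A, tau):
    """Element-wise soft-thresholding (prox of tau * l1 norm)."""
    return np.sign(A) * np.maximum(np.abs(A) - tau, 0.0)

def robust_completion(Y, mask, lam1, lam2, n_iter=300, step=0.5):
    """Proximal gradient steps on
    0.5 * ||mask * (L + S - Y)||_F^2 + lam1 ||L||_* + lam2 ||S||_1,
    where mask is the 0/1 matrix of observed entries."""
    L = np.zeros_like(Y)
    S = np.zeros_like(Y)
    for _ in range(n_iter):
        grad = mask * (L + S - Y)          # gradient of the data-fit term
        L = svd_shrink(L - step * grad, step * lam1)
        S = soft(S - step * grad, step * lam2)
    return L, S
```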
Karim Lounici, 2008
We propose a generalized version of the Dantzig selector. We show that it satisfies sparsity oracle inequalities in prediction and estimation. We then consider the particular case of model selection in high-dimensional linear regression with the Huber loss function. In this case we derive the sup-norm convergence rate and the sign concentration property of the Dantzig estimators under a mutual coherence assumption on the dictionary.
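For reference, the classical squared-loss case of the Dantzig selector, $\min \|\beta\|_1$ subject to $\|n^{-1} X^T (y - X\beta)\|_\infty \leq \lambda$, can be solved as a linear program; the sketch below does so with scipy (the helper name and the choice of $\lambda$ are illustrative). The Huber-loss variant studied in the paper replaces the residual in the gradient constraint by its clipped version.

```python
import numpy as np
from scipy.optimize import linprog

def dantzig_selector(X, y, lam):
    """min ||beta||_1  s.t.  ||X^T (y - X beta)||_inf / n <= lam,
    posed as an LP in z = (beta, u) with |beta_i| <= u_i."""
    n, p = X.shape
    G = X.T @ X / n
    g = X.T @ y / n
    c = np.concatenate([np.zeros(p), np.ones(p)])  # minimize sum(u)
    I = np.eye(p)
    A_ub = np.block([[ I, -I],                     #  beta - u <= 0
                     [-I, -I],                     # -beta - u <= 0
                     [ G, np.zeros((p, p))],       #  G beta <= g + lam
                     [-G, np.zeros((p, p))]])      # -G beta <= lam - g
    b_ub = np.concatenate([np.zeros(2 * p), g + lam, lam - g])
    bounds = [(None, None)] * p + [(0, None)] * p
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[:p]
```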