
AIC, Cp and estimators of loss for elliptically symmetric distributions

Publication date: 2013
Language: English





In this article, we develop a modern perspective on Akaike's Information Criterion (AIC) and Mallows' Cp for model selection. Despite the differences in their respective motivations, they are equivalent in the special case of Gaussian linear regression. In this case they are also equivalent to a third criterion, an unbiased estimator of the quadratic prediction loss, derived from loss estimation theory. Our first contribution is to provide an explicit link between loss estimation and model selection through a new oracle inequality. We then show that the form of the unbiased estimator of the quadratic prediction loss under a Gaussian assumption still holds under a more general distributional assumption, the family of spherically symmetric distributions. One of the features of our results is that our criterion does not rely on the specific form of the distribution, but only on its spherical symmetry. Moreover, this family of laws allows for some dependence between the observations, a case not often studied.
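
To make the Gaussian-case equivalence concrete, here is a minimal numerical sketch (ours, not the paper's; the design, dimensions, and known variance sigma2 are illustrative) showing that sigma^2 times Mallows' Cp coincides with the unbiased estimate RSS + (2p - n) sigma^2 of the quadratic prediction loss:

    import numpy as np

    rng = np.random.default_rng(0)
    n, p, sigma2 = 100, 5, 1.0                       # illustrative sizes, known noise variance
    X = rng.normal(size=(n, p))
    beta = np.ones(p)
    y = X @ beta + rng.normal(scale=np.sqrt(sigma2), size=n)

    beta_hat = np.linalg.lstsq(X, y, rcond=None)[0]  # least-squares fit
    rss = float(np.sum((y - X @ beta_hat) ** 2))

    cp = rss / sigma2 - n + 2 * p                    # Mallows' Cp with known sigma^2
    loss_hat = rss + (2 * p - n) * sigma2            # unbiased estimate of E||X(beta_hat - beta)||^2
    print(cp * sigma2, loss_hat)                     # equal: the two criteria agree

With the variance known, AIC differs from Cp only by an additive constant, so all three criteria rank candidate models identically, which is the equivalence the abstract refers to.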



Related research

We consider the problem of estimating the mean vector $\theta$ of a $d$-dimensional spherically symmetric distributed $X$ based on balanced loss functions of the forms: (i) $\omega \rho(\|\delta - \delta_{0}\|^{2}) + (1-\omega)\rho(\|\delta - \theta\|^{2})$ and (ii) $\ell\left(\omega \|\delta - \delta_{0}\|^{2} + (1-\omega)\|\delta - \theta\|^{2}\right)$, where $\delta_0$ is a target estimator, and where $\rho$ and $\ell$ are increasing and concave functions. For $d \geq 4$ and the target estimator $\delta_0(X) = X$, we provide Baranchik-type estimators that dominate $\delta_0(X) = X$ and are minimax. The findings represent extensions of those of Marchand & Strawderman (2020) in two directions: (a) from scale mixtures of normals to the spherical class of distributions with Lebesgue densities and (b) from completely monotone to concave $\rho$ and $\ell$.
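
For readers unfamiliar with the Baranchik form the abstract refers to, here is a small illustrative sketch (the choice of r and the dimension are placeholders of ours; the paper's actual conditions on r under balanced loss are more delicate):

    import numpy as np

    def baranchik(x, r):
        # Baranchik-type shrinkage: delta(x) = (1 - r(||x||^2) / ||x||^2) * x
        s = float(x @ x)
        return (1.0 - r(s) / s) * x

    d = 6                                            # the abstract requires d >= 4
    x = np.array([1.0, 2.0, 0.5, -1.0, 0.3, 1.5])
    james_stein = baranchik(x, lambda s: d - 2.0)    # constant r recovers James-Stein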
We investigate two important properties of M-estimators, namely robustness and tractability, in the linear regression setting when the observations are contaminated by arbitrary outliers. Specifically, robustness is the statistical property that the estimator should always be close to the underlying true parameters regardless of the distribution of the outliers, and tractability is the computational property that the estimator can be computed efficiently even if the objective function of the M-estimator is non-convex. In this article, by studying the landscape of the empirical risk, we show that under mild conditions many M-estimators enjoy both properties simultaneously when the percentage of outliers is small. We further extend our analysis to the high-dimensional setting, where the number of parameters is greater than the number of samples, $p \gg n$, and prove that when the proportion of outliers is small, penalized M-estimators with an $L_1$ penalty enjoy robustness and tractability simultaneously. Our research provides an analytic approach to the effects of outliers and tuning parameters on the robustness and tractability of some families of M-estimators. A simulation and a case study illustrate the usefulness of our theoretical results for M-estimators under Welsch's exponential squared loss.
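
As a concrete instance of a non-convex M-estimator, here is a minimal iteratively reweighted least squares (IRLS) sketch of ours for linear regression under the Welsch exponential squared loss; the tuning constant c and the iteration count are illustrative, and the robustness/tractability guarantees come from the paper's theory, not from this code:

    import numpy as np

    def welsch_irls(X, y, c=2.0, iters=50):
        # Welsch loss rho(r) = (c^2 / 2) * (1 - exp(-(r / c)^2));
        # its IRLS weights are w(r) = exp(-(r / c)^2), so outliers get
        # exponentially small influence on the fit.
        beta = np.linalg.lstsq(X, y, rcond=None)[0]  # OLS initialisation
        for _ in range(iters):
            r = y - X @ beta
            w = np.exp(-(r / c) ** 2)
            Xw = X * w[:, None]                      # row-weighted design
            beta = np.linalg.solve(X.T @ Xw, Xw.T @ y)
        return beta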
In the Gaussian white noise model, we study the estimation of an unknown multidimensional function $f$ in the uniform norm using kernel methods. The performance of procedures is measured from the maxiset point of view: we determine the set of functions that are well estimated (at a prescribed rate) by each procedure. In this paper, we determine the maxisets associated with kernel estimators and with the Lepski procedure for rates of convergence of the form $(\log n / n)^{\beta/(2\beta+d)}$. We characterize the maxisets in terms of Besov and Hölder spaces of regularity $\beta$.
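
The Lepski procedure mentioned above adapts the bandwidth to the unknown regularity. The sketch below is our simplified rendering of the rule (f_hat, bandwidths, and noise_level are assumed inputs): keep the largest bandwidth whose estimate stays within the noise threshold of every estimate built with a smaller bandwidth.

    import numpy as np

    def lepski_bandwidth(f_hat, bandwidths, noise_level, C=1.0):
        # f_hat(h): kernel estimate on a fixed grid for bandwidth h;
        # noise_level(h): stochastic error bound for that bandwidth.
        hs = sorted(bandwidths, reverse=True)        # largest bandwidth first
        est = {h: f_hat(h) for h in hs}
        for i, h in enumerate(hs):
            if all(np.max(np.abs(est[h] - est[hp])) <= C * noise_level(hp)
                   for hp in hs[i + 1:]):
                return h                             # largest admissible bandwidth
        return hs[-1]                                # fall back to the smallest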
The Riemannian geometry of covariance matrices has been essential to several successful applications, in computer vision, biomedical signal and image processing, and radar data processing. For these applications, an important ongoing challenge is to develop Riemannian-geometric tools which are adapted to structured covariance matrices. The present paper proposes to meet this challenge by introducing a new class of probability distributions, Gaussian distributions of structured covariance matrices. These are Riemannian analogs of Gaussian distributions, which only sample from covariance matrices having a preassigned structure, such as complex, Toeplitz, or block-Toeplitz. The usefulness of these distributions stems from three features: (1) they are completely tractable, analytically or numerically, when dealing with large covariance matrices, (2) they provide a statistical foundation to the concept of structured Riemannian barycentre (i.e. Fréchet or geometric mean), (3) they lead to efficient statistical learning algorithms, which realise, among other tasks, density estimation and classification of structured covariance matrices. The paper starts from the observation that several spaces of structured covariance matrices, considered from a geometric point of view, are Riemannian symmetric spaces. Accordingly, it develops an original theory of Gaussian distributions on Riemannian symmetric spaces, of their statistical inference, and of their relationship to the concept of Riemannian barycentre. Then, it uses this original theory to give a detailed description of Gaussian distributions of three kinds of structured covariance matrices: complex, Toeplitz, and block-Toeplitz. Finally, it describes algorithms for density estimation and classification of structured covariance matrices, based on mixture models of these Gaussian distributions.
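
For orientation, the Riemannian barycentre invoked in feature (2) can be computed by a standard fixed-point iteration under the affine-invariant metric; the sketch below is a generic scheme of ours, not the paper's algorithm, and it ignores the structured (complex, Toeplitz, block-Toeplitz) constraints:

    import numpy as np
    from scipy.linalg import expm, logm, sqrtm, inv

    def frechet_mean_spd(mats, iters=20, step=1.0):
        # Karcher/Frechet mean of SPD matrices: repeatedly map the data to the
        # tangent space at the current estimate, average, and map back.
        M = sum(mats) / len(mats)                    # Euclidean mean as a start
        for _ in range(iters):
            S = np.real(sqrtm(M))
            S_inv = inv(S)
            T = sum(np.real(logm(S_inv @ A @ S_inv)) for A in mats) / len(mats)
            M = S @ expm(step * T) @ S
        return M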
We discuss the possibilities and limitations of estimating the mean of a real-valued random variable from independent and identically distributed observations from a non-asymptotic point of view. In particular, we define estimators with sub-Gaussian behavior even for certain heavy-tailed distributions. We also prove various impossibility results for mean estimators.
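
A classical construction achieving the sub-Gaussian deviations discussed here is the median-of-means estimator; the version below is a textbook sketch of ours (the block count k is a tuning parameter), not necessarily the exact estimator analysed in the paper:

    import numpy as np

    def median_of_means(x, k, seed=0):
        # Split the sample into k blocks, average within each block, and take
        # the median of the block means: heavy tails can only corrupt a
        # minority of blocks, which yields sub-Gaussian concentration.
        rng = np.random.default_rng(seed)
        blocks = np.array_split(rng.permutation(x), k)
        return float(np.median([b.mean() for b in blocks]))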