
On shrinkage estimation of a spherically symmetric distribution for balanced loss functions

Posted by Éric Marchand
Publication date: 2021
Research field: Mathematical Statistics
Paper language: English





We consider the problem of estimating the mean vector $\theta$ of a $d$-dimensional spherically symmetric distributed $X$ based on balanced loss functions of the forms: (i) $\omega \rho(\|\delta - \delta_{0}\|^{2}) + (1-\omega)\rho(\|\delta - \theta\|^{2})$ and (ii) $\ell\left(\omega \|\delta - \delta_{0}\|^{2} + (1-\omega)\|\delta - \theta\|^{2}\right)$, where $\delta_0$ is a target estimator, and where $\rho$ and $\ell$ are increasing and concave functions. For $d \geq 4$ and the target estimator $\delta_0(X)=X$, we provide Baranchik-type estimators that dominate $\delta_0(X)=X$ and are minimax. The findings represent extensions of those of Marchand & Strawderman (2020) in two directions: (a) from scale mixtures of normals to the spherical class of distributions with Lebesgue densities and (b) from completely monotone to concave $\rho$ and $\ell$.
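A minimal numerical sketch of the quantities above, assuming the normal case with $\rho(t)=\sqrt{t}$, a James-Stein-type constant $a = d-2$, and a bounded nondecreasing $r$; these choices are made here for concreteness only and are not the paper's exact conditions for domination or minimaxity.

```python
# Illustrative sketch only: a Baranchik-type shrinkage estimator and the
# balanced loss of form (i).  The choices r(t) = min(t, 1), a = d - 2 and
# rho = sqrt are assumptions for illustration, not the paper's conditions.
import numpy as np

def baranchik(x, a=None):
    """delta(x) = (1 - a * r(||x||^2) / ||x||^2) * x."""
    d = x.size
    if a is None:
        a = d - 2                      # James-Stein-type constant (assumption)
    t = float(np.sum(x ** 2))
    r = min(t, 1.0)                    # bounded, nondecreasing r (illustrative)
    return (1.0 - a * r / t) * x

def balanced_loss_i(delta, delta0, theta, omega=0.5, rho=np.sqrt):
    """omega * rho(||delta - delta0||^2) + (1 - omega) * rho(||delta - theta||^2)."""
    return (omega * rho(np.sum((delta - delta0) ** 2))
            + (1.0 - omega) * rho(np.sum((delta - theta) ** 2)))

rng = np.random.default_rng(0)
theta = np.zeros(6)                              # d = 6 >= 4
x = theta + rng.standard_normal(6)               # one draw (normal case)
print(balanced_loss_i(baranchik(x), x, theta))   # loss of the shrinkage estimator
print(balanced_loss_i(x, x, theta))              # loss of the target delta_0(X) = X
```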




Read also

Consider estimating the n by p matrix of means of an n by p matrix of independent normally distributed observations with constant variance, where the performance of an estimator is judged using a p by p matrix quadratic error loss function. A matrix version of the James-Stein estimator is proposed, depending on a tuning constant. It is shown to dominate the usual maximum likelihood estimator for some choices of the tuning constant when n is greater than or equal to 3. This result also extends to other shrinkage estimators and settings.
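For reference, a minimal sketch of the classical positive-part James-Stein estimator for a single normal mean vector, which the matrix construction described above generalizes; the known-variance parameter sigma2 is an assumption made here for illustration.

```python
# Classical positive-part James-Stein estimator for a p-dimensional normal
# mean with known variance sigma2; the matrix version discussed above
# generalizes this construction (with a tuning constant).
import numpy as np

def james_stein(x, sigma2=1.0):
    p = x.size
    shrink = max(0.0, 1.0 - (p - 2) * sigma2 / float(np.sum(x ** 2)))
    return shrink * x
```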
The James-Stein estimator is an estimator of the multivariate normal mean and dominates the maximum likelihood estimator (MLE) under squared error loss. The original work inspired great interest in developing shrinkage estimators for a variety of problems. Nonetheless, research on shrinkage estimation for manifold-valued data is scarce. In this paper, we propose shrinkage estimators for the parameters of the Log-Normal distribution defined on the manifold of $N \times N$ symmetric positive-definite matrices. For this manifold, we choose the Log-Euclidean metric as its Riemannian metric since it is easy to compute and is widely used in applications. By using the Log-Euclidean distance in the loss function, we derive a shrinkage estimator in an analytic form and show that it is asymptotically optimal within a large class of estimators including the MLE, which is the sample Fréchet mean of the data. We demonstrate the performance of the proposed shrinkage estimator via several simulated data experiments. Furthermore, we apply the shrinkage estimator to perform statistical inference in diffusion magnetic resonance imaging problems.
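A minimal sketch of the Log-Euclidean geometry referred to above: the distance between two symmetric positive-definite matrices and the sample Fréchet mean under this metric (the MLE mentioned in the abstract). The shrinkage estimator's analytic form is given in the paper and is not reproduced here.

```python
# Log-Euclidean distance and sample Frechet mean for symmetric
# positive-definite matrices; under this metric the Frechet mean is the
# exponential of the arithmetic mean of the matrix logarithms.
import numpy as np
from scipy.linalg import expm, logm

def log_euclidean_distance(A, B):
    """d(A, B) = || logm(A) - logm(B) ||_F."""
    return np.linalg.norm(logm(A) - logm(B), ord="fro")

def log_euclidean_frechet_mean(mats):
    """expm of the mean of the matrix logarithms of the sample."""
    return expm(np.mean([logm(M) for M in mats], axis=0))
```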
Data in non-Euclidean spaces are commonly encountered in many fields of Science and Engineering. For instance, in Robotics, attitude sensors capture orientation, which is an element of a Lie group. In the recent past, several researchers have reported methods that take into account the geometry of Lie groups in designing parameter estimation algorithms in nonlinear spaces. Maximum likelihood estimators (MLE) are quite commonly used for such tasks, and it is well known in the field of statistics that Stein's shrinkage estimators dominate the MLE in a mean-squared sense assuming the observations are from a normal population. In this paper, we present a novel shrinkage estimator for data residing in Lie groups, specifically, abelian or compact Lie groups. The key theoretical results presented in this paper are: (i) Stein's Lemma and its proof for Lie groups, and (ii) proof of dominance of the proposed shrinkage estimator over the MLE for abelian and compact Lie groups. We present simulation studies of the dominance of the proposed shrinkage estimator and an application of shrinkage estimation to multiple-robot localization.
Aurélie Boisbunon (2013)
In this article, we develop a modern perspective on Akaike's Information Criterion and Mallows' Cp for model selection. Despite the differences in their respective motivations, they are equivalent in the special case of Gaussian linear regression. In this case they are also equivalent to a third criterion, an unbiased estimator of the quadratic prediction loss, derived from loss estimation theory. Our first contribution is to provide an explicit link between loss estimation and model selection through a new oracle inequality. We then show that the form of the unbiased estimator of the quadratic prediction loss under a Gaussian assumption still holds under a more general distributional assumption, the family of spherically symmetric distributions. One of the features of our results is that our criterion does not rely on the specificity of the distribution, but only on its spherical symmetry. This family of laws also offers some dependence between the observations, a case not often studied.
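A small sketch, assuming a Gaussian linear model with the noise variance sigma2 treated as known, of Mallows' Cp and AIC for a candidate design matrix; in this setting the two criteria differ only by a constant in the sample size and therefore rank candidate models identically.

```python
# Mallows' Cp and (up to an additive constant) AIC for a candidate design
# matrix X under a Gaussian linear model with sigma2 treated as known; the
# two criteria then differ by the constant -n, so they select the same model.
import numpy as np

def rss(y, X):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return float(resid @ resid)

def mallows_cp(y, X, sigma2):
    n, p = X.shape
    return rss(y, X) / sigma2 - n + 2 * p

def aic_gaussian(y, X, sigma2):
    n, p = X.shape
    return rss(y, X) / sigma2 + 2 * p
```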
Van Ha Hoang (2017)
We consider a stochastic individual-based model in continuous time to describe a size-structured population for cell divisions. This model is motivated by the detection of cellular aging in biology. We address here the problem of nonparametric estimation of the kernel ruling the divisions, based on the eigenvalue problem related to the asymptotic behavior in large populations. This inverse problem involves a multiplicative deconvolution operator. Using Fourier techniques, we derive a nonparametric estimator whose consistency is studied. The main difficulty comes from the non-standard equations connecting the Fourier transforms of the kernel and the parameters of the model. A numerical study is carried out, and we pay special attention to the derivation of bandwidths by using resampling.