
Improved estimation of the MSEs and the MSE matrices for shrinkage estimators of multivariate normal means and their applications

Posted by: Hisayuki Hara
Publication date: 2007
Research field: Mathematical Statistics
Paper language: English
Author: Hisayuki Hara





In this article we provide some nonnegative and positive estimators of the mean squared errors (MSEs) for shrinkage estimators of multivariate normal means. The proposed estimators are shown to improve on the uniformly minimum variance unbiased estimator (UMVUE) under a quadratic loss criterion. A similar improvement is also obtained for the estimators of the MSE matrices for shrinkage estimators. We also apply the proposed estimators of the MSE matrix to form confidence sets centered at shrinkage estimators and show their usefulness through numerical experiments.
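As background for the abstract above, the following sketch shows the classical James-Stein shrinkage estimator for a multivariate normal mean and the standard unbiased (UMVUE-type) estimate of its MSE. It is illustrative only, not the paper's proposed estimators; note that the unbiased MSE estimate can go negative near the origin, which is precisely what motivates nonnegative alternatives.

```python
import numpy as np

def james_stein(x):
    """James-Stein shrinkage estimate of theta for X ~ N_p(theta, I_p), p >= 3.

    Shrinks the observation toward the origin by the factor 1 - (p-2)/||x||^2
    (this is the plain, not positive-part, version)."""
    p = len(x)
    return (1.0 - (p - 2) / np.dot(x, x)) * x

def umvue_mse(x):
    """Classical unbiased estimate of the James-Stein estimator's MSE:
    p - (p-2)^2 / ||x||^2.

    Negative whenever ||x||^2 < (p-2)^2 / p, illustrating why nonnegative
    MSE estimators are of interest."""
    p = len(x)
    return p - (p - 2) ** 2 / np.dot(x, x)

rng = np.random.default_rng(0)
x = rng.standard_normal(10)          # a draw with theta = 0, p = 10
print(james_stein(x))
print(umvue_mse(x))
```

For observations far from the origin the shrinkage factor approaches one and the MSE estimate approaches p, the risk of the maximum likelihood estimator.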


Read also

The James-Stein estimator is an estimator of the multivariate normal mean and dominates the maximum likelihood estimator (MLE) under squared error loss. The original work inspired great interest in developing shrinkage estimators for a variety of problems. Nonetheless, research on shrinkage estimation for manifold-valued data is scarce. In this paper, we propose shrinkage estimators for the parameters of the Log-Normal distribution defined on the manifold of $N \times N$ symmetric positive-definite matrices. For this manifold, we choose the Log-Euclidean metric as its Riemannian metric since it is easy to compute and is widely used in applications. By using the Log-Euclidean distance in the loss function, we derive a shrinkage estimator in an analytic form and show that it is asymptotically optimal within a large class of estimators including the MLE, which is the sample Fréchet mean of the data. We demonstrate the performance of the proposed shrinkage estimator via several simulated data experiments. Furthermore, we apply the shrinkage estimator to perform statistical inference in diffusion magnetic resonance imaging problems.
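The Log-Euclidean metric mentioned in the abstract above is straightforward to compute, which is one reason the authors adopt it. A minimal sketch, assuming SciPy's matrix logarithm and exponential: the distance between SPD matrices is the Frobenius norm of the difference of their matrix logs, and the sample Fréchet mean under this metric is the matrix exponential of the averaged logs.

```python
import numpy as np
from scipy.linalg import logm, expm

def log_euclidean_distance(a, b):
    """Log-Euclidean distance between SPD matrices: ||log A - log B||_F."""
    return np.linalg.norm(logm(a) - logm(b), "fro")

def log_euclidean_mean(mats):
    """Sample Frechet mean under the Log-Euclidean metric:
    exp of the Euclidean average of the matrix logarithms."""
    return expm(np.mean([logm(m) for m in mats], axis=0))
```

Because the metric reduces geometry on the SPD manifold to Euclidean operations in log-space, closed-form expressions like the shrinkage estimator described above become tractable.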
The problem of reducing the bias of the maximum likelihood estimator in a general multivariate elliptical regression model is considered. The model is very flexible and allows the mean vector and the dispersion matrix to have parameters in common. Many frequently used models are special cases of this general formulation, namely: errors-in-variables models, nonlinear mixed-effects models, heteroscedastic nonlinear models, among others. In any of these models, the vector of the errors may have any multivariate elliptical distribution. We obtain the second-order bias of the maximum likelihood estimator, a bias-corrected estimator, and a bias-reduced estimator. Simulation results indicate the effectiveness of the bias correction and bias reduction schemes.
Markov chain Monte Carlo (MCMC) algorithms are used to estimate features of interest of a distribution. The Monte Carlo error in estimation has an asymptotic normal distribution whose multivariate nature has so far been ignored in the MCMC community. We present a class of multivariate spectral variance estimators for the asymptotic covariance matrix in the Markov chain central limit theorem and provide conditions for strong consistency. We examine the finite sample properties of the multivariate spectral variance estimators and their eigenvalues in the context of a vector autoregressive process of order 1.
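For intuition about the asymptotic covariance matrix in the Markov chain CLT discussed above, here is a sketch of the multivariate batch means estimator, a simpler relative of the spectral variance estimators the abstract studies (not the authors' estimator itself): split the chain into batches, and scale the sample covariance of the batch means.

```python
import numpy as np

def multivariate_batch_means(chain, batch_size):
    """Multivariate batch means estimate of the asymptotic covariance matrix
    in the Markov chain CLT.

    chain: (n, p) array of MCMC draws; trailing draws that do not fill a
    complete batch are discarded."""
    n, p = chain.shape
    a = n // batch_size                              # number of full batches
    batches = chain[: a * batch_size].reshape(a, batch_size, p)
    batch_means = batches.mean(axis=1)               # (a, p)
    grand_mean = chain[: a * batch_size].mean(axis=0)
    centered = batch_means - grand_mean
    return batch_size * (centered.T @ centered) / (a - 1)
```

For an i.i.d. chain the estimate converges to the stationary covariance; for a correlated chain it additionally captures the autocovariance contributions, which is what distinguishes it from the naive sample covariance.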
Let $\mathbf{Z}_{M_1\times N}=\mathbf{T}^{\frac{1}{2}}\mathbf{X}$, where $(\mathbf{T}^{\frac{1}{2}})^2=\mathbf{T}$ is a positive definite matrix and $\mathbf{X}$ consists of independent random variables with mean zero and variance one. This paper proposes a unified matrix model $$\boldsymbol{\Omega}=(\mathbf{Z}\mathbf{U}_2\mathbf{U}_2^T\mathbf{Z}^T)^{-1}\mathbf{Z}\mathbf{U}_1\mathbf{U}_1^T\mathbf{Z}^T,$$ where $\mathbf{U}_1$ and $\mathbf{U}_2$ are isometric with dimensions $N\times N_1$ and $N\times(N-N_2)$ respectively, such that $\mathbf{U}_1^T\mathbf{U}_1=\mathbf{I}_{N_1}$, $\mathbf{U}_2^T\mathbf{U}_2=\mathbf{I}_{N-N_2}$ and $\mathbf{U}_1^T\mathbf{U}_2=0$. Moreover, $\mathbf{U}_1$ and $\mathbf{U}_2$ (random or non-random) are independent of $\mathbf{Z}_{M_1\times N}$, and with probability tending to one, $\mathrm{rank}(\mathbf{U}_1)=N_1$ and $\mathrm{rank}(\mathbf{U}_2)=N-N_2$. We establish the asymptotic Tracy-Widom distribution for its largest eigenvalue under moment assumptions on $\mathbf{X}$ when $N_1$, $N_2$ and $M_1$ are comparable. By selecting appropriate matrices $\mathbf{U}_1$ and $\mathbf{U}_2$, the asymptotic distributions of the maximum eigenvalues of the matrices used in Canonical Correlation Analysis (CCA) and of F matrices (including centered and non-central
Data in non-Euclidean spaces are commonly encountered in many fields of Science and Engineering. For instance, in Robotics, attitude sensors capture orientation, which is an element of a Lie group. In the recent past, several researchers have reported methods that take into account the geometry of Lie groups in designing parameter estimation algorithms in nonlinear spaces. Maximum likelihood estimators (MLE) are quite commonly used for such tasks, and it is well known in the field of statistics that Stein's shrinkage estimators dominate the MLE in a mean-squared sense assuming the observations are from a normal population. In this paper, we present a novel shrinkage estimator for data residing in Lie groups, specifically, abelian or compact Lie groups. The key theoretical results presented in this paper are: (i) Stein's Lemma and its proof for Lie groups and, (ii) proof of dominance of the proposed shrinkage estimator over the MLE for abelian and compact Lie groups. We present examples of simulation studies of the dominance of the proposed shrinkage estimator and an application of shrinkage estimation to multiple-robot localization.