Let $\pi_1$ and $\pi_2$ be two independent populations, where the population $\pi_i$ follows a bivariate normal distribution with unknown mean vector $\boldsymbol{\theta}^{(i)}$ and common known variance-covariance matrix $\Sigma$, $i=1,2$. The present paper focuses on estimating a characteristic $\theta_{\textnormal{y}}^S$ of the selected bivariate normal population under a LINEX loss function. A natural selection rule is used to select the best bivariate normal population. Some natural-type estimators and a Bayes estimator (using a conjugate prior) of $\theta_{\textnormal{y}}^S$ are presented. An admissible subclass of equivariant estimators, under the LINEX loss function, is obtained. Further, a sufficient condition for improving upon competing estimators of $\theta_{\textnormal{y}}^S$ is derived. Using this sufficient condition, several estimators improving upon the proposed natural estimators are obtained. A real data example is provided for illustration. Finally, a comparative simulation study of the competing estimators of $\theta_{\textnormal{y}}^S$ is carried out.
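The LINEX (linear-exponential) loss used in this abstract penalizes over- and underestimation asymmetrically. A minimal sketch of the standard scalar form (the parameter name `a` and this particular parameterization are illustrative, not taken from the paper):

```python
import math

def linex_loss(estimate, theta, a=1.0):
    """LINEX loss L(d, theta) = exp(a*(d - theta)) - a*(d - theta) - 1.

    The loss is zero when d == theta. For a > 0, overestimation is
    penalized exponentially while underestimation is penalized only
    roughly linearly; a < 0 reverses the asymmetry. As a -> 0 the loss
    behaves like scaled squared error.
    """
    delta = estimate - theta
    return math.exp(a * delta) - a * delta - 1.0
```

For example, with `a = 1` an overestimate by one unit costs `e - 2 ≈ 0.718`, while an underestimate by one unit costs only `1/e ≈ 0.368`.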
Consider estimating the $n \times p$ matrix of means of an $n \times p$ matrix of independent normally distributed observations with constant variance, where the performance of an estimator is judged using a $p \times p$ matrix quadratic error loss function. A matrix version of the James-Stein estimator, depending on a tuning constant, is proposed. It is shown to dominate the usual maximum likelihood estimator for some choices of the tuning constant when $n \geq 3$. This result also extends to other shrinkage estimators and settings.
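For intuition, the classical vector James-Stein estimator (the matrix version in the abstract is its analogue) can be checked by Monte Carlo; a hedged sketch, not the paper's construction:

```python
import numpy as np

def james_stein(x):
    """Classical James-Stein estimator for theta when X ~ N_p(theta, I_p):
        delta(x) = (1 - (p - 2) / ||x||^2) x.
    It shrinks x toward the origin and dominates the MLE delta(x) = x
    under squared-error loss whenever p >= 3.
    """
    p = x.shape[0]
    return (1.0 - (p - 2) / np.dot(x, x)) * x

# Monte Carlo risk comparison at theta = 0, where shrinkage helps most.
rng = np.random.default_rng(0)
p, trials = 5, 20000
x = rng.standard_normal((trials, p))            # X ~ N_5(0, I_5)
mle_risk = np.mean(np.sum(x**2, axis=1))        # risk of the MLE, equals p
shrink = 1.0 - (p - 2) / np.sum(x**2, axis=1, keepdims=True)
js_risk = np.mean(np.sum((shrink * x)**2, axis=1))  # about p - (p - 2) = 2 here
```

At `theta = 0` the simulated James-Stein risk is roughly 2 versus 5 for the MLE, illustrating the domination the abstract generalizes to the matrix setting.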
In the Gaussian linear regression model (with unknown mean and variance), we show that the standard confidence set for one or two regression coefficients is admissible in the sense of Joshi (1969). This solves a long-standing open problem in mathematical statistics and has important implications for the performance of modern inference procedures post-model-selection or post-shrinkage, particularly in situations where the number of parameters is larger than the sample size. As a technical contribution of independent interest, we introduce a new class of conjugate priors for the Gaussian location-scale model.
Bayesian methods are developed for the multivariate nonparametric regression problem where the domain is taken to be a compact Riemannian manifold. The underlying geometry of the manifold induces certain symmetries on the multivariate nonparametric regression function. The Bayesian approach then allows one to incorporate hierarchical Bayesian methods directly into the spectral structure, thus providing a symmetry-adaptive multivariate Bayesian function estimator. One can also diffuse away some prior information, so that the limiting case is a smoothing spline on the manifold. This, together with the result that the smoothing spline solution attains the minimax rate of convergence in the multivariate nonparametric regression problem, provides good frequentist properties for the Bayes estimators. An application to astronomy is included.
We investigate predictive density estimation under the $L^2$ Wasserstein loss for location families and location-scale families. We show that plug-in densities form a complete class and that the Bayesian predictive density is given by the plug-in density with the posterior mean of the location and scale parameters. We provide Bayesian predictive densities that dominate the best equivariant one in normal models.
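For univariate normals the $L^2$ Wasserstein distance has a well-known closed form, which helps explain why the posterior-mean plug-in is Bayes in this setting; a minimal sketch (illustrative, not the abstract's general argument):

```python
import math

def w2_gaussian(m1, s1, m2, s2):
    """L2-Wasserstein distance between N(m1, s1^2) and N(m2, s2^2):
        W_2^2 = (m1 - m2)^2 + (s1 - s2)^2.

    In a pure location family (s1 == s2), W_2^2 reduces to squared error
    in the location parameter, so the expected loss is minimized by
    plugging in the posterior mean of the location.
    """
    return math.sqrt((m1 - m2) ** 2 + (s1 - s2) ** 2)
```

The same reduction holds coordinate-wise for the scale, which is consistent with the abstract's plug-in density at the posterior means of the location and scale parameters.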
A sum of observations drawn by a simple random sampling design from a population of independent random variables is studied. A procedure for finding the general term of the Edgeworth asymptotic expansion is presented. A Lindeberg condition for asymptotic normality, a Berry-Esseen bound, Edgeworth asymptotic expansions under weakened conditions, and Cramér-type large deviation results are derived.
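As a point of reference for the expansions discussed above, the first-order Edgeworth correction in the classical i.i.d. setting (the abstract's simple-random-sampling setting generalizes this) can be sketched as:

```python
import math

def edgeworth_cdf_1term(x, skew, n):
    """One-term Edgeworth approximation to the CDF of the standardized
    sum of n i.i.d. variables with skewness `skew`:
        F_n(x) ~ Phi(x) - phi(x) * skew * (x^2 - 1) / (6 * sqrt(n)).
    With skew = 0 this reduces to the normal CDF Phi(x); the correction
    term captures the leading skewness-driven departure from normality.
    """
    phi = math.exp(-x * x / 2.0) / math.sqrt(2.0 * math.pi)
    Phi = 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    return Phi - phi * skew * (x * x - 1.0) / (6.0 * math.sqrt(n))
```

For positive skewness the approximation puts more mass below zero than the normal limit, matching the intuition that a right-skewed sum has its median below its mean.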