
Optimal estimation of Gaussian mixtures via denoised method of moments

Posted by: Pengkun Yang
Published: 2018
Research field: Mathematical Statistics
Paper language: English





The Method of Moments [Pea94] is one of the most widely used methods in statistics for parameter estimation, which works by solving the system of equations that matches the population moments to the estimated moments. However, in practice, and especially for the important case of mixture models, one frequently needs to contend with the difficulties of non-existence or non-uniqueness of statistically meaningful solutions, as well as the high computational cost of solving large polynomial systems. Moreover, theoretical analyses of the method of moments are mainly confined to asymptotic-normality-style results established under strong assumptions. This paper considers estimating a $k$-component Gaussian location mixture with a common (possibly unknown) variance parameter. To overcome the aforementioned theoretical and algorithmic hurdles, a crucial step is to denoise the moment estimates by projecting them onto the truncated moment space (via semidefinite programming) before solving the method of moments equations. Not only does this regularization ensure the existence and uniqueness of solutions, it also yields fast solvers by means of Gauss quadrature. Furthermore, by proving new moment comparison theorems in the Wasserstein distance via polynomial interpolation and majorization techniques, we establish the statistical guarantees and adaptive optimality of the proposed procedure, as well as an oracle inequality in misspecified models. These results can also be viewed as providing provable algorithms for the Generalized Method of Moments [Han82], which involves non-convex optimization and lacks theoretical guarantees.
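For concreteness, here is a minimal Python sketch of the two-step procedure, assuming cvxpy for the semidefinite projection and scipy for the quadrature step. The function names, the support bound `M`, and the input `m_hat` (estimates of the first $2k-1$ moments of the mixing distribution; for Gaussian mixtures such estimates come from Hermite-polynomial corrections of the sample moments) are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np
import cvxpy as cp
from scipy.linalg import eig

def hankel_expr(v, k):
    """k x k Hankel matrix [v[i+j]] built from a cvxpy vector expression."""
    return cp.bmat([[cp.reshape(v[i + j], (1, 1)) for j in range(k)]
                    for i in range(k)])

def denoise(m_hat, M):
    """Project noisy estimates of (m_1, ..., m_{2k-1}) onto the moment space
    of distributions supported on [-M, M], using the classical Hankel-matrix
    characterization of the truncated moment problem (an SDP sketch)."""
    k = (len(m_hat) + 1) // 2
    m = cp.Variable(len(m_hat))
    full = cp.hstack([np.ones(1), m])            # prepend m_0 = 1
    H = hankel_expr(full, k)                     # [m_{i+j}],   0 <= i,j < k
    Hs = hankel_expr(full[1:], k)                # [m_{i+j+1}], 0 <= i,j < k
    prob = cp.Problem(cp.Minimize(cp.sum_squares(m - m_hat)),
                      [M * H - Hs >> 0, Hs + M * H >> 0])
    prob.solve()
    return np.concatenate([[1.0], m.value])

def quadrature(m):
    """Gauss quadrature: the unique k-atom distribution whose moments are
    m = (m_0, ..., m_{2k-1}), via generalized eigenvalues of the shifted
    and unshifted Hankel matrices."""
    k = len(m) // 2
    H = np.array([[m[i + j] for j in range(k)] for i in range(k)])
    Hs = np.array([[m[i + j + 1] for j in range(k)] for i in range(k)])
    atoms = np.real(eig(Hs, H, right=False))     # nodes x_1, ..., x_k
    V = np.vander(atoms, k, increasing=True).T   # V[i, j] = x_j ** i
    weights = np.linalg.solve(V, m[:k])          # match the first k moments
    order = np.argsort(atoms)
    return atoms[order], weights[order]
```

Calling `quadrature(denoise(m_hat, M))` returns the atoms and weights of the estimated mixing distribution. If the projected moments land on the boundary of the moment space (a distribution with fewer than $k$ atoms), the Hankel matrix becomes singular and a rank-reduced variant of the quadrature step is needed.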


Read also

This paper studies the optimal rate of estimation in a finite Gaussian location mixture model in high dimensions without separation conditions. We assume that the number of components $k$ is bounded and that the centers lie in a ball of bounded radius, while allowing the dimension $d$ to be as large as the sample size $n$. Extending the one-dimensional result of Heinrich and Kahn [HK2015], we show that the minimax rate of estimating the mixing distribution in Wasserstein distance is $\Theta((d/n)^{1/4} + n^{-1/(4k-2)})$, achieved by an estimator computable in time $O(nd^2 + n^{5/4})$. Furthermore, we show that the mixture density can be estimated at the optimal parametric rate $\Theta(\sqrt{d/n})$ in Hellinger distance and provide a computationally efficient algorithm to achieve this rate in the special case of $k=2$. Both the theoretical and methodological development rely on a careful application of the method of moments. Central to our results is the observation that the information geometry of finite Gaussian mixtures is characterized by the moment tensors of the mixing distribution, whose low-rank structure can be exploited to obtain a sharp local entropy bound.
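The loss here (as in the paper above) is the Wasserstein distance between mixing distributions; in one dimension, $W_1$ between discrete distributions is directly computable, e.g. with scipy (the atoms and weights below are made up purely for illustration):

```python
from scipy.stats import wasserstein_distance

# Two hypothetical 3-atom mixing distributions (atoms, weights).
atoms_true, w_true = [-1.0, 0.0, 1.0], [0.3, 0.4, 0.3]
atoms_est, w_est = [-0.9, 0.1, 1.2], [0.35, 0.35, 0.3]

# W1 on the real line = L1 distance between the quantile functions.
print(wasserstein_distance(atoms_true, atoms_est, w_true, w_est))
```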
For two vast families of mixture distributions and a given prior, we provide unified representations of posterior and predictive distributions. Model applications presented include bivariate mixtures of Gamma distributions labelled as Kibble-type, non-central Chi-square and F distributions, the distribution of $R^2$ in multiple regression, variance mixtures of normal distributions, and mixtures of location-scale exponential distributions including the multivariate Lomax distribution. An emphasis is also placed on analytical representations and the relationships with a host of existing distributions and several hypergeometric functions of one or two variables.
We study a problem of estimation of smooth functionals of parameter $\theta$ of the Gaussian shift model $$X = \theta + \xi, \quad \theta \in E,$$ where $E$ is a separable Banach space and $X$ is an observation of unknown vector $\theta$ in Gaussian noise $\xi$ with zero mean and known covariance operator $\Sigma$. In particular, we develop estimators $T(X)$ of $f(\theta)$ for functionals $f: E \mapsto \mathbb{R}$ of Hölder smoothness $s > 0$ such that $$\sup_{\|\theta\| \leq 1} \mathbb{E}_{\theta}(T(X) - f(\theta))^2 \lesssim \Bigl(\|\Sigma\| \vee (\mathbb{E}\|\xi\|^2)^s\Bigr) \wedge 1,$$ where $\|\Sigma\|$ is the operator norm of $\Sigma$, and show that this mean squared error rate is minimax optimal at least in the case of the standard Gaussian shift model ($E = \mathbb{R}^d$ equipped with the canonical Euclidean norm, $\xi = \sigma Z$, $Z \sim \mathcal{N}(0, I_d)$). Moreover, we determine a sharp threshold on the smoothness $s$ of functional $f$ such that, for all $s$ above the threshold, $f(\theta)$ can be estimated efficiently with a mean squared error rate of the order $\|\Sigma\|$ in a small noise setting (that is, when $\mathbb{E}\|\xi\|^2$ is small). The construction of efficient estimators is crucially based on a bootstrap chain method of bias reduction. The results could be applied to a variety of special high-dimensional and infinite-dimensional Gaussian models (for vector, matrix and functional data).
Baptiste Broto, 2020
In this paper, we address the estimation of the sensitivity indices called Shapley effects. These sensitivity indices make it possible to handle dependent input variables. The Shapley effects are generally difficult to estimate, but they are easily computable in the Gaussian linear framework. The aim of this work is to use the values of the Shapley effects in an approximated Gaussian linear framework as estimators of the true Shapley effects corresponding to a non-linear model. First, we assume that the input variables are Gaussian with small variances. We provide rates of convergence of the estimated Shapley effects to the true Shapley effects. Then, we focus on the case where the inputs are given by a non-Gaussian empirical mean. We prove that, under some mild assumptions, when the number of terms in the empirical mean increases, the difference between the true Shapley effects and the estimated Shapley effects given by the Gaussian linear approximation converges to 0. Our theoretical results are supported by numerical studies, showing that the Gaussian linear approximation is accurate and decreases the computational time significantly.
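For intuition on why the Gaussian linear framework is tractable: with $Y = \beta^\top X$ and $X \sim N(\mu, \Sigma)$, the explained variance $\mathrm{Var}(\mathbb{E}[Y \mid X_S])$ of any subset $S$ of inputs has a closed form via a Schur complement, so the Shapley effects can be computed exactly. A brute-force sketch follows (exponential in the dimension and purely illustrative; the function names are ours, not the paper's):

```python
import numpy as np
from itertools import combinations
from math import factorial

def subset_value(beta, Sigma, S):
    """v(S) = Var(E[Y | X_S]) for Y = beta^T X with X ~ N(mu, Sigma)."""
    d = len(beta)
    S = list(S)
    T = [i for i in range(d) if i not in S]   # complement of S
    total = beta @ Sigma @ beta               # Var(Y)
    if not S:
        return 0.0
    if not T:
        return total
    # Conditional covariance of X_T given X_S (Schur complement).
    cond = Sigma[np.ix_(T, T)] - Sigma[np.ix_(T, S)] @ np.linalg.solve(
        Sigma[np.ix_(S, S)], Sigma[np.ix_(S, T)])
    return total - beta[T] @ cond @ beta[T]

def shapley_effects(beta, Sigma):
    """Shapley effects of each input, normalized by Var(Y)."""
    d = len(beta)
    total = beta @ Sigma @ beta
    sh = np.zeros(d)
    for i in range(d):
        others = [j for j in range(d) if j != i]
        for r in range(d):
            for S in combinations(others, r):
                w = factorial(r) * factorial(d - r - 1) / factorial(d)
                sh[i] += w * (subset_value(beta, Sigma, list(S) + [i])
                              - subset_value(beta, Sigma, S))
    return sh / total

# Example with correlated Gaussian inputs; the normalized effects sum to 1.
Sigma = np.array([[1.0, 0.5, 0.0],
                  [0.5, 1.0, 0.0],
                  [0.0, 0.0, 1.0]])
beta = np.array([1.0, 1.0, 1.0])
print(shapley_effects(beta, Sigma))
```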
We study minimax estimation of two-dimensional totally positive distributions. Such distributions pertain to pairs of strongly positively dependent random variables and appear frequently in statistics and probability. In particular, for distributions with $\beta$-Hölder smooth densities, where $\beta \in (0, 2)$, we observe polynomially faster minimax rates of estimation when, additionally, the total positivity condition is imposed. Moreover, we demonstrate fast algorithms to compute the proposed estimators and corroborate the theoretical rates of estimation by simulation studies.