
A fast algorithm for maximum likelihood estimation of mixture proportions using sequential quadratic programming

Published by Peter Carbonetto
Publication date: 2018
Research field: Mathematical statistics
Language: English





Maximum likelihood estimation of mixture proportions has a long history, and continues to play an important role in modern statistics, including in the development of nonparametric empirical Bayes methods. The problem has traditionally been solved using the expectation maximization (EM) algorithm, but recent work by Koenker & Mizera shows that modern convex optimization techniques -- in particular, interior point methods -- are substantially faster and more accurate than EM. Here, we develop a new solution based on sequential quadratic programming (SQP). It is substantially faster than the interior point method, and just as accurate. Our approach combines several ideas: first, it solves a reformulation of the original problem; second, it uses an SQP approach to make the best use of the expensive gradient and Hessian computations; third, the SQP iterations are implemented using an active set method to exploit the sparse nature of the quadratic subproblems; fourth, it uses accurate low-rank approximations for more efficient gradient and Hessian computations. We illustrate the benefits of our approach in experiments on synthetic data sets as well as a large genetic association data set. On large data sets (n = 1,000,000 observations, m = 1,000 mixture components), our implementation achieves at least a 100-fold reduction in runtime compared with a state-of-the-art interior point solver. Our methods are implemented in Julia, and in an R package available on CRAN (see https://CRAN.R-project.org/package=mixsqp).
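For orientation, the optimization problem is: given an n × m matrix L with entries L[i, k] = p(observation i | component k), maximize f(x) = Σ_i log((Lx)_i) over the probability simplex. The sketch below is a minimal Python/NumPy illustration of this objective together with the classical EM baseline the abstract mentions; it is not the mix-SQP algorithm itself (that lives in the paper's Julia code and the mixsqp R package), and the synthetic data and function names here are our own.

```python
import numpy as np

def mixture_loglik(L, x):
    """Log-likelihood of mixture proportions x for an n-by-m matrix L
    of component likelihoods, L[i, k] = p(observation i | component k)."""
    return np.sum(np.log(L @ x))

def em_mix_props(L, num_iters=1000):
    """Classical EM for maximum likelihood estimation of mixture
    proportions -- the baseline that mix-SQP is compared against."""
    n, m = L.shape
    x = np.full(m, 1.0 / m)          # start at uniform proportions
    for _ in range(num_iters):
        p = L @ x                    # mixture density at each observation
        x *= (L.T @ (1.0 / p)) / n   # multiplicative EM update; stays on the simplex
    return x

# Small synthetic example: two Gaussian components with known means.
rng = np.random.default_rng(1)
z = rng.normal(size=500) + np.where(rng.random(500) < 0.3, 2.0, 0.0)
grid = np.array([0.0, 2.0])                          # candidate component means
L = np.exp(-0.5 * (z[:, None] - grid[None, :])**2)   # Gaussian likelihoods (common constants cancel)
print(em_mix_props(L))                               # roughly [0.7, 0.3]
```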




Read also

Mixture models are regularly used in density estimation applications, but the problem of estimating the mixing distribution remains a challenge. Nonparametric maximum likelihood produces estimates of the mixing distribution that are discrete, and these may be hard to interpret when the true mixing distribution is believed to have a smooth density. In this paper, we investigate an algorithm that produces a sequence of smooth estimates that has been conjectured to converge to the nonparametric maximum likelihood estimator. Here we give a rigorous proof of this conjecture, and propose a new data-driven stopping rule that produces smooth near-maximum likelihood estimates of the mixing density; simulations demonstrate the strong empirical performance of this estimator.
Cheng Wang, Binyan Jiang (2018)
The estimation of high dimensional precision matrices has been a central topic in statistical learning. However, as the number of parameters scales quadratically with the dimension $p$, many state-of-the-art methods do not scale well to solve problems with a very large $p$. In this paper, we propose a very efficient algorithm for precision matrix estimation via penalized quadratic loss functions. Under the high-dimension, low-sample-size setting, the computation complexity of our algorithm is linear in both the sample size and the number of parameters. Such a computation complexity is in some sense optimal, as it is the same as the complexity needed for computing the sample covariance matrix. Numerical studies show that our algorithm is much more efficient than other state-of-the-art methods when the dimension $p$ is very large.
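The abstract does not reproduce the specific loss, so as a hedged illustration the sketch below minimizes one standard penalized quadratic loss for the precision matrix, (1/2)‖SΩ − I‖_F² + λ‖Ω‖₁, by proximal gradient descent (ISTA). This is a generic stand-in for quadratic-loss precision estimation, not Wang and Jiang's algorithm, and it does not attain their stated linear complexity.

```python
import numpy as np

def soft_threshold(A, t):
    """Elementwise soft-thresholding, the proximal operator of t*||.||_1."""
    return np.sign(A) * np.maximum(np.abs(A) - t, 0.0)

def precision_quadratic_ista(S, lam, num_iters=500):
    """Minimize 0.5*||S @ Omega - I||_F^2 + lam*||Omega||_1 by proximal
    gradient descent (ISTA); S is the sample covariance matrix."""
    p = S.shape[0]
    step = 1.0 / np.linalg.norm(S, 2)**2     # 1 / Lipschitz constant of the gradient
    Omega = np.eye(p)
    for _ in range(num_iters):
        grad = S @ (S @ Omega - np.eye(p))   # gradient of the quadratic loss
        Omega = soft_threshold(Omega - step * grad, step * lam)
    return Omega

# Toy check: recover a sparse tridiagonal precision matrix.
rng = np.random.default_rng(0)
p, n = 20, 2000
Prec = np.eye(p) + 0.4 * (np.eye(p, k=1) + np.eye(p, k=-1))
X = rng.multivariate_normal(np.zeros(p), np.linalg.inv(Prec), size=n)
S = np.cov(X, rowvar=False)
Omega_hat = precision_quadratic_ista(S, lam=0.05)
```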
Jing Yu, Mihai Anitescu (2019)
We propose an optimization algorithm to compute the optimal sensor locations in experimental design in the formulation of Bayesian inverse problems, where the parameter-to-observable mapping is described through an integral equation and its discretization results in a continuously indexed matrix whose size depends on the mesh size n. By approximating the gradient and Hessian of the objective design criterion with Chebyshev interpolation, we solve a sequence of quadratic programs and achieve complexity $\mathcal{O}(n \log^2(n))$. An error analysis guarantees that the integrality gap shrinks to zero as $n \to \infty$, and we apply the algorithm to a two-dimensional advection-diffusion equation to determine the optimal sensing directions of LIDARs for data collection.
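The computational device here is replacing expensive objective, gradient, and Hessian evaluations with a cheap Chebyshev-interpolated surrogate. The sketch below illustrates only that device in one dimension with NumPy's Chebyshev utilities; the function f is a made-up stand-in, not the paper's design criterion.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Stand-in for an expensive function whose derivative is needed many times.
f = lambda t: np.exp(-t) * np.sin(5 * t)

# Sample f once at Chebyshev-Lobatto points on [0, 1], fit a degree-30 surrogate.
deg = 30
nodes = 0.5 * (np.cos(np.pi * np.arange(deg + 1) / deg) + 1.0)
coef = C.chebfit(2 * nodes - 1, f(nodes), deg)   # fit on the reference interval [-1, 1]
dcoef = C.chebder(coef) * 2                      # chain rule for the map t -> 2t - 1

# Cheap surrogate evaluations of f and f' at arbitrary points.
t = np.linspace(0, 1, 7)
approx_f  = C.chebval(2 * t - 1, coef)
approx_df = C.chebval(2 * t - 1, dcoef)
print(np.max(np.abs(approx_f - f(t))))           # near machine precision for smooth f
```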
In order to learn the complex features of large spatio-temporal data, models with large parameter sets are often required. However, estimating a large number of parameters is often infeasible due to the computational and memory costs of maximum likelihood estimation (MLE). We introduce the class of marginally parametrized (MP) models, where inference can be performed efficiently with a sequence of marginal (estimated) likelihood functions via stepwise maximum likelihood estimation (SMLE). We provide the conditions under which the stepwise estimators are consistent, and we prove that this class of models includes the diagonal vector autoregressive moving average model. We demonstrate that the parameters of this model can be obtained at least three orders of magnitude faster using SMLE compared to MLE, with only a small loss in statistical efficiency. We apply an MP model to a spatio-temporal global climate data set (in order to learn complex features of interest to climate scientists) consisting of over five million data points, and we demonstrate how estimation can be performed in less than an hour on a laptop.
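The abstract's concrete instance is the diagonal vector ARMA model, whose likelihood splits across coordinate series, so each series' parameters can be estimated from its own marginal likelihood. The sketch below illustrates that stepwise idea for a simpler diagonal VAR(1) with independent innovations (our simplification; conditional least squares stands in for each marginal MLE step).

```python
import numpy as np

def fit_ar1_marginal(y):
    """Conditional least-squares fit of a single AR(1) series,
    y[t] = phi * y[t-1] + eps; stands in for one marginal MLE step."""
    return np.dot(y[:-1], y[1:]) / np.dot(y[:-1], y[:-1])

def smle_diagonal_var1(Y):
    """Stepwise estimation for a diagonal VAR(1): because the coefficient
    matrix is diagonal, each coordinate's parameter is identified by its
    own marginal (per-series) likelihood, so the m-dimensional problem
    splits into m cheap one-dimensional fits."""
    return np.array([fit_ar1_marginal(Y[:, j]) for j in range(Y.shape[1])])

# Simulate a 3-dimensional diagonal VAR(1) and recover the coefficients.
rng = np.random.default_rng(7)
phi = np.array([0.8, 0.5, -0.3])
T = 5000
Y = np.zeros((T, 3))
for t in range(1, T):
    Y[t] = phi * Y[t - 1] + rng.normal(size=3)
print(smle_diagonal_var1(Y))   # approximately [0.8, 0.5, -0.3]
```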
Maximum simulated likelihood estimation of mixed multinomial logit (MMNL) or probit models requires evaluation of a multidimensional integral. Quasi-Monte Carlo (QMC) methods such as shuffled and scrambled Halton sequences and modified Latin hypercube sampling (MLHS) are workhorse methods for integral approximation. A few earlier studies explored the potential of sparse grid quadrature (SGQ), but this approximation suffers from negative weights. As an alternative to QMC and SGQ, we looked into the recently developed designed quadrature (DQ) method. DQ requires fewer nodes to reach the same level of accuracy as QMC and SGQ, is as easy to implement, ensures positivity of weights, and can be constructed on general polynomial spaces. We benchmarked DQ against QMC in a Monte Carlo study under different data generating processes with a varying number of random parameters (3, 5, and 10) and variance-covariance structures (diagonal and full). DQ significantly outperformed QMC in the diagonal variance-covariance scenario, and it also achieved a better model fit and recovered the true parameters with fewer nodes (i.e., lower computation time) in the full variance-covariance scenario. Finally, we evaluated the performance of DQ in a case study to understand preferences for mobility-on-demand services in New York City. In estimating MMNL with five random parameters, DQ achieved better fit and statistical significance of parameters with just 200 nodes as compared to 1000 QMC draws, making DQ around five times faster than QMC methods.
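For orientation, the integral being approximated is the mixed logit choice probability, the expectation of a logit probability over the random-coefficient distribution. The sketch below shows the QMC side using scrambled Halton draws from scipy.stats.qmc; the attributes, dimensions, and parameter values are illustrative, and DQ would replace the uniform draws with designed nodes and weights.

```python
import numpy as np
from scipy.stats import norm, qmc

def mixed_logit_prob(X, chosen, mu, sigma, n_draws=200, seed=0):
    """Simulated mixed logit probability of the chosen alternative:
    P = E_beta[softmax(X @ beta)[chosen]], beta ~ N(mu, diag(sigma^2)),
    approximated with scrambled Halton draws pushed through the
    normal inverse CDF."""
    halton = qmc.Halton(d=len(mu), scramble=True, seed=seed)
    u = np.clip(halton.random(n_draws), 1e-12, 1 - 1e-12)  # keep ppf finite
    beta = mu + sigma * norm.ppf(u)        # (n_draws, d) coefficient draws
    V = beta @ X.T                         # utilities, (n_draws, n_alternatives)
    V -= V.max(axis=1, keepdims=True)      # numerically stable softmax
    P = np.exp(V)
    P /= P.sum(axis=1, keepdims=True)
    return P[:, chosen].mean()             # average logit probability over draws

# Three alternatives described by two attributes, two random coefficients.
X = np.array([[1.0, 0.5],
              [0.2, 1.0],
              [0.0, 0.0]])
print(mixed_logit_prob(X, chosen=0,
                       mu=np.array([0.5, -0.2]),
                       sigma=np.array([0.4, 0.3])))
```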