Maximum simulated likelihood estimation of mixed multinomial logit (MMNL) or probit models requires evaluation of a multidimensional integral. Quasi-Monte Carlo (QMC) methods such as shuffled and scrambled Halton sequences and modified Latin hypercube sampling (MLHS) are workhorse methods for integral approximation. A few earlier studies explored the potential of sparse grid quadrature (SGQ), but this approximation suffers from negative weights. As an alternative to QMC and SGQ, we looked into the recently developed designed quadrature (DQ) method. DQ requires fewer nodes to reach the same level of accuracy as QMC and SGQ, is as easy to implement, ensures positivity of the weights, and can be constructed on general polynomial spaces. We benchmarked DQ against QMC in a Monte Carlo study under different data generating processes with a varying number of random parameters (3, 5, and 10) and variance-covariance structures (diagonal and full). DQ significantly outperformed QMC in the diagonal variance-covariance scenario, and in the full variance-covariance scenario it also achieved a better model fit and recovered the true parameters with fewer nodes (i.e., relatively lower computation time). Finally, we evaluated the performance of DQ in a case study of preferences for mobility-on-demand services in New York City. In estimating an MMNL model with five random parameters, DQ achieved better fit and statistical significance of the parameters with just 200 nodes, as compared to 1000 QMC draws, making DQ around five times faster than QMC methods.
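A minimal sketch of where any such node/weight rule enters the computation, approximating one mixed logit choice probability as a weighted sum over nodes. The DQ construction itself is not shown; the nodes, weights, and parameter values below are illustrative placeholders (here an equal-weight Monte Carlo rule, which a DQ rule would replace with far fewer nodes and guaranteed-positive weights):

    import numpy as np

    def logit_prob(beta, X):
        """Multinomial logit choice probabilities for one decision maker.
        X: (n_alternatives, n_attributes), beta: (n_attributes,)."""
        u = X @ beta
        e = np.exp(u - u.max())
        return e / e.sum()

    def mixed_logit_prob(X, alt, mu, L, nodes, weights):
        """Approximate P(choice = alt) = sum_q w_q * logit(mu + L @ z_q)."""
        p = 0.0
        for z, w in zip(nodes, weights):
            beta = mu + L @ z              # map standard-normal node to beta space
            p += w * logit_prob(beta, X)[alt]
        return p

    rng = np.random.default_rng(0)
    X = rng.normal(size=(4, 3))            # 4 alternatives, 3 attributes
    mu = np.array([0.5, -1.0, 0.2])        # mean of the random coefficients
    L = np.diag([0.3, 0.3, 0.3])           # Cholesky factor of the covariance

    # placeholder rule: 1000 Monte Carlo nodes with equal weights
    nodes = rng.normal(size=(1000, 3))
    weights = np.full(1000, 1.0 / 1000)
    print(mixed_logit_prob(X, alt=0, mu=mu, L=L, nodes=nodes, weights=weights))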
A maximum likelihood methodology for a general class of models is presented, using an approximate Bayesian computation (ABC) approach. The typical targets of ABC methods are models with intractable likelihoods, and we combine an ABC-MCMC sampler with so-called data cloning for maximum likelihood estimation. The accuracy of ABC methods relies on using a small threshold value when comparing simulations from the model with observed data. The proposed methodology shows how to use larger threshold values while the number of data clones is increased to ease convergence towards an approximate maximum likelihood estimate. We show how to exploit the methodology to reduce the number of iterations of a standard ABC-MCMC algorithm, and therefore the computational effort, while still obtaining reasonable point estimates. Simulation studies show the good performance of our approach on models with intractable likelihoods, such as g-and-k distributions, stochastic differential equations and state-space models.
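A toy sketch of the data-cloning idea inside an ABC-MCMC sampler, for a normal model with a flat prior and a symmetric proposal. Raising the ABC likelihood to the power K is implemented by simulating K independent pseudo-datasets and requiring every one of them to match the observed summary within the threshold. The model, summary statistic, and tuning values are illustrative choices, not the paper's examples:

    import numpy as np

    rng = np.random.default_rng(1)
    y_obs = rng.normal(2.0, 1.0, size=50)
    s_obs = y_obs.mean()                     # summary statistic

    def abc_dc_mcmc(n_iter=5000, K=5, eps=0.5, step=0.3):
        theta = s_obs                        # start near the data to shorten burn-in
        chain = np.empty(n_iter)
        for t in range(n_iter):
            prop = theta + step * rng.normal()
            # K cloned pseudo-datasets, each compared with the observed summary
            sims = rng.normal(prop, 1.0, size=(K, 50)).mean(axis=1)
            # flat prior, symmetric proposal: accept iff every clone is within eps
            if np.all(np.abs(sims - s_obs) < eps):
                theta = prop
            chain[t] = theta
        return chain

    chain = abc_dc_mcmc()
    print(chain.mean())    # concentrates near the MLE as K grows and eps shrinks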
Mixture models are regularly used in density estimation applications, but estimating the mixing distribution remains a challenge. Nonparametric maximum likelihood produces estimates of the mixing distribution that are discrete, and these may be hard to interpret when the true mixing distribution is believed to have a smooth density. In this paper, we investigate an algorithm that produces a sequence of smooth estimates that has been conjectured to converge to the nonparametric maximum likelihood estimator. We give a rigorous proof of this conjecture and propose a new data-driven stopping rule that produces smooth near-maximum likelihood estimates of the mixing density; simulations demonstrate the strong empirical performance of this estimator.
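As a rough illustration of this family of algorithms, the sketch below runs a smooth fixed-point iteration for a normal location mixture, with the mixing density discretized on a grid. It conveys the flavor of smooth iterates of this general type, not the paper's exact scheme or its data-driven stopping rule:

    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(2)
    x = np.concatenate([rng.normal(-2, 1, 150), rng.normal(2, 1, 150)])  # data

    u = np.linspace(-6, 6, 200)               # grid for the mixing density
    du = u[1] - u[0]
    f = norm.pdf(u, 0, 3)                     # smooth initial guess
    K = norm.pdf(x[:, None], loc=u[None, :])  # component density k(x_i | u_j)

    for _ in range(100):
        m = (K * f).sum(axis=1) * du          # current marginal density at each x_i
        f = f * (K / m[:, None]).mean(axis=0) # fixed-point / self-consistency update
        f /= f.sum() * du                     # renormalize for numerical safety

    print(u[np.argmax(f)])                    # a mode of the smooth estimate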
In order to learn the complex features of large spatio-temporal data, models with large parameter sets are often required. However, estimating a large number of parameters is often infeasible due to the computational and memory costs of maximum likelihood estimation (MLE). We introduce the class of marginally parametrized (MP) models, in which inference can be performed efficiently through a sequence of marginal likelihood maximizations, a procedure we call stepwise maximum likelihood estimation (SMLE). We provide conditions under which the stepwise estimators are consistent, and we prove that this class of models includes the diagonal vector autoregressive moving average model. We demonstrate that the parameters of this model can be estimated at least three orders of magnitude faster using SMLE than using MLE, with only a small loss in statistical efficiency. We apply an MP model to a spatio-temporal global climate data set (in order to learn complex features of interest to climate scientists) consisting of over five million data points, and we demonstrate how estimation can be performed in less than an hour on a laptop.
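A toy sketch of the stepwise idea, assuming a model in which each coordinate's parameter appears only in its own marginal likelihood (here independent AR(1) series, for which each marginal Gaussian likelihood is maximized by per-series least squares). This illustrates the principle only, not the paper's diagonal VARMA implementation:

    import numpy as np

    rng = np.random.default_rng(3)
    d, T = 4, 500
    phi_true = np.array([0.2, 0.5, 0.7, 0.9])

    # simulate d independent AR(1) series
    y = np.zeros((d, T))
    for t in range(1, T):
        y[:, t] = phi_true * y[:, t - 1] + rng.normal(size=d)

    # SMLE: maximize each marginal likelihood separately; for AR(1) the
    # conditional Gaussian MLE reduces to one-series least squares
    phi_hat = np.array([
        (y[i, :-1] @ y[i, 1:]) / (y[i, :-1] @ y[i, :-1]) for i in range(d)
    ])
    print(phi_hat)   # close to phi_true; each fit touches only one series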
When classical simulated annealing is used to maximise a function $\psi$ defined on a subset of $\mathbb{R}^d$, the probability $P(\psi(\theta_n) \leq \psi_{\max} - \epsilon)$ tends to zero at a logarithmic rate as $n$ increases; here $\theta_n$ is the state in the $n$-th stage of the simulated annealing algorithm and $\psi_{\max}$ is the maximal value of $\psi$. We propose a modified scheme for which this probability is of order $n^{-1/3}\log n$, and hence vanishes at an algebraic rate. To obtain this faster rate, the exponentially decaying acceptance probability of classical simulated annealing is replaced by a more heavy-tailed function, and the system is cooled faster. We also show how the algorithm may be applied to functions that cannot be computed exactly but only approximated, and give an example of maximising the log-likelihood function for a state-space model.
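The following sketch conveys the flavor of the modification, with a placeholder heavy-tailed acceptance function and polynomial cooling schedule standing in for the specific choices and rates derived in the paper:

    import numpy as np

    rng = np.random.default_rng(4)

    def psi(theta):                        # toy objective to maximise
        return -np.sum((theta - 1.0) ** 2)

    theta = np.zeros(2)
    best = theta.copy()
    for n in range(1, 20000):
        T = 1.0 / n**0.5                   # faster (polynomial) cooling
        prop = theta + 0.5 * rng.normal(size=2)
        delta = psi(theta) - psi(prop)     # decrease in the objective, if any
        # heavy-tailed acceptance: decays polynomially in delta/T, not exponentially
        accept_prob = 1.0 if delta <= 0 else 1.0 / (1.0 + delta / T) ** 2
        if rng.uniform() < accept_prob:
            theta = prop
        if psi(theta) > psi(best):
            best = theta.copy()
    print(best)                            # near the maximiser (1, 1)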
We propose an efficient algorithm for approximate computation of the profile maximum likelihood (PML), a variant of maximum likelihood that maximizes the probability of observing a sufficient statistic rather than the empirical sample. The PML has appealing theoretical properties but is difficult to compute exactly. Inspired by observations gleaned from exactly solvable cases, we look for an approximate PML solution, which, intuitively, clumps comparably frequent symbols into one symbol. This amounts to lower-bounding a certain matrix permanent by summing over a subgroup of the symmetric group rather than the whole group during the computation. We extensively experiment with the approximate solution and find that the empirical performance of our approach is competitive with, and sometimes significantly better than, the state of the art for various estimation problems.
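A toy illustration of the clumping intuition only (the permanent-bounding computation is not shown): symbols whose empirical counts are comparable are merged into one group and share a common probability in the approximate solution. The tolerance below is an arbitrary choice:

    import numpy as np
    from collections import Counter

    sample = list("aaaabbbbccdde")
    counts = np.array(sorted(Counter(sample).values(), reverse=True))  # [4 4 2 2 1]
    n, tol = counts.sum(), 1

    groups, current = [], [counts[0]]
    for c in counts[1:]:
        if abs(c - current[-1]) <= tol:    # comparable frequency: same clump
            current.append(c)
        else:
            groups.append(current)
            current = [c]
    groups.append(current)

    # each clumped symbol gets its group's average frequency
    probs = [np.mean(g) / n for g in groups for _ in g]
    print(groups, probs)                   # probabilities sum to one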