
An Extended Empirical Saddlepoint Approximation for Intractable Likelihoods

Published by Matteo Fasiolo
Publication date: 2016
Research field: mathematical statistics
Paper language: English





The challenges posed by complex stochastic models used in computational ecology, biology and genetics have stimulated the development of approximate approaches to statistical inference. Here we focus on Synthetic Likelihood (SL), a procedure that reduces the observed and simulated data to a set of summary statistics, and quantifies the discrepancy between them through a synthetic likelihood function. SL requires little tuning, but it relies on the approximate normality of the summary statistics. We relax this assumption by proposing a novel, more flexible, density estimator: the Extended Empirical Saddlepoint approximation. In addition to proving the consistency of SL, under either the new or the Gaussian density estimator, we illustrate the method using two examples. One of these is a complex individual-based forest model for which SL offers one of the few practical possibilities for statistical inference. The examples show that the new density estimator is able to capture large departures from normality, while being scalable to high dimensions, and this in turn leads to more accurate parameter estimates, relative to the Gaussian alternative. The new density estimator is implemented by the esaddle R package, which can be found on the Comprehensive R Archive Network (CRAN).
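The core Gaussian SL construction described above can be sketched in a few lines: simulate summaries at a candidate parameter, fit a normal density to them, and evaluate the observed summary under it. The sketch below uses a single scalar summary and a hypothetical user-supplied `simulate` function; the paper works with multivariate summaries and, in its extension, replaces the Gaussian with the saddlepoint-based estimator.

```python
import math
import random
import statistics

def synthetic_loglik(theta, observed_summary, simulate, n_sim=200):
    """Gaussian synthetic log-likelihood for a scalar summary statistic.

    `simulate` is a hypothetical user-supplied function mapping a
    parameter value to one simulated summary statistic.
    """
    sims = [simulate(theta) for _ in range(n_sim)]
    mu = statistics.fmean(sims)
    sd = statistics.stdev(sims)
    # Log-density of the observed summary under N(mu, sd^2).
    z = (observed_summary - mu) / sd
    return -0.5 * z * z - math.log(sd) - 0.5 * math.log(2 * math.pi)

# Toy example: the "model" draws from N(theta, 1) and the summary is the
# draw itself, so the synthetic likelihood should peak near the observation.
random.seed(1)
sim = lambda th: random.gauss(th, 1.0)
obs = 0.3
print(synthetic_loglik(0.3, obs, sim) > synthetic_loglik(3.0, obs, sim))
```

In practice the synthetic log-likelihood is plugged into an MCMC or optimisation routine over `theta`; the Gaussian fit is redone from fresh simulations at every candidate value.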




Read also

A large number of statistical models are doubly-intractable: the likelihood normalising term, which is a function of the model parameters, is intractable, as well as the marginal likelihood (model evidence). This means that standard inference techniques to sample from the posterior, such as Markov chain Monte Carlo (MCMC), cannot be used. Examples include, but are not confined to, massive Gaussian Markov random fields, autologistic models and Exponential random graph models. A number of approximate schemes based on MCMC techniques, Approximate Bayesian computation (ABC) or analytic approximations to the posterior have been suggested, and these are reviewed here. Exact MCMC schemes, which can be applied to a subset of doubly-intractable distributions, have also been developed and are described in this paper. As yet, no general method exists which can be applied to all classes of models with doubly-intractable posteriors. In addition, taking inspiration from the Physics literature, we study an alternative method based on representing the intractable likelihood as an infinite series. Unbiased estimates of the likelihood can then be obtained by finite time stochastic truncation of the series via Russian Roulette sampling, although the estimates are not necessarily positive. Results from the Quantum Chromodynamics literature are exploited to allow the use of possibly negative estimates in a pseudo-marginal MCMC scheme such that expectations with respect to the posterior distribution are preserved. The methodology is reviewed on well-known examples such as the parameters in Ising models, the posterior for Fisher-Bingham distributions on the $d$-Sphere and a large-scale Gaussian Markov Random Field model describing the Ozone Column data. This leads to a critical assessment of the strengths and weaknesses of the methodology with pointers to ongoing research.
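The Russian Roulette idea above can be illustrated on a toy geometric series: truncate the infinite sum at a random point, reweighting each surviving term by its inverse survival probability so the expectation is exact. The fixed continuation probability `q` below is an illustrative schedule, not the one used in the paper.

```python
import random

def roulette_estimate(term, q=0.9, rng=random):
    """Unbiased estimate of sum_{k>=0} term(k) by Russian Roulette truncation.

    After adding term k, continue to term k+1 with probability q; the
    term reached after surviving k trials is reweighted by 1 / q**k,
    which makes the expectation equal to the full infinite sum.
    """
    total, k, weight = 0.0, 0, 1.0
    while True:
        total += term(k) / weight
        if rng.random() >= q:       # stop here with probability 1 - q
            return total
        weight *= q
        k += 1

# Toy series: sum_{k>=0} 0.5**k = 2.  Averaging many roulette runs
# recovers it without ever summing infinitely many terms.
random.seed(0)
est = sum(roulette_estimate(lambda k: 0.5**k) for _ in range(5000)) / 5000
print(round(est, 2))  # close to 2
```

As the abstract notes, such estimates need not be positive for general series, which is what motivates the sign-handling machinery borrowed from the Quantum Chromodynamics literature.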
Bokgyeong Kang, John Hughes, 2021
Models with intractable normalizing functions have numerous applications ranging from network models to image analysis to spatial point processes. Because the normalizing constants are functions of the parameters of interest, standard Markov chain Monte Carlo cannot be used for Bayesian inference for these models. A number of algorithms have been developed for such models. Some have the posterior distribution as the asymptotic distribution. Other asymptotically inexact algorithms do not possess this property. There is limited guidance for evaluating approximations based on these algorithms, and hence it is very hard to tune them. We propose two new diagnostics that address these problems for intractable normalizing function models. Our first diagnostic, inspired by the second Bartlett identity, applies in principle to any asymptotically exact or inexact algorithm. We develop an approximate version of this new diagnostic that is applicable to intractable normalizing function problems. Our second diagnostic is a Monte Carlo approximation to a kernel Stein discrepancy-based diagnostic introduced by Gorham and Mackey (2017). We provide theoretical justification for our methods. We apply our diagnostics to several algorithms in the context of challenging simulated and real data examples, including an Ising model, an exponential random graph model, and a Markov point process.
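The second Bartlett identity behind the first diagnostic states that, under a correctly specified model, the expected squared score cancels the expected second derivative of the log-likelihood. A toy Monte Carlo check for a N(theta, 1) model illustrates the identity itself; it is not the paper's approximate diagnostic.

```python
import random

# For N(theta, 1): score = d/dtheta log p(x|theta) = x - theta, and the
# second derivative of the log-density is the constant -1.  Under the
# true theta, E[score^2] + E[second derivative] = 0.
random.seed(3)
theta = 1.5
xs = [random.gauss(theta, 1.0) for _ in range(50000)]
score_sq = sum((x - theta) ** 2 for x in xs) / len(xs)
bartlett_gap = score_sq + (-1.0)   # near zero for exact samples
print(abs(bartlett_gap) < 0.05)
```

A sampler that targets the wrong distribution (for example, one run with a misspecified `theta`) produces a visibly non-zero gap, which is the intuition the diagnostic builds on.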
This article surveys computational methods for posterior inference with intractable likelihoods, that is where the likelihood function is unavailable in closed form, or where evaluation of the likelihood is infeasible. We review recent developments i n pseudo-marginal methods, approximate Bayesian computation (ABC), the exchange algorithm, thermodynamic integration, and composite likelihood, paying particular attention to advancements in scalability for large datasets. We also mention R and MATLAB source code for implementations of these algorithms, where they are available.
Synthetic likelihood (SL) is a strategy for parameter inference when the likelihood function is analytically or computationally intractable. In SL, the likelihood function of the data is replaced by a multivariate Gaussian density over summary statistics of the data. SL requires simulation of many replicate datasets at every parameter value considered by a sampling algorithm, such as MCMC, making the method computationally intensive. We propose two strategies to alleviate the computational burden imposed by SL algorithms. We first introduce a novel MCMC algorithm for SL where the proposal distribution is sequentially tuned and is also conditioned on the data, so that it rapidly guides the proposed parameters towards high posterior probability regions. Second, we exploit strategies borrowed from the correlated pseudo-marginal MCMC literature to improve the chain's mixing in a SL framework. Our methods enable inference for challenging case studies when the chain is initialised in low posterior probability regions of the parameter space, where standard samplers fail. Our guided sampler can also potentially be used with MCMC samplers for approximate Bayesian computation (ABC). Our goal is to provide ways to make the best out of each expensive MCMC iteration, which will broaden the scope of likelihood-free inference for models with costly simulators. To illustrate the advantages stemming from our framework we consider four benchmark examples, including estimation of parameters for a cosmological model and a stochastic model with highly non-Gaussian summary statistics.
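The correlated pseudo-marginal trick mentioned above amounts to only partially refreshing the simulator's underlying random numbers between MCMC iterations, so that consecutive synthetic-likelihood estimates share most of their Monte Carlo noise. A minimal sketch, assuming the simulator consumes a vector of standard-normal variates:

```python
import math
import random

def crank_nicolson_refresh(u, rho=0.99, rng=random):
    """Partially refresh a vector of N(0,1) auxiliary draws.

    Each variate keeps correlation `rho` with its previous value, so the
    simulated summary statistics -- and hence the synthetic-likelihood
    estimate -- change only slightly between consecutive MCMC steps.
    """
    s = math.sqrt(1.0 - rho * rho)
    return [rho * ui + s * rng.gauss(0.0, 1.0) for ui in u]

random.seed(2)
u = [random.gauss(0.0, 1.0) for _ in range(20000)]
v = crank_nicolson_refresh(u)
# The refreshed draws remain marginally N(0,1) while staying strongly
# correlated with the old ones: the empirical cross-moment is near rho.
corr = sum(a * b for a, b in zip(u, v)) / len(u)
print(round(corr, 2))
```

Because the noise in successive likelihood-ratio estimates largely cancels, the acceptance decision is driven mostly by the parameter move rather than by estimator variance, which is what improves mixing.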
Markov chain Monte Carlo methods for intractable likelihoods, such as the exchange algorithm, require simulations of the sufficient statistics at every iteration of the Markov chain, which often result in expensive computations. Surrogate models for the likelihood function have been developed to accelerate inference algorithms in this context. However, these surrogate models tend to be relatively inflexible, and often provide a poor approximation to the true likelihood function. In this article, we propose the use of a warped, gradient-enhanced, Gaussian process surrogate model for the likelihood function, which jointly models the sample means and variances of the sufficient statistics, and uses warping functions to capture covariance nonstationarity in the input parameter space. We show that both the consideration of nonstationarity and the inclusion of gradient information can be leveraged to obtain a surrogate model that outperforms the conventional stationary Gaussian process surrogate model when making inference, particularly in regions where the likelihood function exhibits a phase transition. We also show that the proposed surrogate model can be used to improve the effective sample size per unit time when embedded in exact inferential algorithms. The utility of our approach in speeding up inferential algorithms is demonstrated on simulated and real-world data.