
Light and Widely Applicable MCMC: Approximate Bayesian Inference for Large Datasets

Published by: Florian Maire
Publication date: 2015
Research field: Mathematical statistics
Paper language: English





Light and Widely Applicable (LWA-) MCMC is a novel approximation of the Metropolis-Hastings kernel targeting a posterior distribution defined on a large number of observations. Inspired by Approximate Bayesian Computation, we design a Markov chain whose transition makes use of an unknown but fixed fraction of the available data, where the random choice of sub-sample is guided by the fidelity of this sub-sample to the observed data, as measured by summary (or sufficient) statistics. LWA-MCMC is a generic and flexible approach, as illustrated by the diverse set of examples which we explore. In each case LWA-MCMC yields excellent performance and in some cases a dramatic improvement compared to existing methodologies.
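To make the sub-sampling idea concrete, here is a minimal sketch, in generic notation, of one possible transition of this kind: a fixed-size sub-sample is screened by how closely its summary statistic matches that of the full data, and a Metropolis-Hastings move is then made on the corresponding sub-posterior. The helper names (log_prior, log_lik, summary, propose), the Gaussian screening kernel and the n/m likelihood rescaling are illustrative assumptions, not the authors' exact construction.

```python
# Hypothetical sketch of a summary-statistic-guided sub-sampling MH step.
import numpy as np

def subsample_mh_step(theta, data, m, eps, rng, log_prior, log_lik, summary, propose):
    n = len(data)
    # Draw a candidate sub-sample of fixed size m.
    idx = rng.choice(n, size=m, replace=False)
    sub = data[idx]
    # Screen the sub-sample by its fidelity to the full data, measured by a
    # summary statistic (Gaussian kernel with tolerance eps; an assumption).
    discrepancy = np.linalg.norm(summary(sub) - summary(data))
    if rng.random() > np.exp(-0.5 * (discrepancy / eps) ** 2):
        return theta  # sub-sample rejected: keep the current state (simplified)
    # Metropolis-Hastings move on the sub-posterior (likelihood rescaled by n/m,
    # symmetric proposal assumed).
    theta_prop = propose(theta, rng)
    log_alpha = (log_prior(theta_prop) - log_prior(theta)
                 + (n / m) * (log_lik(theta_prop, sub) - log_lik(theta, sub)))
    if np.log(rng.random()) < log_alpha:
        theta = theta_prop
    return theta
```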




Read also

This paper introduces a framework for speeding up Bayesian inference conducted in the presence of large datasets. We design a Markov chain whose transition kernel uses an unknown fraction, of fixed size, of the available data that is randomly refreshed throughout the algorithm. Inspired by the Approximate Bayesian Computation (ABC) literature, the subsampling process is guided by the fidelity to the observed data, as measured by summary statistics. The resulting algorithm, Informed Sub-Sampling MCMC (ISS-MCMC), is a generic and flexible approach which, contrary to existing scalable methodologies, preserves the simplicity of the Metropolis-Hastings algorithm. Even though exactness is lost, i.e. the chain distribution approximates the posterior, we study and quantify this bias theoretically and show on a diverse set of examples that the method yields excellent performance when the computational budget is limited. We also show that, when it is available and cheap to compute, setting the summary statistic to the maximum likelihood estimator is supported by theoretical arguments.
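As a rough illustration of how such an "informed" sub-sample refreshment can be written down, one plausible weighting of size-m subsets of the data by their fidelity to the full dataset is the following (generic notation; the tolerance, the summary statistics S and the norm are assumptions here, not necessarily the exact choice made in ISS-MCMC):

```latex
% Hypothetical weighting of sub-samples by summary-statistic fidelity.
% x_U denotes the observations indexed by U, x_{1:n} the full dataset.
\nu_{\epsilon}(U) \;\propto\; \exp\!\left( -\frac{1}{\epsilon}
  \bigl\lVert S(x_U) - S(x_{1:n}) \bigr\rVert \right),
\qquad U \subset \{1,\dots,n\}, \quad |U| = m.
```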
Umberto Picchini (2012)
Models defined by stochastic differential equations (SDEs) allow for the representation of random variability in dynamical systems. The relevance of this class of models is growing in many applied research areas, and they are already a standard tool for modelling e.g. financial, neuronal and population growth dynamics. However, inference for multidimensional SDE models is still very challenging, both computationally and theoretically. Approximate Bayesian computation (ABC) allows Bayesian inference to be performed for models which are sufficiently complex that the likelihood function is either analytically unavailable or computationally prohibitive to evaluate. A computationally efficient ABC-MCMC algorithm is proposed, halving the running time in our simulations. The focus is on the case where the SDE describes latent dynamics in state-space models; however, the methodology is not limited to the state-space framework. Simulation studies for a pharmacokinetics/pharmacodynamics model and for stochastic chemical reactions are considered, and a MATLAB package implementing our ABC-MCMC algorithm is provided.
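For readers unfamiliar with the basic scheme this paper accelerates, a minimal ABC-MCMC step looks roughly like the sketch below. The simulator (e.g. an Euler-Maruyama discretisation of the SDE), summary statistics, distance, tolerance and proposal are all placeholders; this is the vanilla algorithm under a hard-threshold kernel and symmetric proposal, not the paper's accelerated version.

```python
# Generic ABC-MCMC step for a model with an intractable likelihood but a cheap
# simulator (all helper functions are placeholders).
import numpy as np

def abc_mcmc_step(theta, y_obs, simulate, summary, log_prior, propose, eps, rng):
    theta_prop = propose(theta, rng)               # symmetric proposal assumed
    y_sim = simulate(theta_prop, rng)              # forward-simulate (e.g. the SDE)
    dist = np.linalg.norm(summary(y_sim) - summary(y_obs))
    if dist > eps:
        return theta                               # reject: simulation too far from data
    # Accept/reject on the prior ratio only (hard-threshold ABC kernel).
    log_alpha = log_prior(theta_prop) - log_prior(theta)
    return theta_prop if np.log(rng.random()) < log_alpha else theta
```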
Jean Daunizeau (2017)
Variational approaches to approximate Bayesian inference provide very efficient means of performing parameter estimation and model selection. Among these, so-called variational-Laplace or VL schemes rely on Gaussian approximations to posterior densities on model parameters. In this note, we review the main variants of VL approaches that follow from considering nonlinear models of continuous and/or categorical data. En passant, we also derive a few novel theoretical results that complete the portfolio of existing analyses of variational Bayesian approaches, including investigations of their asymptotic convergence. We also suggest practical ways of extending existing VL approaches to hierarchical generative models that include (e.g., precision) hyperparameters.
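For context, the Gaussian approximation at the heart of Laplace-type schemes can be sketched as follows, in generic notation rather than the note's own (VL schemes optimise a variational energy rather than simply expanding around the posterior mode):

```latex
% Laplace-type Gaussian approximation to a posterior density (sketch).
% theta* is a mode of the log-joint and Sigma the inverse negative Hessian there.
p(\theta \mid y) \;\approx\; \mathcal{N}\bigl(\theta;\, \theta^{*}, \Sigma\bigr),
\qquad
\theta^{*} = \arg\max_{\theta}\, \log p(y, \theta),
\qquad
\Sigma = \Bigl( -\nabla^{2}_{\theta} \log p(y, \theta)\big|_{\theta = \theta^{*}} \Bigr)^{-1}.
```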
We present a novel approach for the analysis of multivariate case-control georeferenced data using Bayesian inference in the context of disease mapping, where the spatial distribution of different types of cancers is analyzed. Extending other methodology in point pattern analysis, we propose a log-Gaussian Cox process for the point patterns of cases and controls, which accounts for risk factors, such as exposure to pollution sources, and includes a term to measure spatial residual variation. For each disease, its intensity is modeled on a baseline spatial effect (estimated from both controls and cases), a disease-specific spatial term and the effects of covariates that account for risk factors. By fitting these models the effect of the covariates on the set of cases can be assessed, and the residual spatial terms can be easily compared to detect areas of high risk not explained by the covariates. Three different types of effects to model exposure to pollution sources are considered: first, a fixed effect on the distance to the source; next, smooth terms on the distance to model non-linear effects, by means of a discrete random walk of order one and a Gaussian process in one dimension with a Matern covariance. Models are fit using the integrated nested Laplace approximation (INLA), so that the spatial terms are approximated using an approach based on solving stochastic partial differential equations (SPDE). Finally, this new framework is applied to a dataset of three different types of cancer and a set of controls from Alcala de Henares (Madrid, Spain). Available covariates include the distance to several polluting industries and socioeconomic indicators. Our findings point to a possible risk increase due to the proximity to some of these industries.
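A schematic form of the intensity just described, written in notation assumed here rather than taken from the paper, is:

```latex
% Schematic log-intensity for disease d at location s: shared baseline spatial
% effect S_0, disease-specific spatial term S_d, linear covariate effects and
% an effect f_d of the distance to a pollution source (fixed or smooth).
\log \lambda_d(s) \;=\; \alpha_d \;+\; S_0(s) \;+\; S_d(s)
  \;+\; \mathbf{z}(s)^{\top} \boldsymbol{\beta}_d \;+\; f_d\bigl(\mathrm{dist}(s)\bigr).
```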
Bayesian inference via standard Markov Chain Monte Carlo (MCMC) methods is too computationally intensive to handle large datasets, since the cost per step usually scales like $\Theta(n)$ in the number of data points $n$. We propose the Scalable Metropolis-Hastings (SMH) kernel that exploits Gaussian concentration of the posterior to require processing on average only $O(1)$ or even $O(1/\sqrt{n})$ data points per step. This scheme is based on a combination of factorized acceptance probabilities, procedures for fast simulation of Bernoulli processes, and control variate ideas. Contrary to many MCMC subsampling schemes such as fixed step-size Stochastic Gradient Langevin Dynamics, our approach is exact insofar as the invariant distribution is the true posterior and not an approximation to it. We characterise the performance of our algorithm theoretically, and give realistic and verifiable conditions under which it is geometrically ergodic. This theory is borne out by empirical results that demonstrate overall performance benefits over standard Metropolis-Hastings and various subsampling algorithms.
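The factorisation such subsampling kernels exploit can be written generically as follows; this is the standard Metropolis-Hastings acceptance probability for conditionally independent observations, not the SMH kernel itself, whose acceptance rule additionally uses control variates and fast Bernoulli-process simulation:

```latex
% For conditionally independent data the likelihood ratio in the MH acceptance
% probability factorises over observations, which is what makes it possible to
% touch only a few data points per step.
\alpha(\theta, \theta') \;=\; \min\!\left\{ 1,\;
  \frac{p(\theta')\, q(\theta \mid \theta')}{p(\theta)\, q(\theta' \mid \theta)}
  \prod_{i=1}^{n} \frac{p(y_i \mid \theta')}{p(y_i \mid \theta)} \right\}.
```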