
Randomized Reduced Forward Models for Efficient Metropolis--Hastings MCMC, with Application to Subsurface Fluid Flow and Capacitance Tomography

Added by Colin Fox
Publication date: 2020
Language: English





Bayesian modelling and computational inference by Markov chain Monte Carlo (MCMC) is a principled framework for large-scale uncertainty quantification, but it is limited in practice by computational cost when implemented in the simplest form, which requires simulating an accurate computer model at each iteration of the MCMC. Delayed-acceptance Metropolis--Hastings MCMC leverages a reduced model for the forward map to lower the compute cost per iteration, but it necessarily reduces statistical efficiency, which can, without care, mean no reduction in the computational cost of computing estimates to a desired accuracy. Randomizing the reduced model for the forward map can dramatically improve computational efficiency: the low cost per iteration is maintained while appreciable loss of statistical efficiency is avoided. Randomized maps are constructed by a posteriori adaptive tuning of a randomized, locally corrected deterministic reduced model. Equivalently, the approximate posterior distribution may be viewed as induced by a modified likelihood function for use with the reduced map, with parameters tuned to optimize the quality of the approximation to the correct posterior distribution. Conditions for adaptive MCMC algorithms allow practical approximations and algorithms with guaranteed ergodicity for the target distribution. Good statistical and computational efficiency is demonstrated in examples of calibrating large-scale numerical models of geothermal reservoirs and of electrical capacitance tomography.
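The two-stage delayed-acceptance step underlying this work can be illustrated with a minimal sketch. Everything below is a stand-in, not the paper's models: the "full" and "reduced" posteriors are cheap 1-D Gaussians (a real application would wrap an expensive forward simulation), and the second-stage ratio divides out the first-stage screening so the chain still targets the exact posterior:

```python
import numpy as np

rng = np.random.default_rng(0)

def log_post_full(x):
    # stand-in for the expensive posterior (would call the full simulator)
    return -0.5 * x * x          # N(0, 1)

def log_post_reduced(x):
    # cheap reduced model: deliberately a slightly biased approximation
    return -0.55 * x * x

def delayed_acceptance_mh(n_iter=20000, step=1.5):
    x, full_evals = 0.0, 0
    chain = np.empty(n_iter)
    for i in range(n_iter):
        y = x + step * rng.standard_normal()
        # Stage 1: screen the proposal using only the reduced model
        log_a1 = log_post_reduced(y) - log_post_reduced(x)
        if np.log(rng.random()) < log_a1:
            # Stage 2: evaluate the full model only for promoted proposals;
            # subtracting log_a1 divides out the stage-1 screening, so the
            # chain targets the exact posterior
            full_evals += 1
            log_a2 = (log_post_full(y) - log_post_full(x)) - log_a1
            if np.log(rng.random()) < log_a2:
                x = y
        chain[i] = x
    return chain, full_evals
```

The reduced model acts only as a screen: stage 2 restores exactness, and the abstract's contribution is to randomize and adaptively tune the reduced map so that stage-2 rejections, and hence the loss of statistical efficiency, become rare.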



Related research

We propose a new kernel for Metropolis--Hastings called Directional Metropolis--Hastings (DMH), with a multivariate update in which the proposal kernel has a state-dependent covariance matrix. We use the derivative of the target distribution at the current state to change the orientation of the proposal distribution, thereby producing a more plausible proposal. We study the conditions for geometric ergodicity of our algorithm and provide necessary and sufficient conditions for convergence. We also suggest a scheme for adaptively updating the variance parameter and study the conditions for ergodicity of the adaptive algorithm. We demonstrate the performance of our algorithm on a Bayesian generalized linear model problem.
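As a rough illustration of a state-dependent proposal covariance of this kind, the sketch below stretches a Gaussian proposal along the gradient of the log-target at the current state; the specific covariance formula, target, and tuning constants are invented for the example and are not the DMH construction from the paper. Because the proposal depends on the state, the proposal-density ratio must appear in the acceptance probability:

```python
import numpy as np

rng = np.random.default_rng(1)

def log_target(x):
    # anisotropic 2-D Gaussian: variances 1 and 0.25
    return -0.5 * (x[0] ** 2 + 4.0 * x[1] ** 2)

def grad_log_target(x):
    return np.array([-x[0], -4.0 * x[1]])

def proposal_cov(x, sigma2=0.25, kappa=0.5):
    # isotropic base, stretched along the gradient direction at x
    g = grad_log_target(x)
    n2 = g @ g
    S = sigma2 * np.eye(2)
    if n2 > 1e-12:
        S = S + kappa * np.outer(g, g) / n2
    return S

def mvn_logpdf(y, mean, cov):
    d = y - mean
    _, logdet = np.linalg.slogdet(cov)
    return -0.5 * (d @ np.linalg.solve(cov, d) + logdet + 2.0 * np.log(2.0 * np.pi))

def dmh(n_iter=20000):
    x = np.zeros(2)
    chain = np.empty((n_iter, 2))
    for i in range(n_iter):
        Sx = proposal_cov(x)
        y = rng.multivariate_normal(x, Sx)
        # state-dependent proposal is asymmetric, so the q-ratio is required
        log_a = (log_target(y) - log_target(x)
                 + mvn_logpdf(x, y, proposal_cov(y)) - mvn_logpdf(y, x, Sx))
        if np.log(rng.random()) < log_a:
            x = y
        chain[i] = x
    return chain
```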
MCMC algorithms such as Metropolis--Hastings are slowed down by the computation of complex target distributions, as exemplified by huge datasets. In this paper we offer an approach to reduce the computational cost of such algorithms by a simple and universal divide-and-conquer strategy. The idea behind this generic acceleration is to divide the acceptance step into several parts, aiming at a major reduction in computing time that outranks the corresponding reduction in acceptance probability. The division decomposes the prior × likelihood term into a product such that some of its components are much cheaper to compute than others. Each component can be sequentially compared with a uniform variate, the first rejection signalling that the proposed value is considered no further. This approach can in turn be accelerated as part of a prefetching algorithm that takes advantage of the parallel abilities of the computer at hand. We illustrate these accelerating features on a series of toy and realistic examples.
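A minimal sketch of this factorised acceptance step, with assumed stand-in factors: the log-posterior is split into a cheap and a costly term, each compared against its own uniform variate, and the costly term is evaluated only when the cheap comparison passes. For a symmetric proposal, the product of the per-factor acceptance probabilities reproduces the full Metropolis--Hastings ratio:

```python
import numpy as np

rng = np.random.default_rng(2)

# Split the log-posterior into a cheap factor and a costly factor;
# together they define the target N(0, 2/3).  Both are stand-ins.
def cheap_factor(x):      # e.g. prior plus a small slice of the data
    return -0.5 * x * x

def costly_factor(x):     # e.g. the bulk of the likelihood
    return -0.25 * x * x

def factored_mh(n_iter=30000, step=1.2):
    x, costly_evals = 0.0, 0
    chain = np.empty(n_iter)
    for i in range(n_iter):
        y = x + step * rng.standard_normal()
        # each factor is compared against its own uniform variate;
        # the first rejection stops evaluation of the remaining factors
        accept = np.log(rng.random()) < cheap_factor(y) - cheap_factor(x)
        if accept:
            costly_evals += 1
            accept = np.log(rng.random()) < costly_factor(y) - costly_factor(x)
        if accept:
            x = y
        chain[i] = x
    return chain, costly_evals
```

The saving comes from `costly_evals` being strictly smaller than the number of iterations: proposals killed by the cheap factor never touch the expensive one.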
The ability to generate samples of the random effects from their conditional distributions is fundamental for inference in mixed effects models. Random walk Metropolis is widely used to conduct such sampling, but this method can converge slowly for medium-dimension problems, or when the joint structure of the distributions to sample is complex. We propose a Metropolis--Hastings (MH) algorithm based on a multidimensional Gaussian proposal that takes into account the joint conditional distribution of the random effects and does not require any tuning, in contrast with more sophisticated samplers such as the Metropolis-adjusted Langevin algorithm or the No-U-Turn Sampler, which involve costly tuning runs or intensive computation. Indeed, this distribution is obtained automatically from a Laplace approximation of the original model. We show that this approximation is equivalent to linearizing the model in the case of continuous data. Numerical experiments based on real data highlight the very good performance of the proposed method for continuous data models.
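The idea can be sketched as an independence sampler whose Gaussian proposal comes from a Laplace approximation. The 1-D target below, its mode, and its curvature are invented for illustration (mode at 0, curvature -1 there, hence an N(0, 1) proposal); the paper's setting of conditional distributions of random effects in mixed models is far richer:

```python
import numpy as np

rng = np.random.default_rng(3)

def log_target(x):
    # mildly non-Gaussian stand-in for a random effect's conditional
    return -0.5 * x * x - 0.1 * x ** 4

# Laplace approximation: expand log_target to second order at its mode.
# Here the mode is 0 and the curvature there is -1, giving an N(0, 1) proposal.
mu_lap, var_lap = 0.0, 1.0

def log_prop(x):
    return -0.5 * (x - mu_lap) ** 2 / var_lap

def laplace_imh(n_iter=30000):
    x = 0.0
    chain = np.empty(n_iter)
    for i in range(n_iter):
        # independence sampler: the proposal ignores the current state
        y = mu_lap + np.sqrt(var_lap) * rng.standard_normal()
        log_a = (log_target(y) - log_target(x)) - (log_prop(y) - log_prop(x))
        if np.log(rng.random()) < log_a:
            x = y
        chain[i] = x
    return chain
```

Because the proposal is a global approximation of the target rather than a local random walk, no step-size tuning is needed, which mirrors the tuning-free claim in the abstract.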
Can we make Bayesian posterior MCMC sampling more efficient when faced with very large datasets? We argue that computing the likelihood for N datapoints in the Metropolis-Hastings (MH) test to reach a single binary decision is computationally inefficient. We introduce an approximate MH rule based on a sequential hypothesis test that allows us to accept or reject samples with high confidence using only a fraction of the data required for the exact MH rule. While this method introduces an asymptotic bias, we show that this bias can be controlled and is more than offset by a decrease in variance due to our ability to draw more samples per unit of time.
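A sketch of the sequential-test idea on a toy problem, with all details assumed (N(theta, 1) data, a flat prior, and a plain z-test with a finite-population correction standing in for the paper's exact procedure): the subsample grows until the observed mean log-likelihood difference is confidently above or below the threshold implied by the MH uniform variate:

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(4)

# Synthetic problem: N observations from N(theta, 1); flat prior on theta,
# so each datapoint contributes log-likelihood -0.5 * (x_i - theta)^2.
N = 100_000
data = 1.0 + rng.standard_normal(N)

def approx_mh_test(theta, prop, log_u, eps=0.05):
    # Grow the subsample until a z-test on the mean log-likelihood
    # difference decides the MH comparison with confidence eps.
    perm = rng.permutation(N)
    n = 1000
    mu0 = log_u / N  # threshold; symmetric proposal and flat prior assumed
    while True:
        xs = data[perm[:n]]
        diffs = -0.5 * (xs - prop) ** 2 + 0.5 * (xs - theta) ** 2
        m, s = diffs.mean(), diffs.std(ddof=1)
        if n == N:
            return m > mu0, n
        se = s / sqrt(n) * sqrt(1.0 - n / N)  # finite-population correction
        z = abs(m - mu0) / max(se, 1e-12)
        if 1.0 - 0.5 * (1.0 + erf(z / sqrt(2.0))) < eps:  # one-sided p-value
            return m > mu0, n
        n = min(N, 2 * n)

def subsample_mh(n_iter=600, step=0.05):
    theta, used = 0.8, 0
    chain = np.empty(n_iter)
    for i in range(n_iter):
        prop = theta + step * rng.standard_normal()
        accept, n = approx_mh_test(theta, prop, np.log(rng.random()))
        used += n
        if accept:
            theta = prop
        chain[i] = theta
    return chain, used / n_iter
```

Decisions made before `n` reaches `N` are where the speed-up comes from, and each early decision carries error probability at most `eps`, which is the controlled asymptotic bias the abstract refers to.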
Somak Dutta, 2010
In this article we propose a multiplication-based random walk Metropolis--Hastings (MH) algorithm on the real line. We call it the random dive MH (RDMH) algorithm. This algorithm, though simple to apply, had not been studied earlier in the Markov chain Monte Carlo literature. The associated kernel is shown to have standard properties such as irreducibility, aperiodicity and Harris recurrence under some mild assumptions, which ensure basic convergence (ergodicity) of the kernel. Further, the kernel is shown to be geometrically ergodic for a large class of target densities on $\mathbb{R}$. This class even contains realistic target densities for which random walk or Langevin MH are not geometrically ergodic. Three simulation studies are given to demonstrate the mixing property and the superiority of RDMH over standard MH algorithms on the real line. Share-price return data are also analyzed and the results are compared with those available in the literature.
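A sketch of a multiplicative random-walk move of this flavour, with the "dive" factor ε drawn uniformly on (-1, 1): with probability 1/2 the state is multiplied by ε (moving toward zero, possibly flipping sign), otherwise it is divided by ε (jumping outward), and the Jacobian |ε| enters the acceptance ratio. The heavy-tailed Cauchy target and all tuning details are invented for the example, since heavy tails are exactly where ordinary random walk MH struggles:

```python
import numpy as np

rng = np.random.default_rng(5)

def log_target(x):
    # heavy-tailed standard Cauchy target, up to an additive constant
    return -np.log1p(x * x)

def rdmh(n_iter=50000):
    x = 1.0                      # any nonzero start; moves never hit exactly 0
    chain = np.empty(n_iter)
    for i in range(n_iter):
        eps = rng.uniform(-1.0, 1.0)
        while eps == 0.0:        # exclude the degenerate value
            eps = rng.uniform(-1.0, 1.0)
        if rng.random() < 0.5:
            y = x * eps          # "dive" toward zero, possibly flipping sign
            log_a = log_target(y) - log_target(x) + np.log(abs(eps))
        else:
            y = x / eps          # outward jump of unbounded size
            log_a = log_target(y) - log_target(x) - np.log(abs(eps))
        if np.log(rng.random()) < log_a:
            x = y
        chain[i] = x
    return chain
```

The outward move `x / eps` can reach arbitrarily far into the tails in a single step, which is the intuition behind the good ergodicity properties claimed for heavy-tailed targets.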
