
Directional Metropolis-Hastings

Posted by Abhirup Mallik
Publication date: 2017
Research field: Mathematical Statistics
Paper language: English





We propose a new kernel for Metropolis-Hastings, called Directional Metropolis-Hastings (DMH), with a multivariate update in which the proposal kernel has a state-dependent covariance matrix. We use the derivative of the target distribution at the current state to change the orientation of the proposal distribution, thereby producing more plausible proposals. We study the conditions for geometric ergodicity of our algorithm and provide necessary and sufficient conditions for convergence. We also suggest a scheme for adaptively updating the variance parameter and study the conditions for ergodicity of the adaptive algorithm. We demonstrate the performance of our algorithm on a Bayesian generalized linear model problem.
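A minimal sketch of one DMH-style update may help fix ideas: the proposal is Gaussian, centred at the current state, with a covariance stretched along the gradient of the log target; because the covariance depends on the state, the Hastings correction does not cancel. The example target, the stretch factor alpha, the base scale sigma, and the finite-difference gradient are all illustrative assumptions, not details from the paper.

```python
import numpy as np
from scipy.stats import multivariate_normal

def log_target(x):
    # Example target: a zero-mean bivariate Gaussian with correlation 0.9.
    cov = np.array([[1.0, 0.9], [0.9, 1.0]])
    return multivariate_normal(mean=np.zeros(2), cov=cov).logpdf(x)

def grad_log_target(x, h=1e-5):
    # Finite-difference gradient; a closed form would normally be used.
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (log_target(x + e) - log_target(x - e)) / (2 * h)
    return g

def proposal_cov(x, sigma=0.5, alpha=4.0):
    # State-dependent covariance: an isotropic part plus a rank-one term
    # that stretches the proposal along the gradient direction at x.
    g = grad_log_target(x)
    norm = np.linalg.norm(g)
    if norm < 1e-12:
        return sigma ** 2 * np.eye(len(x))
    u = g / norm
    return sigma ** 2 * (np.eye(len(x)) + alpha * np.outer(u, u))

def dmh_step(x, rng):
    Sx = proposal_cov(x)
    y = rng.multivariate_normal(x, Sx)
    Sy = proposal_cov(y)
    # The proposal is not symmetric (its covariance depends on the state),
    # so the Hastings correction q(y -> x) / q(x -> y) is required.
    log_ratio = (log_target(y) - log_target(x)
                 + multivariate_normal(mean=y, cov=Sy).logpdf(x)
                 - multivariate_normal(mean=x, cov=Sx).logpdf(y))
    return y if np.log(rng.uniform()) < log_ratio else x

rng = np.random.default_rng(0)
x, samples = np.zeros(2), []
for _ in range(2000):
    x = dmh_step(x, rng)
    samples.append(x)
```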




Read also

MCMC algorithms such as Metropolis-Hastings algorithms are slowed down by the computation of complex target distributions, as exemplified by huge datasets. In this paper we offer an approach to reduce the computational costs of such algorithms by a simple and universal divide-and-conquer strategy. The idea behind this generic acceleration is to divide the acceptance step into several parts, aiming at a major reduction in computing time that outranks the corresponding reduction in acceptance probability. The division decomposes the prior × likelihood term into a product such that some of its components are much cheaper to compute than others. Each of the components can be sequentially compared with a uniform variate, the first rejection signalling that the proposed value is considered no further. This approach can in turn be accelerated as part of a prefetching algorithm taking advantage of the parallel abilities of the computer at hand. We illustrate these accelerating features on a series of toy and realistic examples.
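A sketch of the divide-and-conquer acceptance step described above, assuming a symmetric random-walk proposal and a target that factorizes into a cheap factor and an expensive one; the specific factors below are hypothetical stand-ins. Each factor ratio is compared with its own uniform variate, and the first rejection stops the evaluation, so the expensive factor is often never computed.

```python
import numpy as np

def log_f1(x):
    # Cheap factor, e.g. the prior.
    return -0.5 * x @ x

def log_f2(x):
    # Expensive factor, e.g. a likelihood over a large dataset
    # (artificially made costly here by a large sum).
    data = np.linspace(-1.0, 1.0, 10_000)
    return -0.5 * np.sum((x[:, None] - data) ** 2) / 10_000

def delayed_acceptance_step(x, rng, scale=0.5):
    y = x + scale * rng.standard_normal(x.shape)  # symmetric proposal
    # Compare each factor ratio with its own uniform variate; stop at
    # the first rejection (remaining factors are never evaluated).
    for log_f in (log_f1, log_f2):
        if np.log(rng.uniform()) >= log_f(y) - log_f(x):
            return x
    return y

rng = np.random.default_rng(1)
x, chain = np.zeros(2), []
for _ in range(1000):
    x = delayed_acceptance_step(x, rng)
    chain.append(x)
```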
Somak Dutta, 2010
In this article we propose a multiplication-based random walk Metropolis-Hastings (MH) algorithm on the real line, which we call the random dive MH (RDMH) algorithm. This algorithm, though simple to apply, had not been studied earlier in the Markov chain Monte Carlo literature. The associated kernel is shown to have standard properties like irreducibility, aperiodicity and Harris recurrence under some mild assumptions, which ensure basic convergence (ergodicity) of the kernel. Further, the kernel is shown to be geometrically ergodic for a large class of target densities on $\mathbb{R}$. This class even contains realistic target densities for which random walk or Langevin MH are not geometrically ergodic. Three simulation studies are given to demonstrate the mixing property and the superiority of RDMH over standard MH algorithms on the real line. A share-price return dataset is also analyzed and the results are compared with those available in the literature.
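The multiplicative move can be sketched as follows, assuming eps is drawn uniformly on (-1, 1) and the chain proposes y = eps*x or y = x/eps with equal probability, with the corresponding Jacobian term |eps| or 1/|eps| in the acceptance ratio; the eps-distribution and the Cauchy target are illustrative choices, not taken from the article.

```python
import numpy as np

def log_target(x):
    # Example heavy-tailed target: standard Cauchy (unnormalized).
    return -np.log1p(x * x)

def rdmh_step(x, rng):
    eps = rng.uniform(-1.0, 1.0)
    if abs(eps) < 1e-12:  # avoid division by (almost) zero
        return x
    if rng.uniform() < 0.5:
        y, log_jac = eps * x, np.log(abs(eps))    # "dive" toward zero
    else:
        y, log_jac = x / eps, -np.log(abs(eps))   # "dive" away from zero
    log_ratio = log_target(y) - log_target(x) + log_jac
    return y if np.log(rng.uniform()) < log_ratio else x

rng = np.random.default_rng(2)
x, chain = 1.0, []
for _ in range(5000):
    x = rdmh_step(x, rng)
    chain.append(x)
```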
Statistical Data Assimilation (SDA) is the transfer of information from field or laboratory observations to a user-selected model of the dynamical system producing those observations. The data are noisy and the model has errors; the information transfer addresses properties of the conditional probability distribution of the states of the model conditioned on the observations. The quantities of interest in SDA are the conditional expected values of functions of the model state, and these require the approximate evaluation of high-dimensional integrals. We introduce a conditional probability distribution and use the Laplace method with annealing to identify the maxima of the conditional probability distribution. The annealing method slowly increases the precision term of the model as it enters the Laplace method. In this paper, we extend the idea of precision annealing (PA) to Monte Carlo calculations of conditional expected values using Metropolis-Hastings methods.
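As a rough illustration of the precision-annealing idea, the sketch below runs a Metropolis-Hastings inner loop on a toy action Rm*(measurement error) + Rf*(model error) while geometrically increasing the model-precision term Rf; the logistic-map dynamics, the schedule, and all constants are hypothetical choices, not the paper's.

```python
import numpy as np

def action(X, obs, Rf, Rm=1.0):
    meas = np.sum((X - obs) ** 2)                                # data mismatch
    model = np.sum((X[1:] - 3.7 * X[:-1] * (1 - X[:-1])) ** 2)   # dynamics mismatch
    return Rm * meas + Rf * model

def mh_at_precision(X, obs, Rf, rng, n_steps=500, scale=0.05):
    # Standard random-walk Metropolis on exp(-action) at fixed precision Rf.
    for _ in range(n_steps):
        Y = X + scale * rng.standard_normal(X.shape)
        if np.log(rng.uniform()) < action(X, obs, Rf) - action(Y, obs, Rf):
            X = Y
    return X

rng = np.random.default_rng(3)
obs = rng.uniform(0.2, 0.8, size=50)  # stand-in for noisy observations
X = obs.copy()                        # initialize the state path at the data
Rf = 0.01
for _ in range(20):                   # slowly raise the model-precision term
    X = mh_at_precision(X, obs, Rf, rng)
    Rf *= 2.0                         # geometric annealing schedule
```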
This paper develops a Bayesian computational platform at the interface between posterior sampling and optimization in models whose marginal likelihoods are difficult to evaluate. Inspired by adversarial optimization, namely Generative Adversarial Networks (GANs), we reframe the likelihood function estimation problem as a classification problem. Pitting a Generator, who simulates fake data, against a Classifier, who tries to distinguish them from the real data, one obtains likelihood (ratio) estimators which can be plugged into the Metropolis-Hastings algorithm. The resulting Markov chains generate, at a steady state, samples from an approximate posterior whose asymptotic properties we characterize. Drawing upon connections with empirical Bayes and Bayesian mis-specification, we quantify the convergence rate in terms of the contraction speed of the actual posterior and the convergence rate of the Classifier. Asymptotic normality results are also provided which justify the inferential potential of our approach. We illustrate the usefulness of our approach on examples which have posed a challenge for existing Bayesian likelihood-free approaches.
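A toy sketch of the classification idea, using scikit-learn's LogisticRegression as the Classifier: for a classifier D trained to separate observed from simulated data, logit D(x) estimates log p_obs(x) - log p_theta(x), and the p_obs term cancels in the Metropolis-Hastings ratio. The Gaussian location simulator, the prior, and the sample sizes are illustrative assumptions, not the paper's construction.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
obs = rng.normal(2.0, 1.0, size=500)       # observed data; true theta = 2

def simulate(theta, n=500):
    # Likelihood-free simulator: we can sample from it but not evaluate it.
    return rng.normal(theta, 1.0, size=n)

def log_lik_hat(theta):
    fake = simulate(theta)
    X = np.concatenate([obs, fake])[:, None]
    y = np.concatenate([np.ones_like(obs), np.zeros_like(fake)])
    clf = LogisticRegression().fit(X, y)
    # logit(D(x)) ~= log p_obs(x) - log p_theta(x); summing over the observed
    # data and negating gives the log-likelihood of theta up to a constant.
    return -np.sum(clf.decision_function(obs[:, None]))

def log_prior(theta):
    return -0.5 * theta ** 2               # N(0, 1) prior

theta = 0.0
cur = log_lik_hat(theta) + log_prior(theta)
chain = []
for _ in range(200):
    prop = theta + 0.3 * rng.standard_normal()
    cand = log_lik_hat(prop) + log_prior(prop)
    if np.log(rng.uniform()) < cand - cur:
        theta, cur = prop, cand
    chain.append(theta)
```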
Bayesian modelling and computational inference by Markov chain Monte Carlo (MCMC) is a principled framework for large-scale uncertainty quantification, though it is limited in practice by computational cost when implemented in the simplest form, which requires simulating an accurate computer model at each iteration of the MCMC. The delayed acceptance Metropolis-Hastings MCMC leverages a reduced model for the forward map to lower the compute cost per iteration, though it necessarily reduces statistical efficiency, which can, without care, lead to no reduction in the computational cost of computing estimates to a desired accuracy. Randomizing the reduced model for the forward map can dramatically improve computational efficiency, by maintaining the low cost per iteration while also avoiding appreciable loss of statistical efficiency. Randomized maps are constructed by a posteriori adaptive tuning of a randomized and locally-corrected deterministic reduced model. Equivalently, the approximated posterior distribution may be viewed as induced by a modified likelihood function for use with the reduced map, with parameters tuned to optimize the quality of the approximation to the correct posterior distribution. Conditions for adaptive MCMC algorithms allow practical approximations and algorithms that have guaranteed ergodicity for the target distribution. Good statistical and computational efficiencies are demonstrated in examples of calibration of large-scale numerical models of geothermal reservoirs and electrical capacitance tomography.
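A minimal sketch of the two-stage (delayed acceptance) step, assuming a symmetric random-walk proposal: the cheap reduced model screens proposals first, and a second-stage ratio with the exact model corrects the screening so the exact posterior remains the target. Both log-posteriors below are hypothetical stand-ins for an expensive forward map and its reduced version.

```python
import numpy as np

def log_post_exact(x):
    # Stand-in for an expensive forward simulation plus likelihood.
    return -0.5 * np.sum((x - 1.0) ** 2) - 0.1 * np.sum(np.sin(3 * x) ** 2)

def log_post_reduced(x):
    # Cheap surrogate: here, the exact posterior minus the oscillatory term.
    return -0.5 * np.sum((x - 1.0) ** 2)

def da_step(x, rng, scale=0.6):
    y = x + scale * rng.standard_normal(x.shape)
    # Stage 1: screen the proposal with the cheap reduced model.
    if np.log(rng.uniform()) >= log_post_reduced(y) - log_post_reduced(x):
        return x  # rejected without ever touching the exact model
    # Stage 2: correct with the exact model so the target is preserved.
    log_ratio = (log_post_exact(y) - log_post_exact(x)
                 + log_post_reduced(x) - log_post_reduced(y))
    return y if np.log(rng.uniform()) < log_ratio else x

rng = np.random.default_rng(5)
x, chain = np.zeros(2), []
for _ in range(2000):
    x = da_step(x, rng)
    chain.append(x)
```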