
Metropolis Sampling

Posted by Luca Martino
Publication date: 2017
Research field: Mathematical Statistics
Paper language: English





Monte Carlo (MC) sampling methods are widely applied in Bayesian inference, system simulation, and optimization problems. Markov chain Monte Carlo (MCMC) algorithms are a well-known class of MC methods which generate a Markov chain with the desired invariant distribution. In this document, we focus on the Metropolis-Hastings (MH) sampler, which can be considered the atom of the MCMC techniques, and introduce its basic notions and properties. We describe in detail all the elements involved in the MH algorithm and the most relevant variants. Several improvements and recent extensions proposed in the literature are also briefly discussed, providing a quick but exhaustive overview of the current landscape of Metropolis-based sampling.
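To make the core accept-reject mechanism concrete, here is a minimal sketch of a random-walk MH sampler for a one-dimensional target known only up to a normalizing constant. The Gaussian target, step size, and chain length are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def metropolis_hastings(log_target, x0, n_steps, step_size=1.0, seed=None):
    """Random-walk Metropolis-Hastings with a symmetric Gaussian proposal."""
    rng = np.random.default_rng(seed)
    x, log_p = x0, log_target(x0)
    chain = np.empty(n_steps)
    for t in range(n_steps):
        # Propose from a symmetric random walk centered at the current state.
        x_prop = x + step_size * rng.standard_normal()
        log_p_prop = log_target(x_prop)
        # Accept with probability min(1, p(x')/p(x)); the symmetric proposal
        # densities cancel, leaving only the (unnormalized) target ratio.
        if np.log(rng.uniform()) < log_p_prop - log_p:
            x, log_p = x_prop, log_p_prop
        chain[t] = x
    return chain

# Example: sample from a standard Gaussian known only up to a constant.
samples = metropolis_hastings(lambda x: -0.5 * x**2, x0=0.0, n_steps=10_000, seed=0)
```

Because only the ratio of target values enters the acceptance probability, the normalizing constant of the target never needs to be known, which is precisely what makes MH the workhorse of Bayesian inference.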


Read also

We consider the problem of sampling from a strongly log-concave density in $\mathbb{R}^d$, and prove a non-asymptotic upper bound on the mixing time of the Metropolis-adjusted Langevin algorithm (MALA). The method draws samples by simulating a Markov chain obtained from the discretization of an appropriate Langevin diffusion, combined with an accept-reject step. Relative to known guarantees for the unadjusted Langevin algorithm (ULA), our bounds show that the use of an accept-reject step in MALA leads to an exponentially improved dependence on the error tolerance. Concretely, in order to obtain samples with TV error at most $\delta$ for a density with condition number $\kappa$, we show that MALA requires $\mathcal{O}\big(\kappa d \log(1/\delta)\big)$ steps, as compared to the $\mathcal{O}\big(\kappa^2 d/\delta^2\big)$ steps established in past work on ULA. We also demonstrate the gains of MALA over ULA for weakly log-concave densities. Furthermore, we derive mixing time bounds for the Metropolized random walk (MRW) and show that it mixes a factor of $\mathcal{O}(\kappa)$ slower than MALA. We provide numerical examples that support our theoretical findings, and demonstrate the benefits of Metropolis-Hastings adjustment for Langevin-type sampling algorithms.
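As a companion to the abstract above, here is a minimal sketch of one MALA transition: a gradient-informed Langevin proposal followed by an MH correction that removes the discretization bias. The quadratic log-density and step size in the usage example are illustrative assumptions, not choices from the paper.

```python
import numpy as np

def mala_step(x, log_target, grad_log_target, h, rng):
    """One MALA transition: Langevin proposal + Metropolis-Hastings correction."""
    # Proposal: one Euler step of the overdamped Langevin diffusion plus noise.
    y = x + h * grad_log_target(x) + np.sqrt(2 * h) * rng.standard_normal(x.shape)

    def log_q(b, a):
        # Log of the Gaussian proposal density q(b | a) induced by the Euler step.
        diff = b - a - h * grad_log_target(a)
        return -np.dot(diff, diff) / (4 * h)

    # Accept-reject step: makes the chain exactly invariant for the target,
    # which is what yields the improved error-tolerance dependence over ULA.
    log_alpha = (log_target(y) + log_q(x, y)) - (log_target(x) + log_q(y, x))
    return y if np.log(rng.uniform()) < log_alpha else x

# Example: sample a 2-d standard Gaussian target.
rng = np.random.default_rng(0)
x = np.zeros(2)
for _ in range(1_000):
    x = mala_step(x, lambda z: -0.5 * z @ z, lambda z: -z, h=0.1, rng=rng)
```

Dropping the accept-reject step recovers ULA, whose stationary distribution is only an approximation of the target; the correction is what the paper shows buys the $\log(1/\delta)$ dependence.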
The original motivation to build a quantum computer came from Feynman, who envisaged a machine capable of simulating generic quantum mechanical systems, a task that is believed to be intractable for classical computers. Such a machine would have a wide range of applications in the simulation of many-body quantum physics, including condensed matter physics, chemistry, and high energy physics. Part of Feynman's challenge was met by Lloyd, who showed how to approximately decompose the time-evolution operator of interacting quantum particles into a short sequence of elementary gates, suitable for operation on a quantum computer. However, this left open the problem of how to simulate the equilibrium and static properties of quantum systems. This requires the preparation of ground and Gibbs states on a quantum computer. For classical systems, this problem is solved by the ubiquitous Metropolis algorithm, a method that has basically acquired a monopoly on the simulation of interacting particles. Here, we demonstrate how to implement a quantum version of the Metropolis algorithm on a quantum computer. This algorithm permits sampling directly from the eigenstates of the Hamiltonian and thus evades the sign problem present in classical simulations. A small-scale implementation of this algorithm can already be achieved with today's technology.
We propose and analyze a generalized splitting method to sample approximately from a distribution conditional on the occurrence of a rare event. This has important applications in a variety of contexts in operations research, engineering, and computational statistics. The method uses independent trials starting from a single particle. We exploit this independence to obtain asymptotic and non-asymptotic bounds on the total variation error of the sampler. Our main finding is that the approximation error depends crucially on the relative variability of the number of points produced by the splitting algorithm in one run, and that this relative variability can be readily estimated via simulation. We illustrate the relevance of the proposed method on an application in which one needs to sample (approximately) from an intractable posterior density in Bayesian inference.
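To illustrate the mechanism, here is a minimal sketch of one independent splitting run in the spirit of the abstract: a single starting particle, a ladder of intermediate levels, and MCMC moves between levels. The Gaussian prior, sum score, level schedule, and split factor are illustrative assumptions, not the paper's tuned choices.

```python
import numpy as np

def gs_run(score, levels, n_splits, mcmc_move, sample_prior, rng):
    """One independent generalized-splitting run, started from a single particle.

    Returns the points that reach the final level; a run may die out and
    return none, and the run-to-run variability of this count is exactly the
    quantity the abstract identifies as driving the approximation error.
    """
    x = sample_prior(rng)
    particles = [x] if score(x) >= levels[0] else []
    for t in range(1, len(levels)):
        offspring = []
        for x in particles:
            # Split each survivor into n_splits copies, each moved by an MCMC
            # kernel invariant for the law conditioned on {score >= levels[t-1]};
            # copies that also clear the next level survive.
            for _ in range(n_splits):
                y = mcmc_move(x, levels[t - 1], rng)
                if score(y) >= levels[t]:
                    offspring.append(y)
        particles = offspring
    return particles

# Illustrative ingredients (assumptions): N(0, I_2) prior, score(x) = x_1 + x_2,
# and a random-walk Metropolis move restricted to the current level set.
score = lambda x: x.sum()

def mcmc_move(x, level, rng, step=0.5):
    y = x + step * rng.standard_normal(x.shape)
    ok = score(y) >= level and np.log(rng.uniform()) < -0.5 * (y @ y - x @ x)
    return y if ok else x

rng = np.random.default_rng(1)
out = gs_run(score, levels=[1.0, 2.0, 3.0], n_splits=4,
             mcmc_move=mcmc_move, sample_prior=lambda r: r.standard_normal(2), rng=rng)
```

Independent repetitions of `gs_run` give the independent trials the paper exploits for its total variation bounds.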
Under measurement constraints, responses are expensive to measure and initially unavailable on most records in the dataset, but the covariates are available for the entire dataset. Our goal is to sample a relatively small portion of the dataset where the expensive responses will be measured, so that the resultant sampling estimator is statistically efficient. Measurement constraints require that the sampling probabilities depend only on a very small set of the responses. A sampling procedure that uses responses on at most a small pilot sample will be called response-free. We propose a response-free sampling procedure (OSUMC) for generalized linear models (GLMs). Using the A-optimality criterion, i.e., the trace of the asymptotic variance, the resultant estimator is statistically efficient within a class of sampling estimators. We establish the unconditional asymptotic distribution of a general class of response-free sampling estimators. This result is novel compared with the existing conditional results obtained by conditioning on both covariates and responses. Under our unconditional framework, the subsamples are no longer independent, and new martingale techniques are developed for our asymptotic theory. We further derive the A-optimal response-free sampling distribution. Since this distribution depends on population-level quantities, we propose the Optimal Sampling Under Measurement Constraints (OSUMC) algorithm to approximate the theoretical optimal sampling. Finally, we conduct an intensive empirical study to demonstrate the advantages of the OSUMC algorithm over existing methods from both statistical and computational perspectives.
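The overall workflow the abstract describes can be sketched as: measure responses on a small pilot only, form covariate-only sampling probabilities, then measure responses on the chosen subsample. The sketch below is a hedged illustration of that pipeline for logistic regression; the particular score $\sqrt{p_i(1-p_i)}\,\lVert x_i\rVert$ is a common A-optimality-motivated heuristic and is my assumption, not necessarily the paper's exact OSUMC distribution.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def response_free_subsample(X, pilot_idx, y_pilot, n_sub, rng):
    """Pilot-based, response-free subsampling sketch for logistic regression."""
    # Step 1: fit a pilot model using the few responses already measured.
    pilot = LogisticRegression().fit(X[pilot_idx], y_pilot)
    p = pilot.predict_proba(X)[:, 1]
    # Step 2: covariate-only sampling scores; the unmeasured responses never
    # enter, so the procedure is response-free in the abstract's sense.
    scores = np.sqrt(p * (1 - p)) * np.linalg.norm(X, axis=1)
    probs = scores / scores.sum()
    # Step 3: draw the subsample (with replacement, as is common in this
    # literature); only these records get their responses measured.
    idx = rng.choice(len(X), size=n_sub, replace=True, p=probs)
    weights = 1.0 / (n_sub * probs[idx])  # inverse-probability weights
    return idx, weights

# Downstream: measure y on the chosen records, then fit a weighted GLM, e.g.
# LogisticRegression().fit(X[idx], y_measured, sample_weight=weights).
```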
James Scott, Axel Gandy (2018)
This paper introduces new efficient algorithms for two problems: sampling conditional on vertex degrees in unweighted graphs, and sampling conditional on vertex strengths in weighted graphs. The algorithms can sample conditional on the presence or absence of an arbitrary number of edges. The resulting conditional distributions provide the basis for exact tests. Existing samplers based on MCMC or sequential importance sampling are generally not scalable; their efficiency degrades in sparse graphs. MCMC methods usually require explicit computation of a Markov basis to navigate the complex state space; this is computationally intensive even for small graphs. We use state-dependent kernel selection to develop new MCMC samplers. These do not require a Markov basis, and are efficient both in sparse and dense graphs. The key idea is to intelligently select a Markov kernel on the basis of the current state of the chain. We apply our methods to testing hypotheses on a real network and contingency table. The algorithms appear orders of magnitude more efficient than existing methods in the test cases considered.
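For context, here is a minimal sketch of the classical double-edge-swap MCMC for sampling simple graphs conditional on vertex degrees. This is the standard baseline move in this space, not the paper's state-dependent kernel selection, and the edge representation is an illustrative assumption.

```python
import random

def double_edge_swap(edges, n_swaps, rng=random):
    """Degree-preserving MCMC on a simple undirected graph.

    `edges` is a set of frozenset({u, v}) pairs. Each accepted move rewires
    edges (a, b), (c, d) into (a, c), (b, d), which leaves every vertex
    degree unchanged.
    """
    for _ in range(n_swaps):
        (a, b), (c, d) = (tuple(e) for e in rng.sample(list(edges), 2))
        if rng.random() < 0.5:
            c, d = d, c  # pick one of the two possible rewirings at random
        # Reject moves that would create self-loops or multi-edges, keeping
        # the chain on the space of simple graphs with the given degrees.
        if len({a, b, c, d}) < 4:
            continue
        e1, e2 = frozenset((a, c)), frozenset((b, d))
        if e1 in edges or e2 in edges:
            continue
        edges -= {frozenset((a, b)), frozenset((c, d))}
        edges |= {e1, e2}
    return edges
```

In sparse graphs many proposed swaps are rejected, which is exactly the inefficiency that motivates choosing the Markov kernel adaptively based on the current state, as the paper proposes.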