
The No-U-Turn Sampler as a Proposal Distribution in a Sequential Monte Carlo Sampler with a Near-Optimal L-Kernel

Submitted by: Lee Devlin
Publication date: 2021
Research field: Mathematical Statistics
Paper language: English





Markov Chain Monte Carlo (MCMC) is a powerful method for drawing samples from non-standard probability distributions and is utilised across many fields and disciplines. Methods such as the Metropolis-Adjusted Langevin Algorithm (MALA) and Hamiltonian Monte Carlo (HMC), which use gradient information to explore the target distribution, are popular variants of MCMC. The Sequential Monte Carlo (SMC) sampler is an alternative sampling method which, unlike MCMC, can readily utilise parallel computing architectures and also has tuning parameters not available to MCMC. One such parameter is the L-kernel, which can be used to minimise the variance of the estimates from an SMC sampler. In this letter, we show how the proposal used in the No-U-Turn Sampler (NUTS), an advanced variant of HMC, can be incorporated into an SMC sampler to improve the efficiency of the exploration of the target space. We also show how the SMC sampler can be optimised using both a near-optimal L-kernel and a Hamiltonian proposal.
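Below is a minimal, illustrative sketch (not the authors' implementation) of the idea described in the abstract: an SMC sampler whose particles are moved by a gradient-based Hamiltonian (leapfrog) proposal, with the L-kernel approximated by a Gaussian fitted to the particle population. The target density, step sizes, and the Gaussian L-kernel construction are assumptions made for this example; NUTS itself adds adaptive trajectory lengths that are omitted here.

```python
# Minimal sketch: SMC sampler with a Hamiltonian (leapfrog) proposal and a
# Gaussian-approximation L-kernel. All targets and tuning values are illustrative.
import numpy as np
from scipy.stats import multivariate_normal

def log_target(x):
    # Example target: standard 2-D Gaussian (placeholder for a non-standard density).
    return -0.5 * np.sum(x**2, axis=-1)

def grad_log_target(x):
    return -x

def leapfrog(x, p, step, n_steps):
    # Deterministic Hamiltonian dynamics used as the SMC proposal move.
    p = p + 0.5 * step * grad_log_target(x)
    for _ in range(n_steps - 1):
        x = x + step * p
        p = p + step * grad_log_target(x)
    x = x + step * p
    p = p + 0.5 * step * grad_log_target(x)
    return x, p

def smc_hamiltonian(n_particles=500, dim=2, n_iters=10, step=0.2, n_steps=10, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.normal(size=(n_particles, dim))           # initial particles from q0 = N(0, I)
    logw = log_target(x) - multivariate_normal(np.zeros(dim)).logpdf(x)
    w = np.full(n_particles, 1.0 / n_particles)
    for _ in range(n_iters):
        x_old = x.copy()
        p0 = rng.normal(size=(n_particles, dim))       # momenta drawn from N(0, I)
        x, _ = leapfrog(x_old, p0, step, n_steps)
        # Forward proposal density of the volume-preserving move x_old -> x is the
        # density of the momentum draw p0.
        log_q = multivariate_normal(np.zeros(dim)).logpdf(p0)
        # Near-optimal L-kernel (illustrative): fit a Gaussian to the new particles
        # and use its density at x_old as L(x_old | x); the construction in the paper
        # is more refined (a conditional of a joint Gaussian fit over particle pairs).
        mu, cov = x.mean(axis=0), np.cov(x.T) + 1e-6 * np.eye(dim)
        log_L = multivariate_normal(mu, cov).logpdf(x_old)
        logw = logw + log_target(x) - log_target(x_old) + log_L - log_q
        # Normalise weights and resample when the effective sample size collapses.
        w = np.exp(logw - logw.max()); w /= w.sum()
        if 1.0 / np.sum(w**2) < n_particles / 2:
            idx = rng.choice(n_particles, n_particles, p=w)
            x = x[idx]
            logw = np.zeros(n_particles)
            w = np.full(n_particles, 1.0 / n_particles)
    return x, w

if __name__ == "__main__":
    samples, weights = smc_hamiltonian()
    print("weighted posterior mean:", np.average(samples, axis=0, weights=weights))
```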




Read also

Key to any cosmic microwave background (CMB) analysis is the separation of the CMB from foreground contaminants. In this paper we present a novel implementation of Bayesian CMB component separation. We sample from the full posterior distribution using the No-U-Turn Sampler (NUTS), a gradient-based sampling algorithm. Alongside this, we introduce new foreground modelling approaches. We use the mean-shift algorithm to define regions on the sky, clustering according to naively estimated foreground spectral parameters. Over these regions we adopt a complete pooling model, where we assume constant spectral parameters, and a hierarchical model, where we model individual spectral parameters as being drawn from underlying hyper-distributions. We validate the algorithm against simulations of the LiteBIRD and C-BASS experiments, with an input tensor-to-scalar ratio of $r=5\times 10^{-3}$. Considering multipoles $32\leq\ell\leq 121$, we are able to recover estimates for $r$. With LiteBIRD-only observations, and using the complete pooling model, we recover $r=(10\pm 0.6)\times 10^{-3}$. For C-BASS and LiteBIRD observations we find $r=(7.0\pm 0.6)\times 10^{-3}$ using the complete pooling model, and $r=(5.0\pm 0.4)\times 10^{-3}$ using the hierarchical model. By adopting the hierarchical model we are able to eliminate biases in our cosmological parameter estimation, and obtain lower uncertainties due to the smaller Galactic emission mask that can be adopted for power spectrum estimation. Measured by the rate of effective sample generation, NUTS offers performance improvements of $\sim 10^3$ over using Metropolis-Hastings to fit the complete pooling model. The efficiency of NUTS allows us to fit the more sophisticated hierarchical foreground model, which would likely be intractable with non-gradient-based sampling algorithms.
In this article, we derive a novel non-reversible, continuous-time Markov chain Monte Carlo (MCMC) sampler, called the Coordinate Sampler, based on a piecewise deterministic Markov process (PDMP), which can be seen as a variant of the Zigzag sampler. In addition to providing a theoretical validation of this new sampling algorithm, we show that the Markov chain it induces is geometrically ergodic for distributions whose tails decay at least as fast as an exponential distribution and at most as fast as a Gaussian distribution. Several numerical examples highlight that our Coordinate Sampler is more efficient than the Zigzag sampler in terms of effective sample size.
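As a point of reference for the comparison above, the following is a small sketch of the Zigzag sampler (the PDMP baseline the Coordinate Sampler is measured against), specialised to a standard Gaussian target so that the event times can be simulated exactly; the target choice and the fixed-grid discretisation of the trajectory are assumptions for illustration only.

```python
# Illustrative Zigzag sampler for a standard Gaussian target with exact event times.
import numpy as np

def zigzag_gaussian(dim=2, t_max=200.0, grid_dt=0.1, seed=0):
    rng = np.random.default_rng(seed)
    x = np.zeros(dim)
    v = rng.choice([-1.0, 1.0], size=dim)     # zigzag velocities are +/- 1 per coordinate
    t, samples, next_grid = 0.0, [], 0.0
    while t < t_max:
        # For pi = N(0, I), the switching rate of coordinate i along the path is
        # lambda_i(s) = max(0, a_i + s) with a_i = v_i * x_i; inverting its integral
        # against an Exp(1) draw gives the exact first event time for each coordinate.
        a = v * x
        e = rng.exponential(size=dim)
        tau = np.where(a > 0, -a + np.sqrt(a**2 + 2 * e), -a + np.sqrt(2 * e))
        i = np.argmin(tau)                     # first coordinate to switch
        # Record the piecewise-linear trajectory on a fixed time grid.
        while next_grid < t + tau[i]:
            samples.append(x + v * (next_grid - t))
            next_grid += grid_dt
        x = x + v * tau[i]
        v[i] = -v[i]                           # flip the velocity of the switching coordinate
        t += tau[i]
    return np.array(samples)

if __name__ == "__main__":
    s = zigzag_gaussian()
    print("sample mean:", s.mean(axis=0), "sample variance:", s.var(axis=0))
```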
The Bouncy Particle Sampler is a Markov chain Monte Carlo method based on a non-reversible piecewise deterministic Markov process. In this scheme, a particle explores the state space of interest by evolving according to linear dynamics, which are altered by bouncing on the hyperplane tangent to the gradient of the negative log-target density at the arrival times of an inhomogeneous Poisson process (PP) and by randomly perturbing its velocity at the arrival times of a homogeneous PP. Under regularity conditions, we show here that the process corresponding to the first component of the particle and its corresponding velocity converges weakly towards a Randomized Hamiltonian Monte Carlo (RHMC) process as the dimension of the ambient space goes to infinity. RHMC is another piecewise deterministic non-reversible Markov process in which Hamiltonian dynamics are altered at the arrival times of a homogeneous PP by randomly perturbing the momentum component. We then establish dimension-free convergence rates for RHMC for strongly log-concave targets with bounded Hessians using coupling ideas and hypocoercivity techniques.
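For orientation, here is a brief sketch (not drawn from the paper) of the two velocity updates that define the Bouncy Particle Sampler: the bounce, which reflects the velocity in the hyperplane tangent to the gradient of the negative log-target, and the refreshment, which redraws the velocity. Simulation of the Poisson event times is omitted, and the example target is an assumption.

```python
# Illustrative velocity updates of the Bouncy Particle Sampler.
import numpy as np

def bounce(v, grad_U):
    # Specular reflection of v against the hyperplane orthogonal to grad U(x).
    g = grad_U
    return v - 2.0 * (v @ g) / (g @ g) * g

def refresh(dim, rng):
    # Velocity refreshment at the arrival times of a homogeneous Poisson process.
    return rng.normal(size=dim)

# Example: bounce off the gradient of U(x) = ||x||^2 / 2 (standard Gaussian target),
# for which grad U(x) = x.
rng = np.random.default_rng(0)
x, v = np.array([1.0, 0.5]), np.array([0.3, -1.2])
print("bounced velocity:", bounce(v, x))
print("refreshed velocity:", refresh(2, rng))
```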
The self-learning Metropolis-Hastings algorithm is a powerful Monte Carlo method that, with the help of machine learning, adaptively generates an easy-to-sample probability distribution for approximating a given hard-to-sample distribution. This paper provides a new self-learning Monte Carlo method that utilizes a quantum computer to output a proposal distribution. In particular, we show a novel subclass of this general scheme based on the quantum Fourier transform circuit; this sampler is classically simulable while having a certain advantage over conventional methods. The performance of this quantum-inspired algorithm is demonstrated by numerical simulations.
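The role the learned (here, quantum-circuit-generated) distribution plays above is that of an independence proposal inside Metropolis-Hastings; the generic acceptance step it enters is sketched below, with a Gaussian standing in for the learned proposal purely for illustration.

```python
# Metropolis-Hastings with an independence proposal q, the slot a learned
# (self-learning) proposal distribution occupies. The Gaussian q is a stand-in.
import numpy as np

def mh_with_learned_proposal(log_target, sample_q, log_q, n_steps, x0, rng):
    x, samples = x0, []
    for _ in range(n_steps):
        x_prop = sample_q(rng)
        # Acceptance ratio for an independence proposal:
        #   alpha = min(1, pi(x') q(x) / (pi(x) q(x')))
        log_alpha = (log_target(x_prop) - log_target(x)) + (log_q(x) - log_q(x_prop))
        if np.log(rng.uniform()) < log_alpha:
            x = x_prop
        samples.append(x)
    return np.array(samples)

# Example: a hard-to-sample 1-D target approximated by a broad Gaussian proposal.
rng = np.random.default_rng(0)
log_target = lambda x: -0.5 * (x - 2.0) ** 2 - 0.1 * x ** 4   # illustrative target
sample_q = lambda rng: rng.normal(0.0, 3.0)
log_q = lambda x: -0.5 * (x / 3.0) ** 2                        # constants cancel in the ratio
chain = mh_with_learned_proposal(log_target, sample_q, log_q, 5000, 0.0, rng)
print("posterior mean estimate:", chain.mean())
```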
Markov Chain Monte Carlo (MCMC) methods sample from unnormalized probability distributions and offer guarantees of exact sampling. However, in the continuous case, unfavorable geometry of the target distribution can greatly limit the efficiency of MCMC methods. Augmenting samplers with neural networks can potentially improve their efficiency. Previous neural-network-based samplers were trained with objectives that either did not explicitly encourage exploration, or used an L2 jump objective that could only be applied to well-structured distributions. It therefore seems promising to instead maximize the proposal entropy, adapting the proposal to distributions of any shape. To allow direct optimization of the proposal entropy, we propose a neural network MCMC sampler that has a flexible and tractable proposal distribution. Specifically, our network architecture utilizes the gradient of the target distribution for generating proposals. Our model achieves significantly higher efficiency than previous neural network MCMC techniques in a variety of sampling tasks. Further, the sampler is applied to the training of a convergent energy-based model of natural images. The adaptive sampler achieves unbiased sampling with significantly higher proposal entropy than a Langevin dynamics sampler.
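For context, the Langevin dynamics baseline mentioned at the end of this abstract corresponds to the Metropolis-Adjusted Langevin Algorithm sketched below, whose proposal is built from the target's gradient; the quadratic target and step size are assumptions for this example.

```python
# Metropolis-Adjusted Langevin Algorithm (MALA): gradient-based proposal plus MH correction.
import numpy as np

def mala(log_target, grad_log_target, x0, step, n_steps, rng):
    def log_q(x_to, x_from):
        # Gaussian proposal density centred at the Langevin drift from x_from
        # (normalising constant omitted, as it cancels in the acceptance ratio).
        mu = x_from + 0.5 * step**2 * grad_log_target(x_from)
        return -np.sum((x_to - mu) ** 2) / (2.0 * step**2)
    x, samples = np.asarray(x0, dtype=float), []
    for _ in range(n_steps):
        x_prop = x + 0.5 * step**2 * grad_log_target(x) + step * rng.normal(size=x.shape)
        log_alpha = (log_target(x_prop) + log_q(x, x_prop)) - (log_target(x) + log_q(x_prop, x))
        if np.log(rng.uniform()) < log_alpha:
            x = x_prop
        samples.append(x.copy())
    return np.array(samples)

rng = np.random.default_rng(1)
chain = mala(lambda x: -0.5 * np.sum(x**2), lambda x: -x, np.zeros(2), 0.5, 5000, rng)
print("chain mean:", chain.mean(axis=0))
```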
