
Curvature and Concentration of Hamiltonian Monte Carlo in High Dimensions

Publication date: 2014
Language: English





In this article, we analyze Hamiltonian Monte Carlo (HMC) by placing it in the setting of Riemannian geometry using the Jacobi metric, so that each step corresponds to a geodesic on a suitable Riemannian manifold. We then combine the notion of curvature of a Markov chain due to Joulin and Ollivier with the classical sectional curvature from Riemannian geometry to derive error bounds for HMC in important cases where the curvature is positive. These cases include several classical distributions, such as multivariate Gaussians, as well as distributions arising in the study of Bayesian image registration. The theoretical development suggests sectional curvature as a new diagnostic tool for the convergence of certain Markov chains.
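
For orientation, the following is a minimal sketch of the standard HMC transition that the article studies, not its Riemannian-geometric construction; the leapfrog step size, the number of leapfrog steps, and the diagonal Gaussian target are illustrative assumptions.

```python
import numpy as np

def hmc_step(q, log_p, grad_log_p, eps=0.1, n_leapfrog=20, rng=None):
    """One standard HMC transition: momentum refresh, leapfrog, accept/reject."""
    rng = rng if rng is not None else np.random.default_rng()
    p = rng.standard_normal(q.shape)              # resample Gaussian momentum
    q_new, p_new = q.copy(), p.copy()
    p_new += 0.5 * eps * grad_log_p(q_new)        # half step for momentum
    for step in range(n_leapfrog):
        q_new += eps * p_new                      # full step for position
        grad = grad_log_p(q_new)
        if step < n_leapfrog - 1:
            p_new += eps * grad                   # full step for momentum
    p_new += 0.5 * eps * grad                     # final half step for momentum
    # Metropolis correction using H(q, p) = -log_p(q) + |p|^2 / 2
    h_old = -log_p(q) + 0.5 * p @ p
    h_new = -log_p(q_new) + 0.5 * p_new @ p_new
    return q_new if np.log(rng.uniform()) < h_old - h_new else q

# Example target: a multivariate Gaussian, one of the positively curved cases above.
prec = np.diag([1.0, 4.0, 9.0])                   # assumed precision matrix
log_p = lambda q: -0.5 * q @ prec @ q
grad_log_p = lambda q: -prec @ q
q = np.zeros(3)
for _ in range(1000):
    q = hmc_step(q, log_p, grad_log_p)
```

The geometric reading rests on the Jacobi--Maupertuis correspondence: for a Hamiltonian $H(q,p) = U(q) + \tfrac{1}{2}|p|^2$ restricted to an energy level $h$, trajectories of the Hamiltonian flow are reparametrized geodesics of the Jacobi metric $g_J = 2(h - U(q))\,\langle\cdot,\cdot\rangle$, which is how each HMC step becomes a geodesic segment.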



Related research

Jakiw Pidstrigach (2020)
In this article, we consider the preconditioned Hamiltonian Monte Carlo (pHMC) algorithm defined directly on an infinite-dimensional Hilbert space. In this context, and under a condition reminiscent of strong log-concavity of the target measure, we prove convergence bounds for adjusted pHMC in the standard 1-Wasserstein distance. The arguments rely on a synchronous coupling of two copies of pHMC, which is controlled by adapting elements from arXiv:1805.00452.
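
As a rough, finite-dimensional sketch of the coupling argument, not the paper's preconditioned Hilbert-space algorithm: two adjusted HMC chains for a strongly log-concave Gaussian target are driven by identical momentum draws and identical accept/reject variables (a synchronous coupling), and the decay of their distance is a crude empirical proxy for the 1-Wasserstein contraction; the target, step size, and trajectory length are assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
d, eps, n_leap, n_iter = 10, 0.2, 5, 300
C_inv = np.diag(np.linspace(1.0, 4.0, d))      # assumed Gaussian precision matrix

def grad_log_p(q):                              # gradient of log N(0, C)
    return -C_inv @ q

def leapfrog(q, p):
    p = p + 0.5 * eps * grad_log_p(q)
    for step in range(n_leap):
        q = q + eps * p
        if step < n_leap - 1:
            p = p + eps * grad_log_p(q)
    p = p + 0.5 * eps * grad_log_p(q)
    return q, p

def hamiltonian(q, p):
    return 0.5 * q @ C_inv @ q + 0.5 * p @ p

x = rng.standard_normal(d) + 5.0                # two chains started far apart
y = rng.standard_normal(d) - 5.0
for t in range(n_iter):
    p = rng.standard_normal(d)                  # shared momentum draw
    u = np.log(rng.uniform())                   # shared accept/reject variable
    xq, xp = leapfrog(x, p)
    yq, yp = leapfrog(y, p)
    if u < hamiltonian(x, p) - hamiltonian(xq, xp):
        x = xq
    if u < hamiltonian(y, p) - hamiltonian(yq, yp):
        y = yq
    if t % 100 == 0:
        print(f"iter {t:3d}  coupling distance {np.linalg.norm(x - y):.4f}")
```
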
The iterated conditional sequential Monte Carlo (i-CSMC) algorithm from Andrieu, Doucet and Holenstein (2010) is an MCMC approach for efficiently sampling from the joint posterior distribution of the $T$ latent states in challenging time-series models, e.g. in non-linear or non-Gaussian state-space models. It is also the main ingredient in particle Gibbs samplers, which infer unknown model parameters alongside the latent states. In this work, we first prove that the i-CSMC algorithm suffers from a curse of dimension in the dimension of the states, $D$: it breaks down unless the number of samples (particles), $N$, proposed by the algorithm grows exponentially with $D$. Then, we present a novel local version of the algorithm which proposes particles using Gaussian random-walk moves that are suitably scaled with $D$. We prove that this iterated random-walk conditional sequential Monte Carlo (i-RW-CSMC) algorithm avoids the curse of dimension: for arbitrary $N$, its acceptance rates and expected squared jumping distance converge to non-trivial limits as $D \to \infty$. If $T = N = 1$, our proposed algorithm reduces to a Metropolis--Hastings or Barker's algorithm with Gaussian random-walk moves, and we recover the well-known scaling limits for such algorithms.
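
The dimension scaling invoked at the end of that abstract can be seen in a toy experiment. The sketch below is plain Metropolis--Hastings with Gaussian random-walk proposals whose step size shrinks like $D^{-1/2}$ on a product of standard normals (it is not the i-RW-CSMC algorithm); the proposal constant 2.38, the target, and the iteration counts are illustrative assumptions. The acceptance rate settles near a non-trivial value as $D$ grows instead of collapsing to zero.

```python
import numpy as np

rng = np.random.default_rng(2)
ell = 2.38                                        # assumed proposal scale constant
for D in (10, 100, 1000):
    x = rng.standard_normal(D)
    accepts, n_iter = 0, 2000
    for _ in range(n_iter):
        prop = x + (ell / np.sqrt(D)) * rng.standard_normal(D)
        log_ratio = -0.5 * (prop @ prop - x @ x)  # log N(0, I) density ratio
        if np.log(rng.uniform()) < log_ratio:
            x, accepts = prop, accepts + 1
    print(f"D = {D:5d}  acceptance rate ~ {accepts / n_iter:.2f}")
```
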
Probabilistic programming uses programs to express generative models whose posterior probability is then computed by built-in inference engines. A challenging goal is to develop general-purpose inference algorithms that work out-of-the-box for arbitrary programs in a universal probabilistic programming language (PPL). The densities defined by such programs, which may use stochastic branching and recursion, are (in general) nonparametric, in the sense that they correspond to models on an infinite-dimensional parameter space. However, standard inference algorithms, such as the Hamiltonian Monte Carlo (HMC) algorithm, target distributions with a fixed number of parameters. This paper introduces the Nonparametric Hamiltonian Monte Carlo (NP-HMC) algorithm, which generalises HMC to nonparametric models. Inputs to NP-HMC are a new class of measurable functions called tree-representable, which serve as a language-independent representation of the density functions of probabilistic programs in a universal PPL. We provide a correctness proof of NP-HMC, and empirically demonstrate significant performance improvements over existing approaches on several nonparametric examples.
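
A toy illustration of why such programs fall outside fixed-dimension samplers, not an implementation of NP-HMC: the hypothetical generative program below has a random number of latent components because of stochastic branching, so its density lives on a parameter space without a fixed dimension.

```python
import numpy as np

def generative_program(rng):
    # Stochastic branching: keep drawing latent variables while a coin comes up heads,
    # so the dimension of the latent state is itself random.
    xs = []
    while rng.uniform() < 0.5:
        xs.append(rng.standard_normal())
    return xs

rng = np.random.default_rng(3)
print([len(generative_program(rng)) for _ in range(5)])   # varying latent dimensions
```
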
Zhixin Zhou, Yizhe Zhu (2019)
We prove a non-asymptotic concentration inequality for the spectral norm of sparse inhomogeneous random tensors with Bernoulli entries. For an order-$k$ inhomogeneous random tensor $T$ with sparsity $p_{\max} \geq \frac{c \log n}{n}$, we show that $\|T - \mathbb{E} T\| = O(\sqrt{n p_{\max}} \log^{k-2}(n))$ with high probability. The optimality of this bound up to polylog factors is provided by an information theoretic lower bound. By tensor unfolding, we extend the range of sparsity to $p_{\max} \geq \frac{c \log n}{n^{m}}$ with $1 \leq m \leq k-1$ and obtain concentration inequalities for different sparsity regimes. We also provide a simple way to regularize $T$ such that $O(\sqrt{n^{m} p_{\max}})$ concentration still holds down to sparsity $p_{\max} \geq \frac{c}{n^{m}}$ with $k/2 \leq m \leq k-1$. We present our concentration and regularization results with two applications: (i) a randomized construction of hypergraphs of bounded degrees with good expander mixing properties, (ii) concentration of sparsified tensors under uniform sampling.
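
As an empirical illustration only, not the paper's proof or regularization procedure: the sketch below draws an order-3 inhomogeneous Bernoulli tensor with entrywise probabilities bounded by $p_{\max}$, unfolds the centered tensor along its first mode, and compares the resulting spectral norm with the scale $\sqrt{n^{m} p_{\max}}$ for $m = k-1 = 2$; the choices of $n$, $k$, and the uniform probability profile are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 60, 3
p_max = np.log(n) / n                          # sparsity level p_max >= c*log(n)/n
P = rng.uniform(0.0, p_max, size=(n, n, n))    # inhomogeneous entry probabilities
T = rng.binomial(1, P).astype(float)           # Bernoulli tensor entries

centered = T - P                               # T - E[T]
unfolded = centered.reshape(n, n * n)          # mode-1 unfolding, n x n^2 matrix
spec_norm = np.linalg.norm(unfolded, ord=2)    # largest singular value

print(f"||unfold(T - E T)||_2 = {spec_norm:.2f}")
print(f"sqrt(n^2 * p_max)     = {np.sqrt(n**2 * p_max):.2f}")
```
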
We present a method for performing Hamiltonian Monte Carlo that largely eliminates sample rejection for typical hyperparameters. In situations that would normally lead to rejection, a longer trajectory is instead computed until a new state is reached that can be accepted. This is achieved using Markov chain transitions that satisfy the fixed point equation but do not satisfy detailed balance. The resulting algorithm significantly suppresses the random-walk behavior and wasted function evaluations that are typically the consequence of update rejection. We demonstrate an improvement in mixing time of more than a factor of two on three test problems. We release the source code as Python and MATLAB packages.
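
For context, the distinction drawn above can be stated with standard definitions (these are not statements taken from the paper). The fixed point (global balance) condition is all that is needed for the target $\pi$ to remain invariant under a transition kernel $K$: $\pi(x') = \int \pi(x)\, K(x, x')\, \mathrm{d}x$ for all $x'$. Detailed balance (reversibility) is the strictly stronger pointwise condition $\pi(x)\, K(x, x') = \pi(x')\, K(x', x)$ for all $x, x'$.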