
Unbiased Monte Carlo Cluster Updates with Autoregressive Neural Networks

Added by Dian Wu
Publication date: 2021
Field: Physics
Language: English





Efficient sampling of complex high-dimensional probability densities is a central task in computational science. Machine-learning techniques based on autoregressive neural networks have recently been shown to provide good approximations of probability distributions of interest in physics. In this work, we propose a systematic way to remove the intrinsic bias associated with these variational approximations, combining them with Markov-chain Monte Carlo in an automatic scheme to efficiently generate cluster updates; this is particularly useful for models for which no efficient cluster update scheme is known. Our approach is based on symmetry-enforced cluster updates building on the neural-network representation of conditional probabilities. We demonstrate that such finite-cluster updates are crucial to circumvent ergodicity problems associated with global neural updates. We test our method on first- and second-order phase transitions in classical spin systems, proving in particular its viability for critical systems and in the presence of metastable states.
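The bias-removal step described above can be illustrated with a minimal sketch: a variational model serves as the Metropolis-Hastings proposal, and the acceptance step makes the resulting chain sample the exact Boltzmann distribution. This is an illustration only, assuming a toy 1D Ising target and an independent-Bernoulli stand-in for the autoregressive network (the paper's actual networks, conditional-probability clusters, and symmetry enforcement are not reproduced); all function names are hypothetical.

```python
import numpy as np

def ising_log_prob(s, beta=0.4):
    # Unnormalized log-probability of a periodic 1D Ising chain, log p(s) = beta * sum_i s_i s_{i+1}.
    return beta * np.sum(s * np.roll(s, 1))

def proposal_sample(rng, p_up):
    # Stand-in for an autoregressive network: each spin is +1 with probability p_up[i].
    return np.where(rng.random(p_up.shape) < p_up, 1.0, -1.0)

def proposal_log_prob(s, p_up):
    # Log-probability of a configuration under the factorized proposal.
    return np.sum(np.log(np.where(s > 0, p_up, 1.0 - p_up)))

def neural_mh_chain(rng, p_up, n_steps=2000, beta=0.4):
    # Metropolis-Hastings with global proposals drawn from the variational model.
    # The acceptance ratio p(s')q(s) / [p(s)q(s')] removes the model's intrinsic bias.
    s = proposal_sample(rng, p_up)
    samples = []
    for _ in range(n_steps):
        s_new = proposal_sample(rng, p_up)
        log_a = (ising_log_prob(s_new, beta) - ising_log_prob(s, beta)
                 + proposal_log_prob(s, p_up) - proposal_log_prob(s_new, p_up))
        if np.log(rng.random()) < log_a:
            s = s_new
        samples.append(s.copy())
    return np.array(samples)
```

With a well-trained proposal the acceptance rate stays high even for global updates; the finite-cluster variant of the paper instead resamples only a subset of spins from the network's conditional probabilities, which is what restores ergodicity.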




Read More

We propose a method for solving statistical mechanics problems defined on sparse graphs. It extracts a small Feedback Vertex Set (FVS) from the sparse graph, converting the sparse system into a much smaller system with dense many-body interactions and an effective energy for every configuration of the FVS, then learns a variational distribution parameterized by neural networks to approximate the original Boltzmann distribution. The method is able to estimate free energy, compute observables, and generate unbiased samples via direct sampling without auto-correlation. Extensive experiments show that our approach is more accurate than existing approaches for sparse spin glasses. On random graphs and real-world networks, our approach significantly outperforms the standard methods for sparse systems such as the belief-propagation algorithm; on structured sparse systems such as two-dimensional lattices our approach is significantly faster and more accurate than recently proposed variational autoregressive networks using convolutional neural networks.
Hongyu Lu, Chuhao Li, Wei Li (2021)
We design generative neural networks that generate Monte Carlo configurations with a complete absence of autocorrelation and from which direct measurements of physical observables can be made, irrespective of whether the system is at a classical critical point, in a fermionic Mott insulator, in a Dirac semimetal, or at a quantum critical point. We further propose a generic parallel-chain Monte Carlo scheme based on such neural networks, which provides independent samplings and accelerates the Monte Carlo simulations by reducing the thermalization process. We demonstrate the performance of our approach on the two-dimensional Ising and fermion Hubbard models.
Population annealing is a recent addition to the arsenal of the practitioner of computer simulations in statistical physics and beyond, and is found to deal well with systems with complex free-energy landscapes. Above all else, it promises to deliver unrivaled parallel scaling qualities, being suitable for parallel machines of the biggest calibre. Here we study population annealing using as the main example the two-dimensional Ising model, which allows for particularly clean comparisons due to the available exact results and the wealth of published simulational studies employing other approaches. We analyze in depth the accuracy and precision of the method, highlighting its relation to older techniques such as simulated annealing and thermodynamic integration. We introduce intrinsic approaches for the analysis of statistical and systematic errors, and provide a detailed picture of the dependence of such errors on the simulation parameters. The results are benchmarked against canonical and parallel tempering simulations.
Di Luo, Zhuo Chen, Kaiwen Hu (2021)
Gauge invariance plays a crucial role in quantum mechanics from condensed matter physics to high energy physics. We develop an approach to constructing gauge invariant autoregressive neural networks for quantum lattice models. These networks can be efficiently sampled and explicitly obey gauge symmetries. We variationally optimize our gauge invariant autoregressive neural networks for ground states as well as real-time dynamics for a variety of models. We exactly represent the ground and excited states of the 2D and 3D toric codes, and the X-cube fracton model. We simulate the dynamics of the quantum link model of $\text{U(1)}$ lattice gauge theory, obtain the phase diagram for the 2D $\mathbb{Z}_2$ gauge theory, determine the phase transition and the central charge of the $\text{SU(2)}_3$ anyonic chain, and also compute the ground state energy of the $\text{SU(2)}$ invariant Heisenberg spin chain. Our approach provides powerful tools for exploring condensed matter physics, high energy physics and quantum information science.
We present a general-purpose method to train Markov chain Monte Carlo kernels, parameterized by deep neural networks, that converge and mix quickly to their target distribution. Our method generalizes Hamiltonian Monte Carlo and is trained to maximize expected squared jumped distance, a proxy for mixing speed. We demonstrate large empirical gains on a collection of simple but challenging distributions, for instance achieving a 106x improvement in effective sample size in one case, and mixing when standard HMC makes no measurable progress in a second. Finally, we show quantitative and qualitative gains on a real-world task: latent-variable generative modeling. We release an open source TensorFlow implementation of the algorithm.
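The mixing-speed proxy mentioned in this abstract, expected squared jumped distance, can be written down compactly. The sketch below is a minimal NumPy rendering of that objective only, not the paper's released TensorFlow implementation; the function name and argument layout are illustrative assumptions.

```python
import numpy as np

def expected_squared_jumped_distance(x, x_proposed, accept_prob):
    # ESJD: E[ a(x, x') * ||x' - x||^2 ], averaged over a batch of chains.
    # Maximizing this encourages proposals that move far AND get accepted,
    # which is why it serves as a trainable proxy for mixing speed.
    sq_jump = np.sum((x_proposed - x) ** 2, axis=-1)
    return np.mean(accept_prob * sq_jump)
```

In training, the acceptance probability would come from the Metropolis-Hastings ratio of the learned kernel, so the objective is differentiable with respect to the kernel's parameters.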
