Efficient sampling of complex high-dimensional probability densities is a central task in computational science. Machine-learning techniques based on autoregressive neural networks have recently been shown to provide good approximations of probability distributions of interest in physics. In this work, we propose a systematic way to remove the intrinsic bias associated with these variational approximations, combining them with Markov-chain Monte Carlo in an automatic scheme to efficiently generate cluster updates; this is particularly useful for models for which no efficient cluster update scheme is known. Our approach is based on symmetry-enforced cluster updates building on the neural-network representation of conditional probabilities. We demonstrate that such finite-cluster updates are crucial to circumvent ergodicity problems associated with global neural updates. We test our method on first- and second-order phase transitions in classical spin systems, demonstrating in particular its viability for critical systems and in the presence of metastable states.
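To make the bias-removal step concrete, here is a minimal sketch (not the authors' implementation) of an independence Metropolis-Hastings step in which configurations proposed by a variational model q are accepted with probability min(1, p(x')q(x) / (p(x)q(x'))), so the chain targets the exact Boltzmann distribution p even when q is only approximate. The factorized Bernoulli model and the 1D Ising parameters below are illustrative stand-ins for a trained autoregressive network.

```python
# Sketch: removing variational bias with a Metropolis-Hastings correction.
import numpy as np

rng = np.random.default_rng(0)
L, beta = 16, 0.4          # 1D Ising chain length, inverse temperature (toy choices)

def energy(s):             # nearest-neighbour Ising energy, periodic boundary
    return -np.sum(s * np.roll(s, 1))

theta = np.full(L, 0.5)    # stand-in for learned autoregressive conditionals

def sample_q():            # draw spins in {-1,+1} and their log-probability under q
    u = rng.random(L)
    s = np.where(u < theta, 1, -1)
    logq = np.sum(np.where(s == 1, np.log(theta), np.log(1 - theta)))
    return s, logq

s, logq = sample_q()
accepted = 0
for _ in range(10_000):
    s_new, logq_new = sample_q()
    # log acceptance ratio: [-beta*E(x') - log q(x')] - [-beta*E(x) - log q(x)]
    log_a = (-beta * energy(s_new) - logq_new) - (-beta * energy(s) - logq)
    if np.log(rng.random()) < log_a:
        s, logq = s_new, logq_new
        accepted += 1
print("acceptance rate:", accepted / 10_000)
```

The better q approximates the Boltzmann distribution, the closer the acceptance rate gets to one, but the chain is unbiased for any q with full support.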
We propose a method for solving statistical mechanics problems defined on sparse graphs. It extracts a small feedback vertex set (FVS) from the sparse graph, converting the sparse system into a much smaller system with many-body and dense interactions.
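A minimal sketch of the graph-side step, assuming a simple greedy heuristic rather than the paper's specific construction: prune tree-like vertices, then repeatedly move the highest-degree remaining vertex into the FVS until the residual graph is a forest, so that only the FVS variables carry the effective dense interactions.

```python
# Sketch: greedy feedback vertex set extraction (illustrative heuristic).
from collections import deque

def greedy_fvs(adj):
    """adj: dict vertex -> set of neighbours (undirected). Returns an FVS."""
    adj = {v: set(nb) for v, nb in adj.items()}
    fvs = set()

    def prune():               # repeatedly delete degree <= 1 vertices (never in a cycle)
        q = deque(v for v in adj if len(adj[v]) <= 1)
        while q:
            v = q.popleft()
            if v not in adj:
                continue
            for u in adj.pop(v):
                adj[u].discard(v)
                if len(adj[u]) <= 1:
                    q.append(u)

    prune()
    while adj:                 # every remaining vertex lies on some cycle
        v = max(adj, key=lambda u: len(adj[u]))
        for u in adj.pop(v):
            adj[u].discard(v)
        fvs.add(v)
        prune()
    return fvs

# Usage: a 4-cycle plus a pendant edge needs exactly one feedback vertex.
graph = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {2, 0, 4}, 4: {3}}
print(greedy_fvs(graph))       # one cycle-breaking vertex, e.g. {0}
```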
We design generative neural networks that produce Monte Carlo configurations with a complete absence of autocorrelation and from which physical observables can be measured directly, irrespective of whether the system is located at the classical critical point.
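The practical payoff of autocorrelation-free samples is that error bars become elementary: with i.i.d. configurations the standard error is simply std/sqrt(N), with no integrated autocorrelation time to estimate. A minimal sketch, with a Gaussian generator as a hypothetical stand-in for a trained network:

```python
# Sketch: direct observable measurement from i.i.d. generative samples.
import numpy as np

rng = np.random.default_rng(1)
samples = rng.normal(size=(10_000, 64))   # stand-in: N i.i.d. configurations

def measure(observable, samples):
    vals = observable(samples)
    # i.i.d. samples: standard error needs no autocorrelation correction
    return vals.mean(), vals.std(ddof=1) / np.sqrt(len(vals))

mean, err = measure(lambda s: s.mean(axis=1), samples)  # toy "magnetization"
print(f"<M> = {mean:.4f} +/- {err:.4f}")
```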
Population annealing is a recent addition to the arsenal of the practitioner in computer simulations in statistical physics and beyond, found to deal well with systems with complex free-energy landscapes. Above all else, it promises to deliver unrivaled parallel scaling qualities.
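For orientation, a minimal sketch of the population annealing loop on a toy 1D Ising chain (all parameter choices are illustrative): at each temperature step the population is resampled with weights exp(-(beta' - beta) * E) and then re-equilibrated with Metropolis sweeps. The resampling is embarrassingly parallel across replicas, which is the source of the scaling qualities mentioned above.

```python
# Sketch: population annealing on a toy 1D Ising chain.
import numpy as np

rng = np.random.default_rng(2)
R, L = 200, 16                           # population size, chain length

def energy(s):
    return -np.sum(s * np.roll(s, 1))

def metropolis_sweep(s, beta):           # single-spin-flip updates, in place
    for i in rng.integers(0, L, size=L):
        dE = 2 * s[i] * (s[i - 1] + s[(i + 1) % L])
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            s[i] = -s[i]

pop = rng.choice([-1, 1], size=(R, L))   # start from beta = 0 (random spins)
betas = np.linspace(0.0, 1.0, 21)
for beta_old, beta_new in zip(betas[:-1], betas[1:]):
    E = np.array([energy(s) for s in pop])
    w = np.exp(-(beta_new - beta_old) * E)
    w /= w.sum()
    pop = pop[rng.choice(R, size=R, p=w)]   # multinomial resampling
    for s in pop:                           # re-equilibrate at the new temperature
        metropolis_sweep(s, beta_new)

print("mean energy at beta=1:", np.mean([energy(s) for s in pop]))
```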
Gauge invariance plays a crucial role in quantum mechanics, from condensed matter physics to high-energy physics. We develop an approach to constructing gauge-invariant autoregressive neural networks for quantum lattice models. These networks can be efficiently sampled and explicitly obey gauge symmetries.
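As a generic illustration of symmetry enforcement (the paper's gauge construction is more elaborate), one can symmetrize a model distribution over a discrete group. The sketch below averages a toy factorized model over the global Z2 spin flip, so that p_sym(s) = [p(s) + p(-s)] / 2 is exactly invariant by construction.

```python
# Sketch: enforcing a discrete symmetry by group-averaging a model distribution.
import numpy as np

theta = np.array([0.9, 0.2, 0.7, 0.4])   # hypothetical factorized conditionals

def log_p(s):                             # s in {-1,+1}^4
    return np.sum(np.where(s == 1, np.log(theta), np.log(1 - theta)))

def log_p_sym(s):                         # symmetrize over {identity, global flip}
    a, b = log_p(s), log_p(-s)
    m = max(a, b)                         # log-sum-exp for numerical stability
    return m + np.log(0.5 * (np.exp(a - m) + np.exp(b - m)))

s = np.array([1, -1, 1, 1])
print(np.isclose(log_p_sym(s), log_p_sym(-s)))   # True: invariance holds
```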
We present a general-purpose method to train Markov chain Monte Carlo kernels, parameterized by deep neural networks, that converge and mix quickly to their target distribution. Our method generalizes Hamiltonian Monte Carlo and is trained to maximize the expected squared jumped distance, a proxy for mixing speed.
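A minimal sketch of the training signal named here: the expected squared jumped distance, ESJD = E[a(x, x') * ||x' - x||^2], where a is the Metropolis-Hastings acceptance probability. The simple Gaussian random-walk kernel and toy target below are stand-ins; a real implementation would backpropagate through this quantity into the kernel's network parameters.

```python
# Sketch: expected squared jumped distance as a mixing-speed objective.
import numpy as np

rng = np.random.default_rng(3)

def log_p(x):                         # toy target: standard normal, batched
    return -0.5 * np.sum(x**2, axis=-1)

def esjd(x, step):                    # batch of chains, symmetric proposal
    x_new = x + step * rng.normal(size=x.shape)
    accept = np.minimum(1.0, np.exp(log_p(x_new) - log_p(x)))
    return np.mean(accept * np.sum((x_new - x) ** 2, axis=-1))

x = rng.normal(size=(1024, 2))
for step in (0.1, 0.5, 2.5, 10.0):    # large steps help only while acceptance stays high
    print(f"step={step:5.1f}  ESJD={esjd(x, step):.3f}")
```

The trade-off the objective captures is visible in the output: tiny steps are always accepted but barely move, huge steps move far but are almost never accepted, and the ESJD peaks in between.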