
Flow-based sampling for fermionic lattice field theories

Added by Michael Albergo
Publication date: 2021
Field: Physics
Language: English





Algorithms based on normalizing flows are emerging as promising machine learning approaches to sampling complicated probability distributions in a way that can be made asymptotically exact. In the context of lattice field theory, proof-of-principle studies have demonstrated the effectiveness of this approach for scalar theories, gauge theories, and statistical systems. This work develops approaches that enable flow-based sampling of theories with dynamical fermions, which is necessary for the technique to be applied to lattice field theory studies of the Standard Model of particle physics and many condensed matter systems. As a practical demonstration, these methods are applied to the sampling of field configurations for a two-dimensional theory of massless staggered fermions coupled to a scalar field via a Yukawa interaction.
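The asymptotic exactness mentioned above comes from wrapping the flow's proposals in an accept/reject step (independence Metropolis), so that an imperfect model still yields unbiased samples. A minimal one-dimensional sketch of this mechanism, with a hand-tuned affine map standing in for a trained flow and a unit Gaussian standing in for the lattice action (both hypothetical stand-ins, not the paper's setup):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "flow": an affine map of a standard normal, z -> mu + sigma * z.
# In a real lattice application this is a trained neural network acting
# on a full field configuration; mu and sigma here are hand-picked.
mu, sigma = 0.5, 1.2

def flow_sample():
    """Draw x from the model and return (x, log q(x))."""
    z = rng.standard_normal()
    x = mu + sigma * z
    log_q = -0.5 * z**2 - 0.5 * np.log(2 * np.pi) - np.log(sigma)
    return x, log_q

def log_p(x):
    """Unnormalized log-density of the target (a unit Gaussian here)."""
    return -0.5 * x**2

def metropolis_flow(n_steps):
    """Independence Metropolis with flow proposals: asymptotically exact
    even though the flow only approximates the target."""
    x, log_q = flow_sample()
    chain = []
    for _ in range(n_steps):
        x_new, log_q_new = flow_sample()
        # Accept with probability min(1, p(x') q(x) / (p(x) q(x')))
        log_accept = (log_p(x_new) - log_q_new) - (log_p(x) - log_q)
        if np.log(rng.random()) < log_accept:
            x, log_q = x_new, log_q_new
        chain.append(x)
    return np.array(chain)

chain = metropolis_flow(20000)
```

The closer the flow matches the target, the higher the acceptance rate; a poor flow still samples correctly, just with more rejections.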



Related research

Recent results have demonstrated that samplers constructed with flow-based generative models are a promising new approach for configuration generation in lattice field theory. In this paper, we present a set of methods to construct flow models for targets with multiple separated modes (i.e. theories with multiple vacua). We demonstrate the application of these methods to modeling two-dimensional real scalar field theory in its symmetry-broken phase. In this context we investigate the performance of different flow-based sampling algorithms, including a composite sampling algorithm where flow-based proposals are occasionally augmented by applying updates using traditional algorithms like HMC.
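The composite scheme described in this abstract interleaves flow-based proposals with updates from a traditional algorithm such as HMC. A miniature sketch of that loop, where the one-dimensional Gaussian target, the affine "flow", and the step sizes are all hypothetical toys rather than the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(1)

def log_p(x):            # unnormalized log-density of a unit Gaussian target
    return -0.5 * x**2

def grad_log_p(x):
    return -x

def hmc_update(x, step=0.2, n_leap=10):
    """One HMC update: leapfrog integration plus accept/reject."""
    p = rng.standard_normal()
    x_new, p_new = x, p
    p_new += 0.5 * step * grad_log_p(x_new)   # initial half kick
    for _ in range(n_leap - 1):
        x_new += step * p_new
        p_new += step * grad_log_p(x_new)
    x_new += step * p_new
    p_new += 0.5 * step * grad_log_p(x_new)   # final half kick
    log_accept = (log_p(x_new) - 0.5 * p_new**2) - (log_p(x) - 0.5 * p**2)
    return x_new if np.log(rng.random()) < log_accept else x

def flow_update(x):
    """Independence-Metropolis update with a hand-tuned affine 'flow'
    proposal (mu, sigma stand in for a trained model)."""
    mu, sigma = 0.5, 1.2
    z = rng.standard_normal()
    x_new = mu + sigma * z
    def log_q(y):
        return -0.5 * ((y - mu) / sigma) ** 2 - np.log(sigma)
    log_accept = (log_p(x_new) - log_q(x_new)) - (log_p(x) - log_q(x))
    return x_new if np.log(rng.random()) < log_accept else x

def composite_chain(n_steps, hmc_every=5):
    """Flow proposals, with an HMC update interleaved every few steps."""
    x, chain = 0.0, []
    for i in range(n_steps):
        x = hmc_update(x) if i % hmc_every == 0 else flow_update(x)
        chain.append(x)
    return np.array(chain)

chain = composite_chain(20000)
```

The local HMC updates help the chain move within a mode when flow proposals into that region are rarely accepted, which is the failure mode multimodal targets tend to trigger.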
We define a class of machine-learned flow-based sampling algorithms for lattice gauge theories that are gauge-invariant by construction. We demonstrate the application of this framework to U(1) gauge theory in two spacetime dimensions, and find that near critical points in parameter space the approach is orders of magnitude more efficient at sampling topological quantities than more traditional sampling procedures such as Hybrid Monte Carlo and Heat Bath.
This notebook tutorial demonstrates a method for sampling Boltzmann distributions of lattice field theories using a class of machine learning models known as normalizing flows. The ideas and approaches proposed in arXiv:1904.12072, arXiv:2002.02428, and arXiv:2003.06413 are reviewed and a concrete implementation of the framework is presented. We apply this framework to a lattice scalar field theory and to U(1) gauge theory, explicitly encoding gauge symmetries in the flow-based approach to the latter. This presentation is intended to be interactive and working with the attached Jupyter notebook is recommended.
We study perturbations that break gauge symmetries in lattice gauge theories. As a paradigmatic model, we consider the three-dimensional Abelian-Higgs (AH) model with an N-component scalar field and a noncompact gauge field, which is invariant under U(1) gauge and SU(N) transformations. We consider gauge-symmetry breaking perturbations that are quadratic in the gauge field, such as a photon mass term, and determine their effect on the critical behavior of the gauge-invariant model, focusing mainly on the continuous transitions associated with the charged fixed point of the AH field theory. We discuss their relevance and compute the (gauge-dependent) exponents that parametrize the departure from the critical behavior (continuum limit) of the gauge-invariant model. We also address the critical behavior of lattice AH models with broken gauge symmetry, showing an effective enlargement of the global symmetry, from U(N) to O(2N), which reflects a peculiar cyclic renormalization-group flow in the space of the lattice AH parameters and of the photon mass.
S. Foreman, J. Giedt, Y. Meurice (2017)
Machine learning has been a fast growing field of research in several areas dealing with large datasets. We report recent attempts to use Renormalization Group (RG) ideas in the context of machine learning. We examine coarse graining procedures for perceptron models designed to identify the digits of the MNIST data. We discuss the correspondence between principal components analysis (PCA) and RG flows across the transition for worm configurations of the 2D Ising model. Preliminary results regarding the logarithmic divergence of the leading PCA eigenvalue were presented at the conference and have been improved after. More generally, we discuss the relationship between PCA and observables in Monte Carlo simulations and the possibility of reduction of the number of learning parameters in supervised learning based on RG inspired hierarchical ansatzes.
