
Machine Learning and Variational Algorithms for Lattice Field Theory

Posted by: Gurtej Kanwar
Publication date: 2021
Research field: Informatics Engineering
Paper language: English
Author: Gurtej Kanwar





In lattice quantum field theory studies, parameters defining the lattice theory must be tuned toward criticality to access continuum physics. Commonly used Markov chain Monte Carlo (MCMC) methods suffer from critical slowing down in this limit, restricting the precision of continuum extrapolations. Further difficulties arise when measuring correlation functions of operators widely separated in spacetime: for most correlation functions, the signal-to-noise ratio degrades exponentially as the separation grows. This dissertation details two new techniques to address these issues. First, we define a novel MCMC algorithm based on generative flow-based models. Such models use machine learning methods to construct efficient approximate samplers for distributions of interest. Independently drawn flow-based samples are then used as proposals in an asymptotically exact Metropolis-Hastings Markov chain. We address how to incorporate symmetries of interest, including translational and gauge symmetries. Second, we introduce an approach to deform Monte Carlo estimators based on contour deformations applied to the domain of the path integral. The deformed estimators associated with an observable give equivalent unbiased measurements of that observable, but generically have different variances. We define families of deformed manifolds for lattice gauge theories and introduce methods to efficiently optimize the choice of manifold (the "observifold"), minimizing the deformed observable variance. Finally, we demonstrate that flow-based MCMC can mitigate critical slowing down and observifolds can exponentially reduce variance in proof-of-principle applications to scalar $\phi^4$ theory and $\mathrm{U}(1)$ and $\mathrm{SU}(N)$ lattice gauge theories.
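To make the flow-based MCMC construction concrete, here is a minimal Python sketch of an independence Metropolis-Hastings chain. A fixed Gaussian with a tractable log-density stands in for a trained flow model, and a single-site quartic action stands in for the lattice theory; the coupling, proposal width, and function names are illustrative assumptions, not the dissertation's implementation.

```python
import numpy as np

LAM, SIGMA = 0.1, 1.1  # assumed toy coupling and proposal width

def log_p(x):
    # Unnormalized log-target for a single-site phi^4-like action S(x) = x^2/2 + LAM*x^4
    return -(0.5 * x**2 + LAM * x**4)

def log_q(x):
    # Exact log-density of the Gaussian proposal; a trained flow supplies this quantity
    return -0.5 * (x / SIGMA) ** 2 - np.log(SIGMA * np.sqrt(2.0 * np.pi))

def flow_mh_chain(n_steps, rng):
    """Independence Metropolis-Hastings: i.i.d. proposals, asymptotically exact."""
    x = SIGMA * rng.normal()
    chain = np.empty(n_steps)
    for i in range(n_steps):
        xp = SIGMA * rng.normal()  # independent draw standing in for a flow sample
        # Accept with probability min(1, p(x') q(x) / (p(x) q(x')))
        log_acc = (log_p(xp) - log_p(x)) + (log_q(x) - log_q(xp))
        if np.log(rng.uniform()) < log_acc:
            x = xp
        chain[i] = x
    return chain

rng = np.random.default_rng(0)
chain = flow_mh_chain(100_000, rng)
print("<x^2> estimate:", (chain**2).mean())
```

The better the proposal density approximates $e^{-S}$, the higher the acceptance rate; this is the mechanism by which a well-trained flow can sidestep critical slowing down while the Metropolis-Hastings step keeps the chain exact.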
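The contour-deformation idea can likewise be illustrated in one dimension. For $x$ drawn from a standard Gaussian, $\langle e^{i\lambda x}\rangle = e^{-\lambda^2/2}$ is exponentially small while every naive sample has magnitude one, so the signal-to-noise ratio decays exponentially in $\lambda$; shifting the contour by $i\lambda$ (unit Jacobian) gives an estimator with the same mean and, in this special case, exactly zero variance. This toy model is our own illustration of the principle, not the paper's observifold construction.

```python
import numpy as np

lam = 4.0
rng = np.random.default_rng(1)
x = rng.normal(size=100_000)

# Naive estimator of <exp(i*lam*x)> for x ~ N(0,1); exact value is exp(-lam^2/2)
naive = np.exp(1j * lam * x)

# Deformed estimator: shift the contour x -> x + i*lam (Jacobian = 1) and
# reweight by the integrand ratio exp(-(x+i*lam)^2/2) / exp(-x^2/2)
shift = 1j * lam
deformed = np.exp(1j * lam * (x + shift)) * np.exp(-(x + shift) ** 2 / 2 + x**2 / 2)

print("exact:   ", np.exp(-(lam**2) / 2))
print("naive:   ", naive.real.mean(), "+/-", naive.real.std() / np.sqrt(x.size))
print("deformed:", deformed.real.mean(), "+/-", deformed.real.std() / np.sqrt(x.size))
```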




Read also

S. Foreman, J. Giedt, Y. Meurice (2017)
Machine learning has been a fast-growing field of research in several areas dealing with large datasets. We report recent attempts to use Renormalization Group (RG) ideas in the context of machine learning. We examine coarse-graining procedures for perceptron models designed to identify the digits of the MNIST data. We discuss the correspondence between principal component analysis (PCA) and RG flows across the transition for worm configurations of the 2D Ising model. Preliminary results regarding the logarithmic divergence of the leading PCA eigenvalue were presented at the conference and have since been improved. More generally, we discuss the relationship between PCA and observables in Monte Carlo simulations and the possibility of reducing the number of learning parameters in supervised learning based on RG-inspired hierarchical ansatzes.
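As a point of reference, the leading PCA eigenvalue of an ensemble of configurations is obtained from the sample covariance matrix. In this sketch, random data stands in for the 2D Ising worm configurations studied in the paper; the shapes and names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
configs = rng.normal(size=(500, 64))  # (n_samples, n_sites): stand-in data

# PCA: eigenvalues of the sample covariance matrix, largest first
centered = configs - configs.mean(axis=0)
cov = centered.T @ centered / (configs.shape[0] - 1)
eigvals = np.linalg.eigvalsh(cov)[::-1]
print("leading PCA eigenvalue:", eigvals[0])
```

Tracking how this leading eigenvalue grows with system size near the transition is what connects PCA to RG flows in the abstract's analysis.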
A novel technique using machine learning (ML) to reduce the computational cost of evaluating lattice quantum chromodynamics (QCD) observables is presented. The ML is trained on a subset of background gauge field configurations, called the labeled set, to predict an observable $O$ from the values of correlated, but less compute-intensive, observables $\mathbf{X}$ calculated on the full sample. Using a second subset, also part of the labeled set, we estimate the bias in the result predicted by the trained ML algorithm. A reduction in the computational cost by about $7\%$–$38\%$ is demonstrated for two different lattice QCD calculations using the boosted decision tree regression ML algorithm: (1) prediction of the nucleon three-point correlation functions that yield isovector charges from the two-point correlation functions, and (2) prediction of the phase acquired by the neutron mass when a small Charge-Parity (CP) violating interaction, the quark chromoelectric dipole moment interaction, is added to QCD, again from the two-point correlation functions calculated without CP violation.
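The two-subset workflow described above can be sketched as follows, with synthetic arrays standing in for lattice correlators and scikit-learn's GradientBoostingRegressor as one concrete boosted-decision-tree implementation; the data, split sizes, and column meanings are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

# Hypothetical synthetic data: X are cheap observables, y is the expensive observable O
rng = np.random.default_rng(3)
X = rng.normal(size=(2000, 4))
y = X @ np.array([1.0, 0.5, -0.3, 0.2]) + 0.1 * rng.normal(size=2000)

# Split the labeled set into a training subset and a bias-correction subset;
# the remainder plays the role of the large sample with only cheap observables
X_tr, y_tr = X[:200], y[:200]
X_bc, y_bc = X[200:400], y[200:400]
X_rest = X[400:]

model = GradientBoostingRegressor().fit(X_tr, y_tr)

# Bias measured on the second labeled subset corrects the prediction-based mean
bias = np.mean(y_bc - model.predict(X_bc))
estimate = np.mean(model.predict(X_rest)) + bias
print("bias-corrected <O> estimate:", estimate)
```

The bias term is what makes the estimator unbiased even when the regressor is imperfect; the cost savings come from computing $O$ directly on only the small labeled set.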
The performance of modern machine learning methods depends strongly on their hyperparameter configurations. One simple way of selecting a configuration is to use default settings, often proposed along with the publication and implementation of a new algorithm. Those default values are usually chosen in an ad hoc manner to work well enough on a wide variety of datasets. To address this problem, different automatic hyperparameter configuration algorithms have been proposed, which select an optimal configuration per dataset. This principled approach usually improves performance but adds algorithmic complexity and computational cost to the training procedure. As an alternative, we propose learning a set of complementary default values from a large database of prior empirical results. Selecting an appropriate configuration on a new dataset then requires only a simple, efficient, and embarrassingly parallel search over this set. We demonstrate the effectiveness and efficiency of the proposed approach in comparison to random search and Bayesian optimization.
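One plausible way to learn such complementary defaults is a greedy selection over a matrix of prior results: each step adds the configuration that most improves the per-dataset best-so-far loss. The sketch below is our own assumption about such a construction, not the authors' exact algorithm.

```python
import numpy as np

# Hypothetical loss matrix: rows = candidate configurations, cols = prior datasets
rng = np.random.default_rng(4)
loss = rng.uniform(size=(50, 20))

def greedy_defaults(loss, k):
    """Greedily pick k configs minimizing the mean per-dataset best-so-far loss."""
    chosen, best = [], np.full(loss.shape[1], np.inf)
    for _ in range(k):
        # Score each candidate by the mean loss if it were added to the set
        scores = np.mean(np.minimum(loss, best), axis=1)
        idx = int(np.argmin(scores))
        chosen.append(idx)
        best = np.minimum(loss[idx], best)
    return chosen

defaults = greedy_defaults(loss, k=5)
print("complementary default configs:", defaults)
# On a new dataset, evaluate just these k configs (embarrassingly parallel) and keep the best
```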
Lattice calculations using the framework of effective field theory have been applied to a wide range of few-body and many-body systems. One of the challenges of these calculations is removing systematic errors arising from the nonzero lattice spacing. Fortunately, the lattice improvement program pioneered by Symanzik provides a formalism for doing this. While lattice improvement has already been utilized in lattice effective field theory calculations, the effectiveness of the improvement program has not been systematically benchmarked. In this work we use lattice improvement to remove lattice errors for a one-dimensional system of bosons with zero-range interactions. We construct the improved lattice action up to next-to-next-to-leading order and verify that the remaining errors scale as the fourth power of the lattice spacing for observables involving as many as five particles. Our results provide a guide for increasing the accuracy of future calculations in lattice effective field theory with improved lattice actions.
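The spirit of the improvement program shows up already in finite-difference stencils: adding higher-order correction terms cancels the leading discretization error, pushing errors from $O(a^2)$ to $O(a^4)$. The toy Laplacian check below is our illustration of that scaling, not the paper's improved lattice action.

```python
import numpy as np

def lap2(f, a):
    # Standard three-point second-derivative stencil: errors O(a^2)
    return (np.roll(f, -1) - 2 * f + np.roll(f, 1)) / a**2

def lap4(f, a):
    # Five-point "improved" stencil cancelling the leading term: errors O(a^4)
    return (-np.roll(f, -2) + 16 * np.roll(f, -1) - 30 * f
            + 16 * np.roll(f, 1) - np.roll(f, 2)) / (12 * a**2)

for n in (32, 64, 128):
    x = np.linspace(0, 2 * np.pi, n, endpoint=False)  # periodic grid
    a = x[1] - x[0]
    f, exact = np.sin(x), -np.sin(x)
    print(n, np.abs(lap2(f, a) - exact).max(), np.abs(lap4(f, a) - exact).max())
```

Doubling the resolution shrinks the standard stencil's error by roughly 4x and the improved stencil's by roughly 16x, mirroring the fourth-power scaling verified in the paper.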
Inspired by the duality between gravity and defects in crystals, we study lattice field theory with torsion. The torsion is realized by a line defect of the lattice, namely a dislocation. As a first application, we perform the numerical computation of vector and axial currents induced by a screw dislocation. This current generation is called the chiral torsional effect. We also derive the analytical formula for the chiral torsional effect in the continuum limit.
