We study the gauge cooling technique for the complex Langevin method applied to computations in lattice quantum chromodynamics. We propose a new solver for the minimization problem that optimizes the gauge; it involves no tunable parameter in each iteration and shows better performance than the classical gradient descent method, especially when the lattice size is large. Two numerical tests are carried out to demonstrate the effectiveness of the new algorithm.
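For orientation, here is a minimal numpy sketch of the classical gradient-descent baseline that such solvers are compared against: gauge cooling on a periodic one-dimensional chain of complexified links, driving down a (simplified) unitarity norm. The step size alpha, the chain geometry, and the function names are illustrative choices, not the paper's parameter-free solver.

```python
import numpy as np

def unitarity_norm(U):
    # N(U) = (1/V) sum_x Tr[U_x U_x^dag - 1]; vanishes iff every link is unitary
    n = U[0].shape[0]
    return sum(np.trace(u @ u.conj().T).real - n for u in U) / len(U)

def herm_exp(A):
    # exp(A) for Hermitian A via eigendecomposition
    w, v = np.linalg.eigh(A)
    return (v * np.exp(w)) @ v.conj().T

def cooling_step(U, alpha=0.05):
    # one gradient-descent step: gauge transform U_x -> g_x U_x g_{x+1}^{-1},
    # with g_x = exp(-alpha * f_x) built from the gradient of the norm
    V, n = len(U), U[0].shape[0]
    g = []
    for x in range(V):
        # Hermitian, traceless steepest-descent direction at site x
        f = U[x] @ U[x].conj().T - U[x - 1].conj().T @ U[x - 1]
        f -= (np.trace(f) / n) * np.eye(n)
        g.append(herm_exp(-alpha * f))
    return [g[x] @ U[x] @ np.linalg.inv(g[(x + 1) % V]) for x in range(V)]
```

Repeating cooling_step until unitarity_norm stops decreasing reproduces the classical method; a fixed alpha is exactly the kind of per-iteration parameter the proposed solver is designed to avoid.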
Recently there has been remarkable progress in the complex Langevin method, which aims at solving the complex action problem by complexifying the dynamical variables in the original path integral. In particular, a new technique called gauge cooling was introduced, and full QCD simulation at finite density has been made possible in the high-temperature (deconfined) phase or with heavy quarks. Here we provide a rigorous justification of the complex Langevin method including the gauge cooling procedure. We first show that gauge cooling can be formulated as an extra term in the complex Langevin equation involving a gauge transformation parameter, which is chosen appropriately as a function of the configuration before cooling. The probability distribution of the complexified dynamical variables is modified by this extra term. However, this modification is shown not to affect the Fokker-Planck equation for the corresponding complex weight as long as observables are restricted to gauge-invariant ones. Thus we demonstrate explicitly that gauge cooling can be used as a viable technique to satisfy the convergence conditions for the complex Langevin method. We also discuss gauge cooling in zero-dimensional systems such as vector models and matrix models.
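In the standard discretized notation (our transcription, with conventional symbols), the complex Langevin update for the link variables and the gauge cooling step read schematically

\[
U_{x,\mu} \;\mapsto\; \exp\!\Big[\, i \sum_a \lambda_a \big( -\epsilon\, D_{ax\mu} S + \sqrt{\epsilon}\, \eta_{ax\mu} \big) \Big]\, U_{x,\mu},
\qquad
U_{x,\mu} \;\mapsto\; g_x\, U_{x,\mu}\, g_{x+\hat\mu}^{-1},
\quad
g_x = e^{-\epsilon\, \alpha_x f_x(U)} ,
\]

where \(\lambda_a\) are the SU(3) generators, \(D_{ax\mu}\) is the holomorphic derivative of the action, \(\eta_{ax\mu}\) is real Gaussian noise, \(\epsilon\) is the step size, and \(f_x\) is the gradient of a unitarity norm; \(\alpha_x\) is the gauge transformation parameter referred to in the abstract, chosen as a function of the configuration before cooling.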
We study a random matrix model for QCD at finite density via complex Langevin dynamics. This model has a phase transition to a phase with nonzero baryon density. We study the convergence of the algorithm as a function of the quark mass and the chemical potential and focus on two main observables: the baryon density and the chiral condensate. For simulations close to the chiral limit, the algorithm exhibits wrong convergence when the quark mass lies inside the spectral domain of the Dirac operator. A possible solution to this problem is discussed.
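Although the abstract does not fix the model, a Stephanov-type chiral random matrix partition function at chemical potential \(\mu\) is the standard setting; schematically,

\[
Z(m,\mu) = \int dW\, e^{-N\,\mathrm{Tr}\, W W^\dagger}\,
{\det}^{N_f}\!\begin{pmatrix} m & iW + \mu \\ iW^\dagger + \mu & m \end{pmatrix},
\qquad
n_B = \frac{1}{N}\frac{\partial \ln Z}{\partial \mu},
\quad
\Sigma = \frac{1}{N}\frac{\partial \ln Z}{\partial m},
\]

so that the two observables quoted above are the \(\mu\)- and \(m\)-derivatives of \(\ln Z\).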
Recently there has been remarkable progress in solving the sign problem, which occurs when investigating statistical systems with a complex weight. The two promising methods, the complex Langevin method and the Lefschetz thimble method, share the idea of complexifying the dynamical variables, but their relationship has remained unclear. Here we propose a unified formulation, in which the sign problem is taken care of by both the Langevin dynamics and the holomorphic gradient flow. We apply our formulation to a simple model in three different ways and show that one of them interpolates between the two methods as the flow time is varied.
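For a single complexified variable \(z\), the two ingredients being unified are, schematically,

\[
\frac{dz}{dt} = -\frac{\partial S}{\partial z} + \eta(t)
\quad\text{(complex Langevin)},
\qquad
\frac{dz}{d\tau} = \overline{\left(\frac{\partial S}{\partial z}\right)}
\quad\text{(holomorphic gradient flow)},
\]

where \(\eta(t)\) is real Gaussian noise; at flow time \(\tau = 0\) the contour is the original real axis, while \(\tau \to \infty\) drives it toward the Lefschetz thimbles, which is how the interpolation in the last sentence arises.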
The complex Langevin method and the generalized Lefschetz-thimble method are two closely related approaches to the sign problem, both based on complexification of the original dynamical variables. The former can be viewed as a generalization of stochastic quantization using the Langevin equation, whereas the latter is a deformation of the integration contour using the so-called holomorphic gradient flow. In order to clarify their relationship, we propose a formulation that combines the two methods by applying the former to the real variables that parametrize the deformed integration contour in the latter.
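In schematic form (our notation), the combined method parametrizes the flowed contour as \(z = z_\tau(x)\) with \(x\) real and runs the complex Langevin method on \(x\), with the Jacobian of the flow absorbed into an effective action:

\[
Z = \int dx\, \det J_\tau(x)\, e^{-S(z_\tau(x))} = \int dx\, e^{-S_{\rm eff}(x)},
\qquad
S_{\rm eff}(x) = S\big(z_\tau(x)\big) - \ln \det J_\tau(x),
\quad
J_\tau(x) = \frac{\partial z_\tau}{\partial x}.
\]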
The complex Langevin method (CLM) provides a promising way to perform the path integral with a complex action using a stochastic equation for complexified dynamical variables. It is known, however, that the method gives wrong results in some cases, while it works, for instance, in finite density QCD in the deconfinement phase or in the heavy dense limit. Here we revisit the argument for the justification of the CLM and point out a subtlety in the use of time-evolved observables, which play a crucial role in the argument. This subtlety requires that the probability distribution of the drift term fall off exponentially or faster at large magnitude. We demonstrate our claim in some examples such as chiral Random Matrix Theory and show that our criterion is indeed useful in judging whether the results obtained by the CLM are trustworthy.
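The falloff criterion is easy to test in practice: histogram the drift magnitude along the Langevin trajectory and inspect its tail on a log scale. Below is a minimal sketch for a one-variable Gaussian toy action \(S(z) = \beta z^2/2\) (our choice of example; the action, step size, and sample count are all illustrative).

```python
import numpy as np

rng = np.random.default_rng(0)
beta = 1 + 1j  # complex coupling; Re(beta) > 0 keeps the process stable

def drift(z):
    # drift term v(z) = -dS/dz for the toy action S(z) = beta * z**2 / 2
    return -beta * z

def run_cl(n_steps=200_000, eps=1e-3):
    z = 0.0 + 0.0j
    mags = np.empty(n_steps)
    for i in range(n_steps):
        v = drift(z)
        mags[i] = abs(v)  # record |drift| before the update
        # complex Langevin step: complexified variable, real Gaussian noise
        z = z + eps * v + np.sqrt(2 * eps) * rng.normal()
    return mags

mags = run_cl()
hist, edges = np.histogram(mags, bins=50, density=True)
# A straight or downward-bending tail of log(hist) versus |drift| (exponential
# falloff or faster) is the trustworthiness criterion; a power-law tail
# signals that the CLM results should not be trusted.
```

For this Gaussian example the drift distribution falls off faster than exponentially, consistent with the known correct convergence of the CLM for such actions.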