The complex Langevin method and the generalized Lefschetz-thimble method are two closely related approaches to the sign problem, both of which are based on the complexification of the original dynamical variables. The former can be viewed as a generalization of stochastic quantization using the Langevin equation, whereas the latter is a deformation of the integration contour using the so-called holomorphic gradient flow. In order to clarify their relationship, we propose a formulation which combines the two methods by applying the former method to the real variables that parametrize the deformed integration contour in the latter method.
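The holomorphic gradient flow mentioned above can be illustrated on a hypothetical one-site model (not taken from the paper): for S(z) = z^2/2 + i*beta*z, the flow dz/dt = conj(S'(z)) drives points of the real axis toward the thimble Im z = -beta attached to the critical point z = -i*beta. A minimal sketch, assuming this toy action and a simple Euler discretization:

```python
import numpy as np

# Toy action with a sign problem on the real line: S(z) = z**2/2 + 1j*beta*z
# (hypothetical one-site model chosen for illustration; not from the paper).
beta = 1.0

def dS(z):
    return z + 1j * beta              # holomorphic derivative S'(z)

def flow(z, t_flow=5.0, dt=1e-3):
    """Holomorphic gradient flow dz/dt = conj(S'(z)), Euler-discretized."""
    for _ in range(int(t_flow / dt)):
        z = z + dt * np.conj(dS(z))
    return z

# Flow points of the original (real) contour; their imaginary parts
# approach -beta, i.e. the thimble through the critical point z = -1j*beta.
points = np.linspace(-0.5, 0.5, 5).astype(complex)
flowed = flow(points)
print(flowed.imag)
```

In the combined formulation described above, one would then run a Langevin process in the real parameters labeling these flowed points; the sketch only shows the contour-deformation half of the construction.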
Recently there has been remarkable progress in solving the sign problem, which occurs in investigating statistical systems with a complex weight. The two promising methods, the complex Langevin method and the Lefschetz thimble method, share the idea of complexifying the dynamical variables, but their relationship has remained unclear. Here we propose a unified formulation, in which the sign problem is taken care of by both the Langevin dynamics and the holomorphic gradient flow. We apply our formulation to a simple model in three different ways and show that one of them interpolates between the two methods by changing the flow time.
Recently, we have proposed a novel approach (arXiv:1205.3996) to deal with the sign problem that hinders Monte Carlo simulations of many quantum field theories (QFTs). The approach consists in formulating the QFT on a Lefschetz thimble. In this paper we concentrate on the application to a scalar field theory with a sign problem. In particular, we review the formulation and the justification of the approach, and we also describe the Aurora Monte Carlo algorithm that we are currently testing.
Recently there has been remarkable progress in the complex Langevin method, which aims at solving the complex action problem by complexifying the dynamical variables in the original path integral. In particular, a new technique called gauge cooling was introduced, and full QCD simulation at finite density has been made possible in the high-temperature (deconfined) phase or with heavy quarks. Here we provide a rigorous justification of the complex Langevin method including the gauge cooling procedure. We first show that gauge cooling can be formulated as an extra term in the complex Langevin equation involving a gauge transformation parameter, which is chosen appropriately as a function of the configuration before cooling. The probability distribution of the complexified dynamical variables is modified by this extra term. However, this modification is shown not to affect the Fokker-Planck equation for the corresponding complex weight as long as observables are restricted to gauge-invariant ones. Thus we demonstrate explicitly that gauge cooling can be used as a viable technique to satisfy the convergence conditions for the complex Langevin method. We also discuss gauge cooling in 0-dimensional systems such as vector models or matrix models.
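The role of the gauge transformation parameter can be made concrete in a toy model (an illustrative choice, not the models treated in the paper): a complexified two-link U(1) system with links U_k = exp(i z_k) and gauge transformation z1 -> z1 + alpha, z2 -> z2 - alpha. Cooling picks the imaginary part of alpha that minimizes a "unitarity norm", which here is solvable in closed form, while gauge-invariant quantities such as the Wilson loop exp(i(z1+z2)) are untouched:

```python
import numpy as np

# Toy two-link U(1) model, complexified: U_k = exp(1j*z_k), z_k = x_k + 1j*y_k.
# Gauge transformation: z1 -> z1 + alpha, z2 -> z2 - alpha; the Wilson loop
# exp(1j*(z1+z2)) is gauge invariant. (Illustrative model, not from the paper.)

def unitarity_norm(z):
    y = z.imag
    return np.sum(2.0 * (np.cosh(2.0 * y) - 1.0))   # zero iff all U_k unitary

def gauge_cool(z):
    """One exact cooling step: the imaginary gauge parameter minimizing the
    unitarity norm, which for this model is beta = (y2 - y1)/2."""
    beta = (z[1].imag - z[0].imag) / 2.0
    return z + 1j * np.array([beta, -beta])

z = np.array([0.3 + 0.8j, -0.1 - 0.2j])
z_cooled = gauge_cool(z)

print(unitarity_norm(z), "->", unitarity_norm(z_cooled))   # norm decreases
print(z.sum(), z_cooled.sum())   # gauge-invariant combination z1+z2 unchanged
```

In the actual method, such a cooling step is interleaved with the Langevin updates; the point of the sketch is only that the step moves the configuration along a gauge orbit, leaving gauge-invariant observables intact.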
In recent years, there has been remarkable progress in theoretical justification of the complex Langevin method, which is a promising method for evading the sign problem in the path integral with a complex weight. There still remains, however, an issue concerning occasional failure of this method in the case where the action involves logarithmic singularities such as the one appearing from the fermion determinant in finite density QCD. In this talk, we point out that this failure is due to the breakdown of the relation between the complex weight which satisfies the Fokker-Planck equation and the probability distribution generated by the stochastic process. In fact, this kind of failure can occur in general when the stochastic process involves a singular drift term. We show, however, in simple examples, that there exists a parameter region in which the method works although the standard reweighting method is hardly applicable.
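The singular drift term discussed above can be seen in a one-variable toy weight (a hypothetical example mimicking a fermion-determinant zero, not necessarily the model used in the talk): w(z) = (z - i*a)^n * exp(-z^2/2), whose action S(z) = z^2/2 - n*log(z - i*a) gives a drift that diverges at z = i*a:

```python
import numpy as np

# Toy weight with a zero, w(z) = (z - 1j*a)**n * exp(-z**2/2); its action
# S(z) = z**2/2 - n*log(z - 1j*a) is logarithmically singular at z = 1j*a
# (hypothetical one-variable example mimicking a fermion-determinant zero).
a, n = 1.0, 2

def drift(z):
    return -z + n / (z - 1j * a)   # v(z) = -S'(z); singular at z = 1j*a

# The drift is O(1) far from the singularity but blows up as the
# complexified variable approaches z = 1j*a, which is what invalidates
# the formal argument relating the two distributions.
for eps in (1.0, 0.1, 0.01):
    z = 1j * (a - eps)             # approach the singularity from below
    print(eps, abs(drift(z)))
```

Whether the method fails then depends on how often the Langevin process visits the neighborhood of the singularity, which is what distinguishes the working parameter region from the failing one.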
The complex Langevin method (CLM) provides a promising way to perform the path integral with a complex action using a stochastic equation for complexified dynamical variables. It is known, however, that the method gives wrong results in some cases, while it works, for instance, in finite density QCD in the deconfinement phase or in the heavy dense limit. Here we revisit the argument for justification of the CLM and point out a subtlety in using the time-evolved observables, which play a crucial role in the argument. This subtlety requires that the probability distribution of the drift term should fall off exponentially or faster at large magnitude. We demonstrate our claim in some examples such as chiral Random Matrix Theory and show that our criterion is indeed useful in judging whether the results obtained by the CLM are trustworthy or not.