The variational quantum Monte Carlo (VQMC) method has received significant attention in recent years because of its ability to overcome the curse of dimensionality inherent in many-body quantum systems. Close parallels exist between VQMC and the emerging hybrid quantum-classical computational paradigm of variational quantum algorithms. VQMC overcomes the curse of dimensionality by performing alternating steps of Monte Carlo sampling from a parametrized quantum state and gradient-based optimization. While VQMC has been applied to solve high-dimensional problems, it is known to be difficult to parallelize, primarily owing to the Markov chain Monte Carlo (MCMC) sampling step. In this work, we explore the scalability of VQMC when autoregressive models, with exact sampling, are used in place of MCMC. This approach can exploit distributed-memory, shared-memory, and/or GPU parallelism in the sampling task without any bottlenecks. In particular, we demonstrate the GPU scalability of VQMC for solving up to ten-thousand-dimensional combinatorial optimization problems.
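To make the key algorithmic ingredient concrete, the sketch below illustrates exact ancestral sampling from an autoregressive model over spin configurations. The callable `conditionals`, returning p(x_i = 1 | x_{<i}), is a hypothetical stand-in for the actual network used in the paper. Because every sample is drawn independently, the sampling step parallelizes over the batch dimension without the serial dependence of an MCMC chain.

```python
import numpy as np

def sample_autoregressive(conditionals, n_spins, n_samples, rng):
    """Exact ancestral sampling from an autoregressive model (sketch).

    `conditionals(prefix, i)` is assumed to return p(x_i = 1 | x_{<i})
    for a batch of partial samples `prefix` of shape (n_samples, i).
    No Markov chain, burn-in, or thinning is needed: the samples are
    independent, so the batch dimension maps directly onto GPU threads.
    """
    samples = np.zeros((n_samples, n_spins), dtype=np.int8)
    for i in range(n_spins):
        p_i = conditionals(samples[:, :i], i)          # shape (n_samples,)
        samples[:, i] = rng.random(n_samples) < p_i    # Bernoulli draw
    return samples
```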
A notion of quantum natural evolution strategies is introduced, which provides a geometric synthesis of a number of known quantum and classical algorithms for performing classical black-box optimization. Recent work of Gomes et al. [2019] on heuristic combinatorial optimization using neural quantum states is pedagogically reviewed in this context, emphasizing the connection with natural evolution strategies. The algorithmic framework is illustrated on approximate combinatorial optimization problems, and a systematic strategy is found for improving the approximation ratios. In particular, it is found that natural evolution strategies can achieve approximation ratios for Max-Cut competitive with widely used heuristic algorithms, at the expense of increased computation time.
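As an illustration of the underlying estimator, the following sketch performs one natural-evolution-strategies step for Max-Cut with a product-Bernoulli search distribution. This simple factorized parametrization and the helper names are assumptions for exposition, not the parametrization used in the paper; for a factorized distribution the Fisher information is diagonal, so the natural-gradient preconditioning reduces to an elementwise rescaling.

```python
import numpy as np

def nes_maxcut_step(theta, edges, n_samples, lr, rng):
    """One natural-evolution-strategies step for Max-Cut (sketch).

    Assumes the search distribution p_theta(x) = prod_i s_i^{x_i}
    (1 - s_i)^{1 - x_i} with s_i = sigmoid(theta_i); the gradient of the
    expected cut value is estimated with the log-derivative trick.
    """
    p = 1.0 / (1.0 + np.exp(-theta))                   # Bernoulli means
    x = (rng.random((n_samples, theta.size)) < p).astype(float)
    cut = np.zeros(n_samples)
    for (i, j) in edges:                               # cut value per sample
        cut += (x[:, i] != x[:, j])
    baseline = cut.mean()                              # variance reduction
    score = x - p                                      # d log p / d theta (logits)
    fisher = p * (1.0 - p)                             # diagonal Fisher information
    grad = ((cut - baseline)[:, None] * score).mean(axis=0)
    return theta + lr * grad / (fisher + 1e-8)         # natural-gradient ascent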
The cavity method is a well-established technique for solving classical spin models on sparse random graphs (mean-field models with finite connectivity). Laumann et al. [arXiv:0706.4391] recently proposed an extension of this method to quantum spin-1/2 models in a transverse field, using a discretized Suzuki-Trotter imaginary-time formalism. Here we show how to take the continuous imaginary-time limit analytically. Our main technical contribution is an explicit procedure for generating the spin trajectories in a path-integral representation of the imaginary-time dynamics. As a side result, we also show how this procedure can be used in simple heat-bath-like Monte Carlo simulations of generic quantum spin models. The replica-symmetric continuous-time quantum cavity method is formulated for a wide class of models and applied, as a simple example, to the Bethe-lattice ferromagnet in a transverse field. For this case, the results of the method are compared with various approximation schemes. On this system we also performed quantum Monte Carlo simulations, which confirm the exactness of the cavity method in the thermodynamic limit.
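For orientation, the standard Suzuki-Trotter setup whose continuous-time limit is taken here can be summarized as follows; this is a textbook sketch of the discretization, not the paper's derivation.

```latex
% Standard Suzuki--Trotter discretization (textbook form); the analytic
% \Delta\tau \to 0 limit of this setup is the subject of the paper.
\begin{align}
  Z = \operatorname{Tr} e^{-\beta H}
    = \lim_{N_\tau \to \infty}
      \sum_{\{\sigma(\tau)\}}
      \prod_{\tau=1}^{N_\tau}
      \bigl\langle \sigma(\tau) \bigr|
        e^{-\Delta\tau H_z}\, e^{\Delta\tau \Gamma \sum_i \sigma_i^x}
      \bigl| \sigma(\tau+1) \bigr\rangle ,
  \qquad \Delta\tau = \frac{\beta}{N_\tau},
\end{align}
% with H = H_z - \Gamma \sum_i \sigma_i^x, periodic boundary condition
% \sigma(N_\tau + 1) = \sigma(1), and complete sets of \sigma^z product
% states inserted between the Trotter slices. As \Delta\tau \to 0, each
% spin trajectory \sigma_i(\tau) becomes piecewise constant in imaginary
% time, and a trajectory with k flips carries a weight proportional to
% \Gamma^k \, d\tau_1 \cdots d\tau_k.
```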
The study of lattice gauge theories with Monte Carlo simulations is hindered by the infamous sign problem, which appears under certain circumstances, in particular at non-zero chemical potential. So far, there is no universal method to overcome this problem. However, recent years have brought a new class of non-perturbative Hamiltonian techniques known as tensor networks, in which the sign problem is absent. In previous work, we demonstrated that this approach, in particular matrix product states in 1+1 dimensions, can be used to perform precise calculations in a lattice gauge theory, namely the massless and massive Schwinger model. We computed the mass spectrum of this theory, its thermal properties, and its real-time dynamics. In this work, we review these results and extend our calculations to the case of two flavours and non-zero chemical potential. We are able to reliably reproduce known analytical results for this model, thus demonstrating that tensor networks can tackle the sign problem of a lattice gauge theory at finite density.
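To fix what "matrix product states" means operationally, here is a minimal, generic MPS sketch, unrelated to the paper's actual Schwinger-model code: a state on n sites is stored as a chain of rank-3 tensors, and contractions such as the norm sweep the chain at a cost polynomial in the bond dimension rather than exponential in the system size.

```python
import numpy as np

def random_mps(n_sites, phys_dim, bond_dim, rng):
    """Build a random matrix product state: a list of rank-3 tensors
    A[i] of shape (D_left, phys_dim, D_right), with boundary bonds 1."""
    dims = [1] + [bond_dim] * (n_sites - 1) + [1]
    return [rng.standard_normal((dims[i], phys_dim, dims[i + 1]))
            for i in range(n_sites)]

def mps_norm(mps):
    """Contract <psi|psi> site by site; the environment tensor `env`
    carries the partial contraction, so the total cost is polynomial
    in the bond dimension."""
    env = np.ones((1, 1))
    for a in mps:
        env = np.einsum('ab,apc,bpd->cd', env, a, a.conj())
    return float(env[0, 0].real)
```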
Neural-network quantum states have shown great potential for the study of many-body quantum systems. In statistical machine learning, transfer learning denotes protocols that reuse features of a model trained on one problem to solve a different but possibly related problem. We propose to evaluate the potential of transfer learning to improve the scalability of neural-network quantum states. We devise and present physics-inspired transfer learning protocols that reuse the features of a neural-network quantum state learned for the ground-state computation of a small system to initialize computations for larger systems. We implement different protocols for restricted Boltzmann machines on general-purpose graphics processing units. This implementation alone yields a speedup over existing implementations on multi-core and distributed central processing units in comparable settings. We empirically and comparatively evaluate the efficiency (time) and effectiveness (accuracy) of different transfer learning protocols as we scale the system size in different models and different quantum phases. Specifically, we consider the transverse-field Ising and Heisenberg XXZ models in one dimension, and the latter also in two dimensions, with system sizes up to 128 and 8 x 8 spins, respectively. We empirically demonstrate that some of the transfer learning protocols we have devised can be far more effective and efficient than starting from neural-network quantum states with randomly initialized parameters.
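One plausible instance of such a physics-inspired protocol, for a one-dimensional system whose size is an integer multiple of the source system, is to tile the learned RBM weights block-diagonally so that local features are preserved. The scheme below is an illustrative assumption, not necessarily one of the protocols devised in the paper.

```python
import numpy as np

def tile_rbm_weights(W_small, b_small, c_small, n_large):
    """Assumed transfer protocol: initialize an RBM for a larger chain
    by block-diagonal tiling of weights learned on a smaller chain.

    W_small: (n_small, m_small) visible-hidden couplings
    b_small: (n_small,) visible biases; c_small: (m_small,) hidden biases
    n_large must be an integer multiple of n_small.
    """
    n_small, m_small = W_small.shape
    k = n_large // n_small                    # replication factor
    W_large = np.zeros((n_large, k * m_small))
    for r in range(k):                        # block-diagonal tiling keeps
        rows = slice(r * n_small, (r + 1) * n_small)    # learned local
        cols = slice(r * m_small, (r + 1) * m_small)    # features intact
        W_large[rows, cols] = W_small
    return W_large, np.tile(b_small, k), np.tile(c_small, k)
```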
Population annealing is a recent addition to the practitioner's arsenal for computer simulations in statistical physics and beyond, and it has been found to cope well with systems with complex free-energy landscapes. Above all, it promises excellent parallel scaling, making it suitable for parallel machines of the largest scale. Here we study population annealing using as our main example the two-dimensional Ising model, which allows for particularly clean comparisons thanks to the available exact results and the wealth of published simulation studies employing other approaches. We analyze in depth the accuracy and precision of the method, highlighting its relation to older techniques such as simulated annealing and thermodynamic integration. We introduce intrinsic approaches for the analysis of statistical and systematic errors, and provide a detailed picture of how such errors depend on the simulation parameters. The results are benchmarked against canonical and parallel tempering simulations.
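A minimal population-annealing loop looks as follows; the helpers `energy` and `mc_sweep` are hypothetical stand-ins for the model-specific parts (here they would evaluate the 2D Ising energy and perform Metropolis sweeps). Both the reweighting and the equilibration sweeps parallelize trivially over the replicas, which is the source of the method's parallel scaling.

```python
import numpy as np

def population_annealing(energy, mc_sweep, init_pop, betas, rng):
    """Sketch of population annealing over an inverse-temperature
    schedule `betas`: resample replicas with weights exp(-dbeta * E),
    then decorrelate each survivor with Monte Carlo sweeps."""
    pop = list(init_pop)                       # replicas (e.g. numpy spin arrays)
    R = len(pop)                               # target population size
    log_z_ratio = 0.0                          # accumulates ln(Z_new / Z_old),
    for b_old, b_new in zip(betas[:-1], betas[1:]):   # the free-energy estimator
        E = np.array([energy(s) for s in pop])
        w = np.exp(-(b_new - b_old) * E)       # reweighting factors
        log_z_ratio += np.log(w.mean())
        counts = rng.multinomial(R, w / w.sum())       # resample population
        pop = [s.copy() for s, c in zip(pop, counts) for _ in range(c)]
        pop = [mc_sweep(s, b_new) for s in pop]        # equilibrate at new beta
    return pop, log_z_ratio
```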