Neural networks have been used as variational wave functions for quantum many-particle problems. It has been shown that the correct sign structure is crucial for obtaining highly accurate ground-state energies. In this work, we propose a hybrid wave function combining a convolutional neural network (CNN) and projected entangled pair states (PEPS), in which the sign structure is determined by the PEPS and the amplitude of the wave function is provided by the CNN. We benchmark the ansatz on the highly frustrated spin-1/2 $J_1$-$J_2$ model and show that the achieved ground-state energies are competitive with state-of-the-art results.
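The sign/amplitude factorization described above can be sketched in a few lines. This is an illustrative toy, not the authors' implementation: `sign_from_peps` stands in for the contracted PEPS sign structure (here replaced by the Marshall sign rule, a common reference sign for bipartite lattices), and `cnn_amplitude` is a minimal CNN-like positive amplitude; all names are assumptions.

```python
import numpy as np

def sign_from_peps(spins):
    # Placeholder for the PEPS sign structure: the Marshall sign rule,
    # (-1)^(number of up spins on sublattice A), on a 1D toy chain.
    n_up_a = np.sum(spins[::2] == 1)
    return (-1.0) ** n_up_a

def cnn_amplitude(spins, weights):
    # Minimal stand-in for a CNN: one periodic convolution followed by a
    # log-cosh nonlinearity and sum-pooling, returning a positive |psi|.
    conv = np.array([np.dot(np.roll(spins, -k), weights)
                     for k in range(len(spins))])
    return np.exp(np.sum(np.log(np.cosh(conv))))

def psi(spins, weights):
    # Hybrid ansatz: sign from the PEPS part, magnitude from the CNN part.
    return sign_from_peps(spins) * cnn_amplitude(spins, weights)

rng = np.random.default_rng(0)
spins = rng.choice([-1, 1], size=8)
w = 0.1 * rng.standard_normal(8)
print(psi(spins, w))
```

In a real calculation the two parts would be optimized jointly by variational Monte Carlo; here the point is only that the wave function factorizes into a fixed-form sign and a learned positive amplitude.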
Variational methods have proven to be excellent tools for approximating ground states of complex many-body Hamiltonians. Generic tools like neural networks are extremely powerful, but their parameters are not necessarily physically motivated, so an efficient parametrization of the wave function can become challenging. In this letter we introduce a neural-network-based variational ansatz that retains the flexibility of these generic methods while allowing for tunability with respect to the relevant correlations governing the physics of the system. We illustrate the success of this approach on topological, long-range-correlated, and frustrated models. Additionally, we introduce compatible variational optimization methods for the exploration of low-lying excited states without symmetries that preserve the interpretability of the ansatz.
Neural-network quantum states have shown great potential for the study of many-body quantum systems. In statistical machine learning, transfer learning designates protocols that reuse features of a model trained on one problem to solve a possibly related but different problem. We propose to evaluate the potential of transfer learning to improve the scalability of neural-network quantum states. We devise and present physics-inspired transfer learning protocols that reuse the features of neural-network quantum states learned in the computation of the ground state of a small system for systems of larger sizes. We implement different protocols for restricted Boltzmann machines on general-purpose graphics processing units. This implementation alone yields a speedup over existing implementations on multi-core and distributed central processing units in comparable settings. We empirically and comparatively evaluate the efficiency (time) and effectiveness (accuracy) of the different transfer learning protocols as we scale the system size in different models and different quantum phases. Namely, we consider the transverse-field Ising and Heisenberg XXZ models in one dimension, and also in two dimensions for the latter, with system sizes up to 128 and $8 \times 8$ spins, respectively. We empirically demonstrate that some of the transfer learning protocols we have devised can be far more effective and efficient than starting from neural-network quantum states with randomly initialized parameters.
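One physics-inspired protocol of the kind described above can be sketched as tiling the RBM parameters learned on a small chain to initialize a larger one. The function name and the block-diagonal tiling scheme are illustrative assumptions, not the authors' exact protocols.

```python
import numpy as np

def tile_rbm_params(a_small, b_small, W_small, factor):
    """Initialize a larger 1D RBM by replicating small-system parameters
    `factor` times along the chain (illustrative transfer-learning sketch)."""
    a_large = np.tile(a_small, factor)   # visible biases, repeated per copy
    b_large = np.tile(b_small, factor)   # hidden biases, repeated per copy
    # Block-diagonal tiling of the weight matrix: each copy of the small
    # system keeps its learned visible-hidden couplings, with no initial
    # couplings between copies (those are learned during fine-tuning).
    n_v, n_h = W_small.shape
    W_large = np.zeros((n_v * factor, n_h * factor))
    for k in range(factor):
        W_large[k * n_v:(k + 1) * n_v, k * n_h:(k + 1) * n_h] = W_small
    return a_large, b_large, W_large

# Example: transfer an 8-spin RBM (16 hidden units) to a 32-spin chain.
rng = np.random.default_rng(1)
a, b, W = rng.normal(size=8), rng.normal(size=16), rng.normal(size=(8, 16))
a32, b32, W32 = tile_rbm_params(a, b, W, factor=4)
print(W32.shape)  # (32, 64)
```

The tiled parameters then serve as the starting point for variational optimization at the larger size, in place of a random initialization.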
Recently, there has been significant progress in solving quantum many-particle problems via machine learning based on the restricted Boltzmann machine. However, solving frustrated models via machine learning remains highly challenging and had not been demonstrated before this work. Here we design a new convolutional neural network (CNN) to solve such quantum many-particle problems. We demonstrate, for the first time, the solution of the highly frustrated spin-1/2 $J_1$-$J_2$ antiferromagnetic Heisenberg model on square lattices via a CNN. The energy per site achieved by the CNN is even better than that of previous string-bond-state calculations. Our work therefore opens up a new route to solving challenging frustrated quantum many-particle problems using machine learning.
Pursuing fractionalized particles that do not bear the properties of conventional measurable objects, exemplified by bare particles in the vacuum such as electrons and by elementary excitations such as magnons, is a challenge in physics. Here we show that a machine-learning method for quantum many-body systems that has achieved state-of-the-art accuracy convincingly reveals the existence of a quantum spin liquid (QSL) phase in the region $0.49 \lesssim J_2/J_1 \lesssim 0.54$ of the spin-1/2 frustrated Heisenberg model with nearest- and next-nearest-neighbor exchanges, $J_1$ and $J_2$, respectively, on the square lattice. This is achieved by combining the method with the cutting-edge computational schemes known as the correlation-ratio and level-spectroscopy methods to mitigate finite-size effects. The quantitative one-to-one correspondence between the correlations in the ground state and the excitation spectra enables reliable identification and estimation of the QSL and its nature. The spin excitation spectra, containing both singlet and triplet gapless Dirac-like dispersions, signal the emergence of gapless fractionalized spin-1/2 Dirac-type spinons in the distinctive QSL phase. Unexplored critical behavior with coexisting and dual power-law decays of N\'{e}el antiferromagnetic and dimer correlations is revealed. The power-law decay exponents of the two correlations vary differently with $J_2/J_1$ in the QSL phase and thus take different values except at a single point satisfying the symmetry of the two correlations. The isomorphism of the excitations with those of the cuprate $d$-wave superconductors implies a tight connection between the present QSL and superconductivity. This achievement demonstrates that quantum-state representation using machine-learning techniques, which had mostly been limited to benchmarks, is a promising tool for investigating grand challenges in quantum many-body physics.
Neural-network quantum states (NQS) have been shown to be a suitable variational ansatz for simulating out-of-equilibrium dynamics in two-dimensional systems using time-dependent variational Monte Carlo (t-VMC). In particular, stable and accurate time propagation over long time scales has been observed in the square-lattice Heisenberg model using the restricted Boltzmann machine architecture. However, achieving similar performance in other systems has proven more challenging. In this article, we focus on the two-leg Heisenberg ladder driven out of equilibrium by a pulsed excitation as a benchmark system. We demonstrate that unmitigated sampling noise is strongly amplified by the nonlinear equations of motion for the network parameters, which by itself is sufficient to cause numerical instabilities in the time evolution. As a consequence, the achievable accuracy of the simulated dynamics results from the interplay between network expressiveness and the regularization required to remedy these instabilities. Inspired by machine-learning practice, we propose a validation-set-based diagnostic tool to help determine the optimal regularization hyperparameters for t-VMC-based propagation schemes. For our benchmark, we show that stable and accurate time propagation can be achieved in regimes of sufficiently regularized variational dynamics.
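The validation-set idea described above can be sketched as follows. In t-VMC, the parameter derivatives solve a linear system built from Monte Carlo estimates of the quantum geometric tensor $S$ and force vector $F$; here we split the samples into a "training" and a "validation" estimate, solve the Tikhonov-regularized system on the training half, and score the residual on the held-out half. This is a hedged, real-valued toy with illustrative names (`tdvp_solve`, `select_eps`), not the authors' actual diagnostic.

```python
import numpy as np

def tdvp_solve(S, F, eps):
    # Tikhonov-regularized solve of S @ theta_dot = F (real toy version of
    # the t-VMC equations of motion).
    return np.linalg.solve(S + eps * np.eye(S.shape[0]), F)

def validation_residual(S_val, F_val, theta_dot):
    # Relative residual of the training-split solution evaluated on the
    # held-out estimate of the equations of motion.
    return np.linalg.norm(S_val @ theta_dot - F_val) / np.linalg.norm(F_val)

def select_eps(S_tr, F_tr, S_val, F_val, eps_grid):
    # Pick the regularization strength minimizing the validation residual.
    scores = [validation_residual(S_val, F_val, tdvp_solve(S_tr, F_tr, e))
              for e in eps_grid]
    return eps_grid[int(np.argmin(scores))]

# Toy data: two independently noisy estimates of the same underlying (S, F).
rng = np.random.default_rng(2)
A = rng.normal(size=(6, 6))
S_true, F_true = A @ A.T, rng.normal(size=6)
S_tr = S_true + 0.05 * rng.normal(size=(6, 6))
S_val = S_true + 0.05 * rng.normal(size=(6, 6))
F_tr = F_true + 0.05 * rng.normal(size=6)
F_val = F_true + 0.05 * rng.normal(size=6)
best = select_eps(S_tr, F_tr, S_val, F_val, np.logspace(-6, 0, 13))
print(best)
```

Too little regularization overfits the sampling noise in the training estimate (large validation residual), while too much biases the dynamics; the validation curve exposes this trade-off directly.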