
Solving frustrated quantum many-particle models with convolutional neural networks

Added by Lixin He
Publication date: 2018
Fields: Physics
Language: English





Recently, there has been significant progress in solving quantum many-particle problems via machine learning based on the restricted Boltzmann machine. However, it remains highly challenging to solve frustrated models via machine learning, and this has not been demonstrated so far. In this work, we design a new convolutional neural network (CNN) to solve such quantum many-particle problems. We demonstrate, for the first time, the solution of the highly frustrated spin-1/2 J$_1$-J$_2$ antiferromagnetic Heisenberg model on square lattices via a CNN. The energy per site achieved by the CNN is even better than that of previous string-bond-state calculations. Our work therefore opens up a new route to solving challenging frustrated quantum many-particle problems using machine learning.
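As a rough, hedged sketch of the general idea (not the authors' actual architecture, whose layer structure is not given in the abstract), the snippet below parameterizes a translation-invariant log-amplitude for spin-1/2 configurations on an L x L square lattice using periodic convolutions; the lattice size, kernel size, filter count, and log-cosh nonlinearity are all assumptions for illustration.

```python
# Hypothetical sketch of a CNN variational wave function: psi(s) = exp(log_psi(s)),
# where log_psi is built from translation-invariant (periodic) convolutions of the
# spin configuration. All sizes and the nonlinearity are assumptions, not the paper's.
import numpy as np

rng = np.random.default_rng(0)
L, K, F = 6, 3, 4                               # lattice size, kernel size, filters (assumed)
kernels = 0.1 * rng.standard_normal((F, K, K))  # variational parameters

def log_psi(spins):
    """Log-amplitude of an L x L configuration of spins in {+1, -1}."""
    total = 0.0
    for w in kernels:
        # periodic convolution over the torus (respects lattice translations)
        conv = np.zeros((L, L))
        for dx in range(K):
            for dy in range(K):
                conv += w[dx, dy] * np.roll(np.roll(spins, -dx, axis=0), -dy, axis=1)
        total += np.sum(np.log(np.cosh(conv)))  # RBM-like nonlinearity (assumption)
    return total

spins = rng.choice([1, -1], size=(L, L))
print(log_psi(spins))
```

In a variational Monte Carlo calculation, such a log-amplitude would be sampled and its parameters optimized to minimize the energy of the J$_1$-J$_2$ Hamiltonian.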




Related research

Neural networks have been used as variational wave functions for quantum many-particle problems. It has been shown that the correct sign structure is crucial for obtaining highly accurate ground-state energies. In this work, we propose a hybrid wave function combining a convolutional neural network (CNN) and projected entangled pair states (PEPS), in which the sign structure is determined by the PEPS and the amplitudes of the wave function are provided by the CNN. We benchmark the ansatz on the highly frustrated spin-1/2 $J_1$-$J_2$ model. We show that the achieved ground-state energies are competitive with state-of-the-art results.
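A minimal illustration of the split ansatz psi(s) = sign(s) x amplitude(s) described above, using stand-ins for both pieces: the Marshall sign rule takes the place of the PEPS sign, and a single-filter periodic convolution takes the place of the CNN amplitude. Neither is the paper's actual component; the sketch only shows how the two factors combine.

```python
# Hedged sketch of a sign/amplitude split: psi(s) = sign(s) * amplitude(s).
# The Marshall sign rule and the single-filter convolution below are illustrative
# stand-ins, not the PEPS sign structure or the CNN amplitude of the paper.
import numpy as np

L = 4
rng = np.random.default_rng(1)
w = 0.1 * rng.standard_normal((3, 3))

def marshall_sign(spins):
    # (-1)^(number of up spins on one checkerboard sublattice)
    sub_a = (np.add.outer(np.arange(L), np.arange(L)) % 2 == 0)
    n_up_a = np.sum((spins[sub_a] + 1) // 2)
    return (-1) ** n_up_a

def amplitude(spins):
    conv = np.zeros((L, L))
    for dx in range(3):
        for dy in range(3):
            conv += w[dx, dy] * np.roll(np.roll(spins, -dx, axis=0), -dy, axis=1)
    return np.exp(np.sum(np.log(np.cosh(conv))))

def psi(spins):
    return marshall_sign(spins) * amplitude(spins)

spins = rng.choice([1, -1], size=(L, L))
print(psi(spins))
```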
Motivated by the recent success of tensor networks in calculating the residual entropy of spin ice and kagome Ising models, we develop a general framework to study frustrated Ising models in terms of infinite tensor networks, i.e., tensor networks that can be contracted using standard algorithms for infinite systems. This is achieved by reformulating the problem as local rules for configurations on overlapping clusters chosen in such a way that they relieve the frustration, i.e., that the energy can be minimized independently on each cluster. We show that optimizing the choice of clusters, including the weight on shared bonds, is crucial for the contractibility of the tensor networks, and we derive some basic rules and a linear program to implement them. We illustrate the power of the method by computing the residual entropy of a frustrated Ising spin system on the kagome lattice with next-next-nearest-neighbour interactions, vastly outperforming Monte Carlo methods in speed and accuracy. The extension to finite temperature is briefly discussed.
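For readers unfamiliar with residual entropy, a hedged toy example (brute-force enumeration, not the tensor-network method above): on an antiferromagnetic Ising triangle the three bonds cannot all be satisfied simultaneously, so the ground state is six-fold degenerate. This is the kind of extensive ground-state degeneracy the framework above counts on the kagome lattice.

```python
# Toy illustration of frustration and residual entropy: antiferromagnetic Ising
# spins on a single triangle cannot satisfy all three bonds at once, leaving a
# 6-fold degenerate ground state. Brute force only; not the tensor-network method.
from itertools import product

J = 1.0                            # antiferromagnetic coupling
bonds = [(0, 1), (1, 2), (2, 0)]   # the three bonds of one triangle

energies = {}
for spins in product([1, -1], repeat=3):
    E = sum(J * spins[i] * spins[j] for i, j in bonds)
    energies[spins] = E

E0 = min(energies.values())
ground_states = [s for s, E in energies.items() if E == E0]
print(E0, len(ground_states))  # -1.0 and 6: no configuration satisfies all bonds
```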
Lixin He, Hong An, Chao Yang (2018)
The study of strongly frustrated magnetic systems has drawn great attention from both theoretical and experimental physics. Efficient simulations of these models are essential for understanding their exotic properties. Here we present PEPS++, a novel computational paradigm for simulating frustrated magnetic systems and other strongly correlated quantum many-body systems. PEPS++ can accurately solve these models at the extreme scale with low cost and high scalability on modern heterogeneous supercomputers. We implement PEPS++ on Sunway TaihuLight based on a carefully designed tensor computation library for manipulating high-rank tensors, and optimize it by invoking various high-performance matrix and tensor operations. By solving a 2D strongly frustrated $J_1$-$J_2$ model with over ten million cores, PEPS++ demonstrates the capability of simulating strongly correlated quantum many-body problems at unprecedented scales, with accuracy and time-to-solution far beyond the previous state of the art.
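The abstract does not spell out the tensor library's internals; as an illustrative assumption, the core optimization such libraries rely on is to cast a high-rank tensor contraction as a single matrix multiplication so it can be dispatched to an optimized GEMM, as in this small numpy sketch (all shapes are arbitrary).

```python
# Illustrative only: rewriting a high-rank tensor contraction as one GEMM.
# Shapes are arbitrary assumptions; this is not code from PEPS++.
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 5, 6, 7))   # rank-4 tensor A[a, b, c, d]
B = rng.standard_normal((6, 7, 3, 2))   # rank-4 tensor B[c, d, e, f]

# Contract shared indices (c, d): C[a, b, e, f] = sum_{c,d} A[a,b,c,d] * B[c,d,e,f]
C_einsum = np.einsum('abcd,cdef->abef', A, B)

# Same contraction as a single matrix multiplication: fuse (a,b), (c,d), (e,f)
C_gemm = (A.reshape(4 * 5, 6 * 7) @ B.reshape(6 * 7, 3 * 2)).reshape(4, 5, 3, 2)

print(np.allclose(C_einsum, C_gemm))  # True
```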
Jiaxin Wu, Wenjuan Zhang (2019)
Solving for the ground states of quantum many-body systems has been a long-standing problem in condensed matter physics. Here, we propose a new unsupervised machine learning algorithm to find the ground state of a general quantum many-body system utilizing the benefits of artificial neural networks. Without assuming specific forms of the eigenvectors, this algorithm can find the eigenvectors in an unbiased way with well-controlled accuracy. As examples, we apply this algorithm to 1D Ising and Heisenberg models, where the results match exact diagonalization very well.
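Since the benchmark above is exact diagonalization, here is a minimal reference implementation of that baseline for a short spin-1/2 Heisenberg chain; the chain length and open boundary conditions are arbitrary choices for illustration, not taken from the paper.

```python
# Minimal exact-diagonalization baseline: ground-state energy of a short
# spin-1/2 Heisenberg chain with open boundaries (sizes are assumptions).
import numpy as np

N = 8
sx = np.array([[0, 1], [1, 0]]) / 2
sy = np.array([[0, -1j], [1j, 0]]) / 2
sz = np.array([[1, 0], [0, -1]]) / 2
I2 = np.eye(2)

def site_op(op, i):
    """Embed a single-site operator at site i of an N-site chain."""
    mats = [I2] * N
    mats[i] = op
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

H = np.zeros((2**N, 2**N), dtype=complex)
for i in range(N - 1):                     # nearest-neighbour Heisenberg couplings
    for op in (sx, sy, sz):
        H += site_op(op, i) @ site_op(op, i + 1)

E0 = np.linalg.eigvalsh(H).min()
print(E0.real)  # exact ground-state energy, the number a learned state is compared against
```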
Di Luo, Zhuo Chen, Kaiwen Hu (2021)
Gauge invariance plays a crucial role in quantum mechanics, from condensed matter physics to high energy physics. We develop an approach to constructing gauge-invariant autoregressive neural networks for quantum lattice models. These networks can be efficiently sampled and explicitly obey gauge symmetries. We variationally optimize our gauge-invariant autoregressive neural networks for ground states as well as real-time dynamics for a variety of models. We exactly represent the ground and excited states of the 2D and 3D toric codes and the X-cube fracton model. We simulate the dynamics of the quantum link model of $\text{U(1)}$ lattice gauge theory, obtain the phase diagram for the 2D $\mathbb{Z}_2$ gauge theory, determine the phase transition and the central charge of the $\text{SU(2)}_3$ anyonic chain, and also compute the ground-state energy of the $\text{SU(2)}$-invariant Heisenberg spin chain. Our approach provides powerful tools for exploring condensed matter physics, high energy physics and quantum information science.
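A hedged toy sketch of the autoregressive property the abstract relies on: the joint probability factorizes as p(s) = p(s_1) p(s_2|s_1) ... p(s_N|s_1..s_{N-1}), so configurations can be drawn exactly, one site at a time, together with their exact probabilities. The linear conditional model below is an assumption for illustration; it is neither gauge invariant nor the paper's network.

```python
# Toy autoregressive sampler: exact sampling and exact log-probability via the
# chain-rule factorization. The linear/logistic conditionals are assumptions.
import numpy as np

rng = np.random.default_rng(3)
N = 10
W = 0.3 * rng.standard_normal((N, N))   # W[i, j] used only for j < i (causal structure)
b = 0.1 * rng.standard_normal(N)

def sample():
    s = np.zeros(N)
    log_p = 0.0
    for i in range(N):
        logit = b[i] + W[i, :i] @ s[:i]        # conditional depends on earlier sites only
        p_up = 1.0 / (1.0 + np.exp(-logit))
        s[i] = 1.0 if rng.random() < p_up else -1.0
        log_p += np.log(p_up if s[i] > 0 else 1.0 - p_up)
    return s, log_p                             # exact sample and its exact log-probability

spins, log_prob = sample()
print(spins, log_prob)
```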
