
Efficient Tensor Network ansatz for high-dimensional quantum many-body problems

Published by: Simone Notarnicola
Publication date: 2020
Research field: Physics
Paper language: English





We introduce a novel tensor network structure augmenting the well-established Tree Tensor Network representation of a quantum many-body wave function. The new structure satisfies the area law in high dimensions while remaining efficiently manipulable and scalable. We benchmark this novel approach against paradigmatic two-dimensional spin models, demonstrating unprecedented precision and system sizes. Finally, we compute the ground-state phase diagram of two-dimensional lattice Rydberg atoms in optical tweezers, observing non-trivial phases and quantum phase transitions and providing realistic benchmarks for current and future two-dimensional quantum simulations.
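To make the Tree Tensor Network idea concrete, the following is a minimal NumPy sketch of a binary TTN for four spin-1/2 sites: two leaf tensors fuse pairs of physical indices into virtual bonds of dimension `chi`, and a top tensor joins the bonds. All tensor names and the bond dimension are illustrative, not the paper's code.

```python
import numpy as np

# Minimal binary Tree Tensor Network (TTN) for 4 spins-1/2 (illustrative).
# Leaf tensors map two physical indices (dim 2 each) to a bond of dim chi;
# the top tensor fuses the two bonds into a wave-function amplitude.
rng = np.random.default_rng(0)
chi = 4                                    # bond dimension (truncation knob)
left = rng.standard_normal((2, 2, chi))    # indices: (s1, s2, bond_l)
right = rng.standard_normal((2, 2, chi))   # indices: (s3, s4, bond_r)
top = rng.standard_normal((chi, chi))      # indices: (bond_l, bond_r)

# Contract the network into the full 2^4 amplitude tensor psi[s1,s2,s3,s4].
psi = np.einsum('abi,cdj,ij->abcd', left, right, top)
psi /= np.linalg.norm(psi)                 # normalize the many-body state

# The TTN stores 2*(2*2*chi) + chi**2 parameters instead of 2**4 amplitudes;
# for large systems that gap becomes exponential.
print(psi.shape)  # (2, 2, 2, 2)
```

For a real simulation the leaf and top tensors are variationally optimized (e.g. by sweeping), and the tree is deepened layer by layer as the system grows.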


Read also

We study the problem of learning the Hamiltonian of a quantum many-body system given samples from its Gibbs (thermal) state. The classical analog of this problem, known as learning graphical models or Boltzmann machines, is a well-studied question in machine learning and statistics. In this work, we give the first sample-efficient algorithm for the quantum Hamiltonian learning problem. In particular, we prove that polynomially many samples in the number of particles (qudits) are necessary and sufficient for learning the parameters of a spatially local Hamiltonian in l_2-norm. Our main contribution is in establishing the strong convexity of the log-partition function of quantum many-body systems, which along with the maximum entropy estimation yields our sample-efficient algorithm. Classically, the strong convexity for partition functions follows from the Markov property of Gibbs distributions. This is, however, known to be violated in its exact form in the quantum case. We introduce several new ideas to obtain an unconditional result that avoids relying on the Markov property of quantum systems, at the cost of a slightly weaker bound. In particular, we prove a lower bound on the variance of quasi-local operators with respect to the Gibbs state, which might be of independent interest. Our work paves the way toward a more rigorous application of machine learning techniques to quantum many-body problems.
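The maximum-entropy estimation described above can be sketched in a single-qubit toy problem: given the measured Pauli expectations of a Gibbs state, gradient descent on the (convex) log-partition dual recovers the Hamiltonian parameters. The two-parameter Hamiltonian and the step size are hypothetical choices for illustration, not the paper's algorithm at scale.

```python
import numpy as np

# Single-qubit sketch of maximum-entropy Hamiltonian learning (illustrative).
# H(theta) = theta[0]*Z + theta[1]*X; rho(theta) = exp(-H)/Tr exp(-H).
X = np.array([[0., 1.], [1., 0.]])
Z = np.diag([1., -1.])

def gibbs(theta):
    Hm = theta[0] * Z + theta[1] * X
    w, V = np.linalg.eigh(Hm)
    e = np.exp(-w + w.min())            # numerically stable exp(-H)
    rho = (V * e) @ V.conj().T
    return rho / np.trace(rho)

def moments(theta):
    rho = gibbs(theta)
    return np.array([np.trace(rho @ Z).real, np.trace(rho @ X).real])

true_theta = np.array([0.7, -0.3])
target = moments(true_theta)            # "measured" expectation values

# Gradient descent on the convex dual F(theta) = log Z(theta) + theta . target;
# at the optimum the model moments match the measured ones exactly.
theta = np.zeros(2)
for _ in range(5000):
    theta -= 0.5 * (target - moments(theta))
print(np.round(theta, 3))               # close to [0.7, -0.3]
```

The strong convexity result in the abstract is what guarantees that matching moments pins down the parameters with polynomially many samples.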
Classical machine learning (ML) provides a potentially powerful approach to solving challenging quantum many-body problems in physics and chemistry. However, the advantages of ML over more traditional methods have not been firmly established. In this work, we prove that classical ML algorithms can efficiently predict ground state properties of gapped Hamiltonians in finite spatial dimensions, after learning from data obtained by measuring other Hamiltonians in the same quantum phase of matter. In contrast, under widely accepted complexity theory assumptions, classical algorithms that do not learn from data cannot achieve the same guarantee. We also prove that classical ML algorithms can efficiently classify a wide range of quantum phases of matter. Our arguments are based on the concept of a classical shadow, a succinct classical description of a many-body quantum state that can be constructed in feasible quantum experiments and be used to predict many properties of the state. Extensive numerical experiments corroborate our theoretical results in a variety of scenarios, including Rydberg atom systems, 2D random Heisenberg models, symmetry-protected topological phases, and topologically ordered phases.
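The classical-shadow construction mentioned above can be illustrated for a single qubit measured in a random Pauli basis: each shot yields a snapshot `3 U† |b><b| U - I`, and averaging snapshots gives an unbiased state estimate usable for many observables at once. The sample count and target state here are arbitrary choices for the sketch.

```python
import numpy as np

# Single-qubit classical-shadow sketch (random Pauli measurements, illustrative).
I = np.eye(2)
Z = np.diag([1., -1.]).astype(complex)
H = np.array([[1., 1.], [1., -1.]], complex) / np.sqrt(2)   # rotate to X basis
S = np.diag([1., -1j])
rots = [H, H @ S.conj().T, I]           # measure X, Y, or Z basis

rho = np.array([[1., 0.], [0., 0.]], complex)   # true state |0><0|
rng = np.random.default_rng(1)
snapshots = []
for _ in range(5000):
    U = rots[rng.integers(3)]                    # random basis choice
    probs = np.real(np.diag(U @ rho @ U.conj().T))
    b = rng.choice(2, p=probs / probs.sum())     # simulated measurement outcome
    ket = np.zeros(2, complex); ket[b] = 1.0
    # Inverse of the single-qubit measurement channel:
    snapshots.append(3 * U.conj().T @ np.outer(ket, ket) @ U - I)

rho_hat = np.mean(snapshots, axis=0)
print(np.real(np.trace(Z @ rho_hat)))   # close to <Z> = 1 for |0>
```

The same snapshots can be reused to estimate any local observable, which is what makes shadows an efficient classical description for the ML pipelines described above.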
We propose a tensor network encoding the set of all eigenstates of a fully many-body localized system in one dimension. Our construction, conceptually based on the ansatz introduced in Phys. Rev. B 94, 041116(R) (2016), is built from two layers of unitary matrices which act on blocks of $\ell$ contiguous sites. We argue this yields an exponential reduction in computational time and memory requirement as compared to all previous approaches for finding a representation of the complete eigenspectrum of large many-body localized systems with a given accuracy. Concretely, we optimize the unitaries by minimizing the magnitude of the commutator of the approximate integrals of motion and the Hamiltonian, which can be done in a local fashion. This further reduces the computational complexity of the tensor networks arising in the minimization process compared to previous work. We test the accuracy of our method by comparing the approximate energy spectrum to exact diagonalization results for the random field Heisenberg model on 16 sites. We find that the technique is highly accurate deep in the localized regime and maintains a surprising degree of accuracy in predicting certain local quantities even in the vicinity of the predicted dynamical phase transition. To demonstrate the power of our technique, we study a system of 72 sites and we are able to see clear signatures of the phase transition. Our work opens a new avenue to study properties of the many-body localization transition in large systems.
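The commutator cost function used to optimize the unitaries can be written down directly in a minimal two-site example: a candidate integral of motion $\tau = U (Z \otimes I) U^\dagger$ commutes with the Hamiltonian exactly when $U$ is the eigenbasis. The Hamiltonian and cost below are hypothetical, for illustration only.

```python
import numpy as np

# Commutator cost for approximate integrals of motion (2-site toy example).
Z = np.diag([1., -1.])
X = np.array([[0., 1.], [1., 0.]])
I2 = np.eye(2)
kron = np.kron

# Random-field Heisenberg-like 2-site Hamiltonian: XX + ZZ + on-site fields.
rng = np.random.default_rng(2)
h = rng.standard_normal(2)
H = kron(X, X) + kron(Z, Z) + h[0] * kron(Z, I2) + h[1] * kron(I2, Z)

def cost(U):
    """Frobenius norm of [H, tau] with tau = U (Z x I) U^dagger."""
    tau = U @ kron(Z, I2) @ U.conj().T
    return np.linalg.norm(H @ tau - tau @ H)

# A random orthogonal U gives a large commutator ...
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)))
# ... while the exact eigenbasis of H makes tau an exact integral of motion.
_, V = np.linalg.eigh(H)
print(cost(Q) > 1e-6, cost(V) < 1e-8)   # True True
```

In the paper's setting the unitaries act only on blocks of $\ell$ contiguous sites, so this cost can be evaluated and minimized locally instead of diagonalizing the full Hamiltonian.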
Subsystems of strongly disordered, interacting quantum systems can fail to thermalize because of the phenomenon of many-body localization (MBL). In this article, we explore a tensor network description of the eigenspectra of such systems. Specifically, we will argue that the presence of a complete set of local integrals of motion in MBL implies an efficient representation of the entire spectrum of energy eigenstates with a single tensor network, a \emph{spectral} tensor network. Our results are rigorous for a class of idealized systems related to MBL with integrals of motion of finite support. In one spatial dimension, the spectral tensor network allows for the efficient computation of expectation values of a large class of operators (including local operators and string operators) in individual energy eigenstates and in ensembles.
This review paper describes the basic concept and technical details of sparse modeling and its applications to quantum many-body problems. Sparse modeling refers to methodologies for finding a small number of relevant parameters that well explain a given dataset. This concept reminds us of physics, where the goal is to find a small number of physical laws that are hidden behind complicated phenomena. Sparse modeling extends the target of physics from natural phenomena to data, and may be interpreted as physics for data. The first half of this review introduces sparse modeling for physicists. It is assumed that readers have a physics background but no expertise in data science. The second half reviews applications. The Matsubara Green's function, which plays a central role in descriptions of correlated systems, has been found to be sparse, meaning that it contains little information. This leads to (i) a new method for solving the ill-conditioned inverse problem of analytical continuation, and (ii) a highly compact representation of the Matsubara Green's function, which enables efficient calculations for quantum many-body systems.
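The core of sparse modeling, finding a few relevant parameters that explain a dataset, is captured by L1-regularized least squares. The following sketch uses ISTA (iterative soft-thresholding) to recover a 3-sparse coefficient vector from noisy linear measurements; the problem sizes and regularization strength are arbitrary choices for the example, not from the review.

```python
import numpy as np

# Sparse recovery by L1-regularized least squares (ISTA, illustrative).
rng = np.random.default_rng(3)
n, p = 100, 50
A = rng.standard_normal((n, p)) / np.sqrt(n)         # sensing matrix
x_true = np.zeros(p)
x_true[[3, 17, 41]] = [2.0, -1.5, 1.0]               # only 3 relevant params
y = A @ x_true + 0.01 * rng.standard_normal(n)       # noisy data

lam = 0.05                                            # L1 penalty strength
L = np.linalg.norm(A, 2) ** 2                         # Lipschitz constant of grad
x = np.zeros(p)
for _ in range(2000):
    g = x - (A.T @ (A @ x - y)) / L                   # gradient step
    x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft-threshold

support = np.flatnonzero(np.abs(x) > 0.1)
print(support)    # the few relevant parameters are recovered
```

The analytical-continuation application in the review follows the same pattern: the Matsubara Green's function is expanded in a basis where it is sparse, and an L1-type penalty selects the few coefficients that matter, regularizing the ill-conditioned inverse problem.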