We introduce reinforcement learning (RL) formulations of the problem of finding the ground state of a many-body quantum mechanical model defined on a lattice. We show that stoquastic Hamiltonians (those without a sign problem) have a natural decomposition into stochastic dynamics and a potential representing a reward function. The mapping to RL is developed for both continuous and discrete time, based on a generalized Feynman-Kac formula in the former case and a stochastic representation of the Schrödinger equation in the latter. We discuss the application of this mapping to the neural representation of quantum states, spelling out the advantages over approaches based on a direct representation of the system's wavefunction.
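To make the decomposition described above concrete, here is a standard construction for a stoquastic lattice Hamiltonian, written in illustrative notation of our own (the symbols $G$, $V$, $X_s$ and the toy model below are not taken from the text). Stoquasticity means $H_{xy} \le 0$ for all $x \neq y$ in the chosen basis, so the off-diagonal part can be read as (minus) the generator of a continuous-time Markov chain:

$$
G_{xy} = -H_{xy} \quad (x \neq y), \qquad
G_{xx} = -\sum_{y \neq x} G_{xy}, \qquad
V(x) = H_{xx} + \sum_{y \neq x} H_{xy},
$$

so that $H = -G + V$ with $G_{xy} \ge 0$ off the diagonal and the rows of $G$ summing to zero. The Feynman-Kac formula then expresses imaginary-time evolution as an average over trajectories $X_s$ of the chain generated by $G$:

$$
\big(e^{-tH}\psi\big)(x)
= \mathbb{E}_x\!\left[\exp\!\left(-\int_0^t V(X_s)\,\mathrm{d}s\right)\psi(X_t)\right],
$$

which is what lets $-V$ play the role of a reward accumulated along stochastic trajectories.

As a minimal numerical sketch of this mapping (not the method of the text; the model, parameters and function names are our own choices), the following Python snippet applies the formula to a small open transverse-field Ising chain, $H = -J\sum_i \sigma^z_i\sigma^z_{i+1} - \Gamma\sum_i \sigma^x_i$, which is stoquastic in the computational basis, and compares the resulting ground-state energy estimate with exact diagonalization:

```python
# Toy Feynman-Kac estimate of the ground-state energy of a small
# transverse-field Ising chain (illustrative sketch only; model and
# parameter choices are ours, not taken from the abstract above).
import numpy as np

rng = np.random.default_rng(0)
n_spins, J, GAMMA = 4, 1.0, 1.0
T, n_walkers = 4.0, 50_000        # imaginary-time horizon, number of samples

def diagonal_energy(s):
    """Classical Ising part -J * sum_i s_i s_{i+1} for spins s_i = +-1 (open chain)."""
    return -J * np.sum(s[:-1] * s[1:])

def fk_weight():
    """Simulate one walker of the jump chain generated by G and return
    exp(-int_0^T V(X_s) ds), with V(x) = H_xx + sum_{y != x} H_xy."""
    s = rng.choice([-1, 1], size=n_spins)              # uniform initial configuration
    t, integral = 0.0, 0.0
    while True:
        dt = rng.exponential(1.0 / (n_spins * GAMMA))  # total jump rate is n_spins * GAMMA
        step = min(dt, T - t)
        integral += (diagonal_energy(s) - n_spins * GAMMA) * step
        t += step
        if t >= T:
            return np.exp(-integral)
        s[rng.integers(n_spins)] *= -1                 # jump: flip one uniformly chosen spin

def build_hamiltonian():
    """Dense H for exact diagonalization (fine for this tiny chain)."""
    dim = 2 ** n_spins
    H = np.zeros((dim, dim))
    for x in range(dim):
        s = np.array([1 if (x >> i) & 1 else -1 for i in range(n_spins)])
        H[x, x] = diagonal_energy(s)
        for i in range(n_spins):
            H[x, x ^ (1 << i)] = -GAMMA                # single spin flip, amplitude -Gamma
    return H

weights = np.array([fk_weight() for _ in range(n_walkers)])
E0_fk = -np.log(weights.mean()) / T                    # crude estimator, biased for finite T
E0_exact = np.linalg.eigvalsh(build_hamiltonian())[0]
print(f"Feynman-Kac estimate: {E0_fk:.3f}   exact: {E0_exact:.3f}")
```

The bare average over unguided walkers has a variance that grows quickly with system size and imaginary time, which is one practical reason to bring in learned (for example neural) representations and importance sampling rather than rely on the raw Feynman-Kac estimator.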