Quantum imaginary time evolution steered by reinforcement learning


Abstract

Quantum imaginary time evolution is a powerful algorithm for preparing ground states and thermal states on near-term quantum devices. However, algorithmic errors induced by Trotterization and local approximation severely hinder its performance. Here we propose a deep-reinforcement-learning-based method to steer the evolution and mitigate these errors. In our scheme, the well-trained agent finds a subtle evolution path along which most algorithmic errors cancel out, significantly enhancing the fidelity of the recovered state. We verified the validity of the method on the transverse-field Ising model and the graph maximum-cut problem. Numerical calculations and experiments on a nuclear magnetic resonance quantum computer illustrate its efficacy. The philosophy of our method, eliminating errors with errors, sheds new light on error reduction on near-term quantum devices.
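For context, the sketch below (not the paper's implementation) shows the exact imaginary-time evolution that QITE approximates with Trotterized, locally approximated steps: repeatedly apply the non-unitary propagator e^{-tau H} to a state and renormalize, which drives the state toward the ground state of H. The transverse-field Ising chain and all parameters (n_qubits, J, h, tau, n_steps) are illustrative assumptions.

```python
import numpy as np

# Hypothetical problem parameters for illustration only
n_qubits, J, h = 4, 1.0, 1.0
tau, n_steps = 0.1, 50

# Single-qubit Pauli matrices
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def kron_chain(ops):
    """Tensor product of a list of single-qubit operators."""
    out = np.array([[1.0 + 0j]])
    for op in ops:
        out = np.kron(out, op)
    return out

# Transverse-field Ising Hamiltonian: H = -J sum_i Z_i Z_{i+1} - h sum_i X_i
dim = 2 ** n_qubits
H = np.zeros((dim, dim), dtype=complex)
for i in range(n_qubits - 1):
    ops = [I2] * n_qubits
    ops[i], ops[i + 1] = Z, Z
    H -= J * kron_chain(ops)
for i in range(n_qubits):
    ops = [I2] * n_qubits
    ops[i] = X
    H -= h * kron_chain(ops)

# Exact imaginary-time propagator for one small step, exp(-tau * H)
eigvals, eigvecs = np.linalg.eigh(H)
U_tau = eigvecs @ np.diag(np.exp(-tau * eigvals)) @ eigvecs.conj().T

# Start from the uniform superposition and evolve, renormalizing each step
psi = np.ones(dim, dtype=complex) / np.sqrt(dim)
for _ in range(n_steps):
    psi = U_tau @ psi
    psi /= np.linalg.norm(psi)

print("exact ground energy:      ", eigvals[0])
print("imaginary-time estimate:  ", (psi.conj() @ H @ psi).real)
```

On hardware, each e^{-tau H} step must be replaced by Trotterized unitaries acting on local domains; the resulting algorithmic errors are what the reinforcement-learning agent in the paper is trained to steer around.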
