We review the development of generative modeling techniques in machine learning for the purpose of reconstructing real, noisy, many-qubit quantum states. Motivated by its interpretability and utility, we discuss in detail the theory of the restricted Boltzmann machine. We demonstrate its practical use for state reconstruction, starting from a classical thermal distribution of Ising spins, then moving systematically through increasingly complex pure and mixed quantum states. With a view toward experimental noisy intermediate-scale quantum (NISQ) devices, we review recent efforts to reconstruct a cold-atom wavefunction. Finally, we discuss the outlook for future experimental state reconstruction using machine learning, in the NISQ era and beyond.
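To make the restricted Boltzmann machine discussion concrete, the sketch below trains a minimal RBM with one-step contrastive divergence (CD-1) on binary spin configurations, the generative building block the review develops. The network size, learning rate, and toy data are illustrative assumptions, not the review's actual code.

```python
# A minimal restricted Boltzmann machine trained with one-step contrastive
# divergence (CD-1) on binary spin data. Hyperparameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    def __init__(self, n_visible, n_hidden):
        self.W = 0.01 * rng.standard_normal((n_visible, n_hidden))
        self.b = np.zeros(n_visible)   # visible bias
        self.c = np.zeros(n_hidden)    # hidden bias

    def sample_hidden(self, v):
        p = sigmoid(v @ self.W + self.c)
        return p, (rng.random(p.shape) < p).astype(float)

    def sample_visible(self, h):
        p = sigmoid(h @ self.W.T + self.b)
        return p, (rng.random(p.shape) < p).astype(float)

    def cd1_step(self, v_data, lr=0.05):
        ph_data, h = self.sample_hidden(v_data)
        pv_model, v_model = self.sample_visible(h)
        ph_model, _ = self.sample_hidden(v_model)
        # Log-likelihood gradient, approximated by a single Gibbs step.
        self.W += lr * (v_data.T @ ph_data - v_model.T @ ph_model) / len(v_data)
        self.b += lr * (v_data - v_model).mean(axis=0)
        self.c += lr * (ph_data - ph_model).mean(axis=0)

# Toy data: samples from a strongly correlated two-mode distribution,
# standing in for measured spin configurations.
data = np.repeat(np.array([[1.0] * 8, [0.0] * 8]), 100, axis=0)
rbm = RBM(n_visible=8, n_hidden=4)
for epoch in range(500):
    rbm.cd1_step(data)
```

Once trained, alternating Gibbs sampling between the visible and hidden layers generates new configurations from the learned distribution, which is the mechanism exploited for state reconstruction.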
The classification of big data usually requires a mapping onto new data clusters, which can then be processed by machine learning algorithms by means of more efficient and feasible linear separators. Recently, Lloyd et al. proposed embedding classical data into quantum states: these live in a larger Hilbert space, where they can be split into linearly separable clusters. Here, we implement these ideas by engineering two different experimental platforms, based on quantum optics and ultracold atoms respectively, where we adapt and numerically optimize the quantum embedding protocol by deep learning methods and test it on trial classical data. We also perform a similar analysis on the Rigetti superconducting quantum computer. We find that the quantum embedding approach works at the experimental level as well and, in particular, we show how different platforms could work in a complementary fashion to achieve this task. These studies may pave the way for future investigations of quantum machine learning techniques, especially those based on hybrid quantum technologies.
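As a toy illustration of the embedding idea (not the protocol run on the optical, cold-atom, or Rigetti platforms), the sketch below encodes classical scalars into one-qubit states through a trainable feature map and tunes a single parameter so the two classes become nearly orthogonal, i.e., linearly separable in Hilbert space. The feature map, loss, and grid-search optimizer are all simplifying assumptions standing in for the deep-learning optimization.

```python
# Toy quantum embedding: map each classical scalar x to a one-qubit state
# via a trainable rotation, then tune the rotation so that intra-class
# overlaps are large and inter-class overlaps are small. All choices here
# are illustrative assumptions.
import numpy as np

def embed(x, theta):
    """Map scalar x to the state cos(theta*x)|0> + sin(theta*x)|1>."""
    return np.array([np.cos(theta * x), np.sin(theta * x)])

def overlap(a, b):
    return abs(a @ b) ** 2

# Two classical classes on the line.
class_a = np.array([0.1, 0.2, 0.15])
class_b = np.array([0.8, 0.9, 0.85])

def separability_loss(theta):
    # Mean inter-class fidelity minus mean intra-class fidelity:
    # lower is better (classes far apart, members close together).
    inter = np.mean([overlap(embed(x, theta), embed(y, theta))
                     for x in class_a for y in class_b])
    intra = np.mean([overlap(embed(x, theta), embed(y, theta))
                     for x in class_a for y in class_a]) \
          + np.mean([overlap(embed(x, theta), embed(y, theta))
                     for x in class_b for y in class_b])
    return inter - 0.5 * intra

# Crude grid search, standing in for the paper's deep-learning optimizer.
thetas = np.linspace(0.1, 4.0, 200)
best = thetas[np.argmin([separability_loss(t) for t in thetas])]
print("optimized embedding parameter:", best)
```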
Noisy Intermediate-Scale Quantum (NISQ) technology will be available in the near future. Quantum computers with 50-100 qubits may be able to perform tasks which surpass the capabilities of today's classical digital computers, but noise in quantum gates will limit the size of quantum circuits that can be executed reliably. NISQ devices will be useful tools for exploring many-body quantum physics, and may have other useful applications, but the 100-qubit quantum computer will not change the world right away; we should regard it as a significant step toward the more powerful quantum technologies of the future. Quantum technologists should continue to strive for more accurate quantum gates and, eventually, fully fault-tolerant quantum computing.
The design, accurate preparation, and manipulation of quantum states in quantum circuits are essential operational tasks at the heart of quantum technologies. Nowadays, circuits can be designed with physical parameters that can be controlled with unprecedented accuracy and flexibility. However, the generation of well-controlled current states remains a stubborn bottleneck, especially when different circuit elements are integrated together. In this work, we show how machine learning can effectively address this challenge and outperform existing methods. To this end, we exploit deep reinforcement learning to prepare prescribed quantum current states in circuits composed of lumped elements. To illustrate our method, we show how to engineer bosonic persistent currents, which are relevant in quantum technologies such as cold atoms and superconducting circuits. We demonstrate the use of deep reinforcement learning to rediscover established protocols, as well as to solve configurations that are difficult to treat with other methods. With our approach, quantum current states characterised by a single winding number, or entangled currents with multiple winding numbers, can be prepared in a robust manner, superseding existing protocols.
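A heavily simplified sketch of the reinforcement-learning loop is given below: a REINFORCE agent chooses a short sequence of discrete single-qubit rotations to maximize fidelity with a target state. A lookup-table policy stands in for the paper's deep network, and the gate set, target state, and episode length are illustrative assumptions rather than the circuit setting of the paper.

```python
# Minimal policy-gradient (REINFORCE) state preparation: steer |0> toward a
# target one-qubit state using a discrete gate set. All choices illustrative.
import numpy as np

rng = np.random.default_rng(1)

def ry(angle):
    # Rotation about the y-axis of the Bloch sphere.
    c, s = np.cos(angle / 2), np.sin(angle / 2)
    return np.array([[c, -s], [s, c]])

GATES = [ry(0.0), ry(0.3), ry(-0.3)]                       # discrete actions
TARGET = np.array([np.cos(np.pi / 8), np.sin(np.pi / 8)])  # target state
STEPS = 5

logits = np.zeros((STEPS, len(GATES)))  # trainable per-step policy

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

for episode in range(3000):
    state = np.array([1.0, 0.0])
    actions = []
    for t in range(STEPS):
        p = softmax(logits[t])
        a = rng.choice(len(GATES), p=p)
        actions.append(a)
        state = GATES[a] @ state
    reward = abs(TARGET @ state) ** 2   # fidelity with the target state
    for t, a in enumerate(actions):     # REINFORCE update
        p = softmax(logits[t])
        grad = -p
        grad[a] += 1.0                  # d/dlogits of log pi(a)
        logits[t] += 0.1 * reward * grad
```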
Large-scale quantum devices provide insights beyond the reach of classical simulations. However, for a reliable and verifiable quantum simulation, the building blocks of the quantum device require exquisite benchmarking. Benchmarking large-scale dynamical quantum systems represents a major challenge due to the lack of efficient tools for their simulation. Here, we present a scalable algorithm based on neural networks for Hamiltonian tomography in out-of-equilibrium quantum systems. We illustrate our approach using a model for a forefront quantum simulation platform: ultracold atoms in optical lattices. Specifically, we show that our algorithm is able to reconstruct the Hamiltonian of a quasi-1D bosonic system of arbitrary size using an accessible number of experimental measurements. Moreover, we significantly improve upon the previously achievable parameter precision.
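The sketch below illustrates the regression idea behind such neural-network Hamiltonian tomography on the smallest possible example: a two-site, single-particle hopping Hamiltonian $H = -J(|1\rangle\langle 2| + |2\rangle\langle 1|)$, whose site-1 occupation evolves as $n_1(t) = \cos^2(Jt)$. A small fully-connected network is trained to recover the coupling $J$ from noisy occupation trajectories; the model, network size, and training details are illustrative assumptions, not the paper's optical-lattice setup.

```python
# Toy Hamiltonian tomography: learn the hopping coupling J from noisy
# occupation dynamics n1(t) = cos^2(J t). All choices illustrative.
import numpy as np

rng = np.random.default_rng(2)
times = np.linspace(0.0, 2.0, 20)

def trajectory(J, noise=0.01):
    # Site-1 occupation under hopping dynamics, with measurement noise.
    return np.cos(J * times) ** 2 + noise * rng.standard_normal(len(times))

# Training data: (trajectory, J) pairs from simulated dynamics.
J_train = rng.uniform(0.5, 2.0, size=500)
X = np.stack([trajectory(J) for J in J_train])
y = J_train[:, None]

# One-hidden-layer regression network, trained with plain gradient descent.
W1 = 0.5 * rng.standard_normal((len(times), 32)); b1 = np.zeros(32)
W2 = 0.5 * rng.standard_normal((32, 1)); b2 = np.zeros(1)
lr = 0.01
for step in range(2000):
    h = np.tanh(X @ W1 + b1)
    pred = h @ W2 + b2
    err = pred - y                              # MSE gradient
    dW2 = h.T @ err / len(X); db2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1 - h ** 2)            # backprop through tanh
    dW1 = X.T @ dh / len(X); db1 = dh.mean(axis=0)
    W2 -= lr * dW2; b2 -= lr * db2; W1 -= lr * dW1; b1 -= lr * db1

# Recover the coupling of a "measured" system.
J_true = 1.3
estimate = np.tanh(trajectory(J_true) @ W1 + b1) @ W2 + b2
print("true J:", J_true, "estimated J:", estimate.item())
```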
In this paper, we apply machine learning methods to study phase transitions in certain statistical mechanical models on two-dimensional lattices whose transitions involve non-local or topological properties, including site and bond percolation, the XY model, and the generalized XY model. We find that, using just one hidden layer in a fully-connected neural network, the percolation transition can be learned, and data collapse using the average output layer gives a correct estimate of the critical exponent $\nu$. We also study the Berezinskii-Kosterlitz-Thouless transition in the classical XY model, which involves the binding and unbinding of topological defects (vortices and anti-vortices). The generalized XY model contains richer phases, such as the nematic phase, the paramagnetic phase, and the quasi-long-range ferromagnetic phase, and we apply machine learning methods to it as well. We obtain a consistent phase diagram from a network trained only with data along the temperature axis at two particular values of the parameter $\Delta$, where $\Delta$ is the relative weight of the pure XY coupling. Besides using the spin configurations (either angles or spin components) as input to a convolutional neural network, we devise a feature-engineering approach using histograms of the spin orientations to train the network to learn the three phases of the generalized XY model, and we demonstrate that it indeed works. A network trained on systems of size $L \times L$ can be used to obtain the phase diagram for other sizes ($L' \times L'$, where $L' \neq L$) without any further training.
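As a minimal illustration of the simplest result quoted above (a one-hidden-layer fully-connected network learning the percolation transition), the sketch below labels random site-percolation configurations by whether the occupation probability lies above or below the threshold $p_c \approx 0.5927$ and trains a small classifier on the raw configurations. The lattice size, network width, and training schedule are assumptions for illustration, not the paper's setup.

```python
# Toy percolation classifier: one-hidden-layer fully connected network on
# random site-percolation configurations. All choices illustrative.
import numpy as np

rng = np.random.default_rng(3)
L, PC = 16, 0.5927  # lattice size and site-percolation threshold

def batch(n):
    p = rng.uniform(0.3, 0.9, size=n)
    configs = (rng.random((n, L * L)) < p[:, None]).astype(float)
    labels = (p > PC).astype(float)[:, None]
    return configs, labels

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

W1 = 0.1 * rng.standard_normal((L * L, 16)); b1 = np.zeros(16)
W2 = 0.1 * rng.standard_normal((16, 1)); b2 = np.zeros(1)

lr = 0.1
for step in range(3000):
    X, y = batch(64)
    h = np.tanh(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    err = out - y                       # cross-entropy gradient
    W2 -= lr * h.T @ err / 64; b2 -= lr * err.mean(axis=0)
    dh = (err @ W2.T) * (1 - h ** 2)    # backprop through tanh
    W1 -= lr * X.T @ dh / 64; b1 -= lr * dh.mean(axis=0)

X_test, y_test = batch(1000)
acc = np.mean((sigmoid(np.tanh(X_test @ W1 + b1) @ W2 + b2) > 0.5) == y_test)
print("test accuracy:", acc)
```

Note that such a classifier can succeed by latching onto simple proxies (here, the occupation density), which is one reason the paper supplements raw configurations with engineered features such as spin-orientation histograms for the harder topological transitions.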