Recently, the question of whether the D-Wave processors exhibit large-scale quantum behavior or can be described by a classical model has attracted significant interest. In this work we address this question by studying a 503-qubit D-Wave Two device in the black box model, i.e., by studying its input-output behavior. Our work generalizes an approach introduced in Boixo et al. [Nat. Commun. 4, 2067 (2013)], and uses groups of up to 20 qubits to realize a transverse Ising model evolution with a ground-state degeneracy whose distribution acts as a sensitive probe that distinguishes classical and quantum models for the D-Wave device. Our findings rule out all classical models proposed to date for the device and provide evidence that an open-system quantum dynamical description of the device that starts from a quantized energy level structure is well justified, even in the presence of relevant thermal excitations and a small value of the ratio of the single-qubit decoherence time to the annealing time.
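For reference, the transverse Ising evolution referred to above is typically written in the standard annealing form below (a generic sketch; the actual schedules A(s), B(s) and the programmed fields h_i and couplings J_ij are device- and experiment-specific):

H(s) = -\frac{A(s)}{2} \sum_i \sigma_i^x + \frac{B(s)}{2} \Big( \sum_i h_i \sigma_i^z + \sum_{i<j} J_{ij} \, \sigma_i^z \sigma_j^z \Big), \qquad s = t/t_a \in [0,1],

with A(s) dominant at s = 0 and B(s) dominant at s = 1.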
We perform an in-depth comparison of quantum annealing with several classical optimisation techniques, namely thermal annealing, Nelder-Mead, and gradient descent. We begin with a direct study of the 2D Ising model on a quantum annealer, and compare its properties directly with those of the thermal 2D Ising model. These properties include an Ising-like phase transition that can be induced either by a change in the quantumness of the theory, or by scaling the Ising couplings up or down. This behaviour is in accord with what is expected from the physical understanding of the quantum system. We then go on to demonstrate the efficacy of the quantum annealer at minimising several increasingly hard two-dimensional potentials. For all the potentials we find the general behaviour that Nelder-Mead and gradient descent methods are very susceptible to becoming trapped in false minima, while the thermal anneal method is somewhat better at discovering the true minimum. However, and despite current limitations on its size, the quantum annealer performs the minimisation markedly better than any of these classical techniques. A quantum anneal can be designed so that the system almost never gets trapped in a false minimum, and rapidly and successfully minimises the potentials.
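As an illustration of the kind of classical baseline comparison described above, the following sketch runs Nelder-Mead, plain gradient descent, and a simulated (thermal) anneal on a hypothetical 2D potential with a false minimum. The potential, step sizes, and cooling schedule are illustrative choices, not those used in the work itself.

# Sketch: comparing Nelder-Mead, gradient descent, and thermal (simulated)
# annealing on a hypothetical 2D potential with a false minimum near the origin.
import numpy as np
from scipy.optimize import minimize

def potential(x):
    # Global minimum near (3, 3); shallow false minimum near (0, 0).
    return 0.1 * (x[0]**2 + x[1]**2) - 3.0 * np.exp(-((x[0] - 3)**2 + (x[1] - 3)**2))

def gradient(x, eps=1e-6):
    # Simple central finite-difference gradient.
    g = np.zeros(2)
    for i in range(2):
        d = np.zeros(2); d[i] = eps
        g[i] = (potential(x + d) - potential(x - d)) / (2 * eps)
    return g

x0 = np.array([-1.0, -1.0])          # start in the basin of the false minimum

# Nelder-Mead (derivative-free simplex method)
nm = minimize(potential, x0, method="Nelder-Mead")

# Plain gradient descent with a fixed step size
x_gd = x0.copy()
for _ in range(5000):
    x_gd -= 0.05 * gradient(x_gd)

# Thermal (simulated) annealing with a geometric cooling schedule
rng = np.random.default_rng(0)
x_sa, T = x0.copy(), 2.0
for _ in range(20000):
    trial = x_sa + rng.normal(scale=0.3, size=2)
    dE = potential(trial) - potential(x_sa)
    if dE < 0 or rng.random() < np.exp(-dE / T):
        x_sa = trial
    T *= 0.9995

print("Nelder-Mead:", nm.x, "Gradient descent:", x_gd, "Simulated annealing:", x_sa)

Started in the basin of the false minimum, the two local methods typically stay there, whereas the thermal anneal has a chance of escaping at high temperature; this mirrors the qualitative behaviour reported above.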
Motivated by two recent experiments in which thermal properties of complex many-body systems were successfully reproduced on a commercially available quantum annealer, we examine the extent to which quantum annealing hardware can reliably sample from the thermal state associated with a target quantum Hamiltonian. We address this question by studying the thermal properties of the canonical one-dimensional transverse-field Ising model on a D-Wave 2000Q quantum annealing processor. We find that the quantum processor fails to produce the correct expectation values predicted by Quantum Monte Carlo. Comparing to master equation simulations, we find that this discrepancy is best explained by how the measurements at finite transverse fields are enacted on the device. Specifically, measurements at finite transverse field require the system to be quenched from the target Hamiltonian to a Hamiltonian with negligible transverse field, and this quench is too slow. We elaborate on how the limitations imposed by such hardware make it an unlikely candidate for studying the thermal properties of generic quantum many-body systems.
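For context, the canonical one-dimensional transverse-field Ising model and the thermal expectation values targeted by such sampling take the standard form below (the coupling J, transverse field Γ, inverse temperature β, and boundary conditions are fixed by the particular experiment):

H = -J \sum_i \sigma_i^z \sigma_{i+1}^z - \Gamma \sum_i \sigma_i^x, \qquad \langle O \rangle_\beta = \frac{\operatorname{Tr}\!\big[ O \, e^{-\beta H} \big]}{\operatorname{Tr}\!\big[ e^{-\beta H} \big]}.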
In this paper, we report an experiment on device-independent tests of classical and quantum entropy, based on a recent proposal [Phys. Rev. Lett. 115, 110501 (2015)], in which the states are encoded in the polarization of a biphoton system and measured by quantum state tomography. We also theoretically obtain the minimal quantum entropy for three widely used linear dimension witnesses. The experimental results agree well with the theoretical analysis, demonstrating that, for given values of the dimension witness, quantum systems require lower entropy than classical systems.
We give efficient quantum algorithms to estimate the partition function of (i) the six-vertex model on a two-dimensional (2D) square lattice, (ii) the Ising model with magnetic fields on a planar graph, (iii) the Potts model on a quasi-2D square lattice, and (iv) the Z_2 lattice gauge theory on a three-dimensional square lattice. Moreover, we prove that these problems are BQP-complete, that is, that estimating these partition functions is as hard as simulating arbitrary quantum computation. The results are proven for a complex parameter regime of the models. The proofs are based on a mapping relating partition functions to quantum circuits introduced in [Van den Nest et al., Phys. Rev. A 80, 052334 (2009)] and extended here.
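As a reminder, the partition functions in question take the standard form; for the Ising model with magnetic fields on a graph with edge set E, for example,

Z(\beta) = \sum_{\{s_i = \pm 1\}} \exp\!\Big( \beta \sum_{(i,j) \in E} J_{ij} s_i s_j + \beta \sum_i h_i s_i \Big),

and the BQP-completeness results quoted above hold when the couplings (equivalently, the inverse temperature) are allowed to take suitable complex values.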
Optimal flight gate assignment is a highly relevant optimization problem in airport management. One important goal, among others, is the minimization of the total transit time of the passengers. The corresponding objective function is quadratic in the binary decision variables encoding the flight-to-gate assignment. Hence, it is a quadratic assignment problem, which is hard to solve in general. In this work we investigate the solvability of this problem with a D-Wave quantum annealer. These machines are designed to solve quadratic unconstrained binary optimization (QUBO) problems. Therefore, the flight gate assignment problem seems well suited to these machines. We use real-world data from a mid-sized German airport as well as simulation-based data to extract typical instances small enough to be amenable to the D-Wave machine. In order to mitigate precision problems, we employ bin packing on the passenger numbers to reduce the precision requirements of the extracted instances. We find that, for the instances we investigated, the bin packing has little effect on the solution quality. Hence, we were able to solve small problem instances extracted from real data with the D-Wave 2000Q quantum annealer.
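To illustrate the QUBO formulation idea in miniature, the sketch below encodes a hypothetical toy problem of assigning 2 flights to 2 gates: linear terms carry illustrative transit-time costs, and quadratic penalty terms enforce the one-flight-one-gate constraints. The cost values, penalty weight, and brute-force solver are illustrative only; the actual instances in the work are extracted from airport data, use bin-packed passenger numbers, and are solved on the D-Wave hardware.

# Sketch: a toy QUBO for assigning 2 flights to 2 gates, solved by brute force.
# Variables x[f, g] = 1 if flight f is assigned to gate g. The transit-time
# costs and penalty weight below are illustrative, not the paper's instances.
import itertools
import numpy as np

flights, gates = 2, 2
cost = np.array([[1.0, 3.0],      # hypothetical transit-time cost of flight f at gate g
                 [2.0, 0.5]])
penalty = 10.0                    # weight enforcing "each flight gets exactly one gate"

def idx(f, g):
    return f * gates + g

n = flights * gates
Q = np.zeros((n, n))

# Linear cost terms go on the diagonal (x^2 = x for binary variables).
for f in range(flights):
    for g in range(gates):
        Q[idx(f, g), idx(f, g)] += cost[f, g]

# One-hot constraint per flight: penalty * (sum_g x[f, g] - 1)^2, dropping the constant.
for f in range(flights):
    for g in range(gates):
        Q[idx(f, g), idx(f, g)] -= penalty
        for g2 in range(g + 1, gates):
            Q[idx(f, g), idx(f, g2)] += 2 * penalty

# Toy gate-conflict constraint: penalize two flights sharing the same gate.
for g in range(gates):
    for f in range(flights):
        for f2 in range(f + 1, flights):
            Q[idx(f, g), idx(f2, g)] += penalty

best = min(itertools.product([0, 1], repeat=n), key=lambda x: np.array(x) @ Q @ np.array(x))
print("best assignment bits:", best)

On an annealer, the same matrix Q would be handed to the hardware (after embedding) instead of being enumerated exhaustively; the brute-force minimization here only serves to make the toy example self-contained.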