A quantum computer has now solved a specialized problem believed to be intractable for supercomputers, suggesting that quantum processors may soon outperform supercomputers on scientifically important problems. But flaws in each quantum processor limit its capability by causing errors in quantum programs, and it is currently difficult to predict which programs a particular processor can successfully run. We introduce techniques that can efficiently test the capabilities of any programmable quantum computer, and we apply them to twelve processors. Our experiments show that current hardware suffers from complex errors that cause structured programs to fail up to an order of magnitude earlier, as measured by program size, than disordered ones. As a result, standard error metrics inferred from the behavior of random, disordered programs do not accurately predict the performance of useful programs. Our methods provide efficient, reliable, and scalable benchmarks that can be targeted to predict quantum computer performance on real-world problems.
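A rough way to read this order-of-magnitude gap (the numbers below are illustrative and are not taken from the experiments): if a program of size $s$ succeeds with probability roughly $(1-\epsilon_{\mathrm{eff}})^{s}$ for some effective per-operation error rate $\epsilon_{\mathrm{eff}}$, then the size at which success drops to $1/e$ is

$$ s_{1/e} \approx \frac{1}{\epsilon_{\mathrm{eff}}}, \qquad \epsilon_{\mathrm{eff}} = 0.3\% \;\Rightarrow\; s_{1/e} \approx 330, \qquad \epsilon_{\mathrm{eff}} = 3\% \;\Rightarrow\; s_{1/e} \approx 33. $$

Structured programs that fail an order of magnitude earlier thus behave as if their effective error rate were roughly ten times the value inferred from random, disordered programs.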
The new field of quantum error correction has developed spectacularly since its origin less than two years ago. Encoded quantum information can be protected from errors that arise due to uncontrolled interactions with the environment. Recovery from errors can work effectively even if occasional mistakes occur during the recovery procedure. Furthermore, encoded quantum information can be processed without serious propagation of errors. Hence, an arbitrarily long quantum computation can be performed reliably, provided that the average probability of error per quantum gate is less than a certain critical value, the accuracy threshold. A quantum computer storing about $10^6$ qubits, with a probability of error per quantum gate of order $10^{-6}$, would be a formidable factoring engine. Even a smaller, less accurate quantum computer would be able to perform many useful tasks. (This paper is based on a talk presented at the ITP Conference on Quantum Coherence and Decoherence, 15-18 December 1996.)
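The threshold statement is commonly quantified through concatenated codes; the following standard scaling is not given in the abstract and is included only for concreteness. With a physical error rate $p$ per gate, threshold $p_{\mathrm{th}}$, and $k$ levels of concatenation, the logical error rate obeys

$$ p_L(k) \sim p_{\mathrm{th}} \left( \frac{p}{p_{\mathrm{th}}} \right)^{2^k}, $$

so for $p < p_{\mathrm{th}}$ the logical error rate falls doubly exponentially in $k$ while the resource overhead grows only polynomially, which is what makes arbitrarily long reliable computation possible.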
It is imperative that useful quantum computers be very difficult to simulate classically; otherwise classical computers could be used for the applications envisioned for the quantum ones. Perfect quantum computers are unarguably exponentially difficult to simulate: the classical resources required grow exponentially with the number of qubits $N$ or the depth $D$ of the circuit. Real quantum computing devices, however, are characterized by an exponentially decaying fidelity $\mathcal{F} \sim (1-\epsilon)^{ND}$, with an error rate $\epsilon$ per operation as small as $\approx 1\%$ for current devices. In this work, we demonstrate that real quantum computers can be simulated at a tiny fraction of the cost that would be needed for a perfect quantum computer. Our algorithms compress the representations of quantum wavefunctions using matrix product states (MPS), which capture states with low to moderate entanglement very accurately. This compression introduces a finite error rate $\epsilon$, so the algorithms closely mimic the behavior of real quantum computing devices. The computing time of our algorithm increases only linearly with $N$ and $D$. We illustrate our algorithms with simulations of random circuits for qubits connected in one- and two-dimensional lattices. We find that $\epsilon$ can be decreased at a polynomial cost in computing power down to a minimum error $\epsilon_\infty$. Getting below $\epsilon_\infty$ requires computing resources that increase exponentially with $\epsilon_\infty/\epsilon$. For a two-dimensional array of $N=54$ qubits and a circuit with controlled-Z gates, error rates better than state-of-the-art devices can be obtained on a laptop in a few hours. For more complex gates, such as a swap gate followed by a controlled rotation, the error rate increases by a factor of three for similar computing time.
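To give a sense of scale for the fidelity formula above, here is a worked instance with illustrative numbers (they are not taken from the paper): for $N = 54$ qubits, depth $D = 20$, and $\epsilon = 0.01$,

$$ \mathcal{F} \sim (1-\epsilon)^{ND} = (0.99)^{1080} \approx e^{-10.8} \approx 2 \times 10^{-5}, $$

which is why even modest per-gate error rates leave real devices far from the perfect-simulation regime that costs exponential classical resources.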
Benchmarking is how the performance of a computing system is determined. Surprisingly, even for classical computers this is not a straightforward process. One must choose the appropriate benchmark and metrics to extract meaningful results. Different benchmarks test the system in different ways, and each individual metric may or may not be of interest. Choosing the appropriate approach is tricky. The situation is even more open-ended for quantum computers, where there is a wider range of hardware, fewer established guidelines, and additional complicating factors. Notably, quantum noise significantly impacts performance and is difficult to model accurately. Here, we discuss benchmarking of quantum computers from a computer architecture perspective and provide numerical simulations that highlight these challenges and suggest caution.
This article introduces quantum computation by analogy with probabilistic computation. A basic description of the quantum search algorithm is given by representing the algorithm as a C program in a novel way.
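The abstract does not reproduce the program itself; the following is a minimal sketch of the same idea, simulating Grover's search over $2^4 = 16$ items with a classical amplitude vector in C (the qubit count and marked index are arbitrary choices for illustration, not taken from the article).

#include <stdio.h>
#include <math.h>

#define N_QUBITS 4
#define DIM (1 << N_QUBITS)            /* 16 basis states */

int main(void) {
    double amp[DIM];                   /* Grover needs only real amplitudes */
    const int marked = 6;              /* illustrative marked item */
    const double pi = acos(-1.0);
    const int n_iter = (int)floor(pi / 4.0 * sqrt((double)DIM));

    for (int i = 0; i < DIM; i++)      /* start in the uniform superposition */
        amp[i] = 1.0 / sqrt((double)DIM);

    for (int it = 0; it < n_iter; it++) {
        amp[marked] = -amp[marked];    /* oracle: flip sign of marked state */
        double mean = 0.0;             /* diffusion: reflect about the mean */
        for (int i = 0; i < DIM; i++) mean += amp[i];
        mean /= DIM;
        for (int i = 0; i < DIM; i++) amp[i] = 2.0 * mean - amp[i];
    }

    printf("P(marked = %d) = %.4f after %d iterations\n",
           marked, amp[marked] * amp[marked], n_iter);
    return 0;
}

With 16 items the iteration count is $\lfloor (\pi/4)\sqrt{16} \rfloor = 3$, after which the marked amplitude carries nearly all of the probability.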
Traditional algorithms for simulating quantum computers on classical ones require an exponentially large amount of memory, and so typically cannot simulate general quantum circuits with more than about 30 qubits on a typical PC-scale platform with only a few gigabytes of main memory. However, more memory-efficient simulations are possible, requiring only polynomial or even linear space in the size of the quantum circuit being simulated. In this paper, we describe one such technique, which was recently implemented at FSU in the form of a C++ program called SEQCSim, which we are releasing publicly. We also discuss the potential benefits of this simulator in quantum computing research and education, and outline some possible directions for further progress.
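The abstract does not spell out SEQCSim's internals, so the sketch below is not necessarily its method; it shows a generic space-efficient technique in the same spirit, a Feynman path-sum that computes one output amplitude with memory linear in circuit depth rather than exponential in qubit count (the three-Hadamard circuit is a placeholder example).

#include <stdio.h>
#include <complex.h>
#include <math.h>

/* One gate: a 2x2 unitary acting on a single qubit. */
typedef struct { int qubit; double complex u[2][2]; } Gate;

#define N_GATES 3
static Gate circuit[N_GATES];

/* Amplitude of basis state s after the first t gates, input |0...0>.
   Memory is O(depth) for the recursion stack, not O(2^n). */
static double complex amp(int t, unsigned s) {
    if (t == 0) return (s == 0u) ? 1.0 : 0.0;
    const Gate *g = &circuit[t - 1];
    int bit = (s >> g->qubit) & 1;
    unsigned s0 = s & ~(1u << g->qubit);   /* predecessor with qubit = 0 */
    unsigned s1 = s |  (1u << g->qubit);   /* predecessor with qubit = 1 */
    return g->u[bit][0] * amp(t - 1, s0)
         + g->u[bit][1] * amp(t - 1, s1);
}

int main(void) {
    double h = 1.0 / sqrt(2.0);
    for (int i = 0; i < N_GATES; i++) {    /* Hadamard on qubits 0, 1, 2 */
        circuit[i].qubit = i;
        circuit[i].u[0][0] = h; circuit[i].u[0][1] = h;
        circuit[i].u[1][0] = h; circuit[i].u[1][1] = -h;
    }
    for (unsigned s = 0; s < 8; s++)
        printf("amp(|%u>) = %.4f\n", s, creal(amp(N_GATES, s)));
    return 0;
}

The trade-off is time: each gate doubles the number of paths summed, so the running time grows as $2^D$ while memory stays $O(D + n)$, which is the usual price of the linear-space regime the abstract mentions.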