Quantum computers promise to solve certain problems more efficiently than their digital counterparts. A major challenge towards practically useful quantum computing is characterizing and reducing the various errors that accumulate during an algorithm running on large-scale processors. Current characterization techniques are unable to adequately account for the exponentially large set of potential errors, including cross-talk and other correlated noise sources. Here we develop cycle benchmarking, a rigorous and practically scalable protocol for characterizing local and global errors across multi-qubit quantum processors. We experimentally demonstrate its practicality by quantifying such errors in non-entangling and entangling operations on an ion-trap quantum computer with up to 10 qubits, with total process fidelities for multi-qubit entangling gates ranging from 99.6(1)% for 2 qubits to 86(2)% for 10 qubits. Furthermore, cycle benchmarking data validates that the error rate per single-qubit gate and per two-qubit coupling does not increase with increasing system size.
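Conceptually, cycle benchmarking extracts a per-cycle decay from how Pauli expectation values fall off as the cycle under test is repeated, and converts that decay into a process fidelity estimate. A minimal sketch of the fitting step in Python/NumPy, using made-up decay data (the values below are illustrative assumptions, not measured data, and the single-exponential fit is only one ingredient of the full protocol):

import numpy as np

# Hypothetical averaged Pauli expectation values after m repetitions of the cycle
# (made-up numbers, for illustration only).
m = np.array([4, 8, 12, 16, 20])
expectations = np.array([0.96, 0.92, 0.885, 0.85, 0.815])

# Fit A * p**m via a log-linear least-squares fit; p is the per-cycle decay parameter.
slope, intercept = np.polyfit(m, np.log(expectations), 1)
p = np.exp(slope)
print(f"per-cycle decay parameter p = {p:.4f}")

# In the actual protocol, many such decays (one per sampled Pauli channel) are
# combined into an estimate of the cycle's total process fidelity.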
Benchmarking is how the performance of a computing system is determined. Surprisingly, even for classical computers this is not a straightforward process: one must choose the appropriate benchmark and metrics to extract meaningful results. Different benchmarks test the system in different ways, and each individual metric may or may not be of interest. Choosing the appropriate approach is tricky. The situation is even more open-ended for quantum computers, where there is a wider range of hardware, fewer established guidelines, and additional complicating factors. Notably, quantum noise significantly impacts performance and is difficult to model accurately. Here, we discuss benchmarking of quantum computers from a computer architecture perspective and provide numerical simulations highlighting challenges that warrant caution.
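One recurring source of caution in such simulations is that the probability of an error-free run falls roughly exponentially with the number of gate applications, so a single error-rate figure can be misleading when comparing benchmarks of different sizes. A back-of-the-envelope sketch in Python (the per-gate error rates and circuit sizes below are assumed, purely for illustration):

# Rough success-probability model: each gate succeeds independently with
# probability (1 - eps), so an N-gate circuit succeeds with roughly (1 - eps)**N.
for eps in (1e-3, 1e-2):              # assumed per-gate error rates
    for n_gates in (10, 100, 1000):   # assumed circuit sizes
        p_success = (1 - eps) ** n_gates
        print(f"eps={eps:.0e}, gates={n_gates:5d} -> success ~ {p_success:.3f}")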
Trapped ions (TI) are a leading candidate for building Noisy Intermediate-Scale Quantum (NISQ) hardware. TI qubits have fundamental advantages over other technologies such as superconducting qubits, including high qubit quality, coherence and connectivity. However, current TI systems are small, with 5-20 qubits, and typically use a single-trap architecture that has fundamental scalability limitations. To progress towards the next major milestone of 50-100 qubits, a modular architecture termed the Quantum Charge Coupled Device (QCCD) has been proposed. In a QCCD-based TI device, small traps are connected through ion shuttling. While the basic hardware components for such devices have been demonstrated, building a 50-100 qubit system is challenging because of a wide range of design possibilities for trap sizing, communication topology and gate implementations, and the need to match diverse application resource requirements. Towards realizing QCCD systems with 50-100 qubits, we perform an extensive architectural study evaluating the key design choices of trap sizing, communication topology and operation implementation methods. We build a design toolflow which takes a QCCD architecture's parameters as input, along with a set of applications and realistic hardware performance models. Our toolflow maps the applications onto the target device and simulates their execution to compute metrics such as application run time, reliability and device noise rates. Using six applications and several hardware design points, we show that trap sizing and communication topology choices can impact application reliability by up to three orders of magnitude. Microarchitectural gate implementation choices influence reliability by another order of magnitude. From these studies, we provide concrete recommendations to tune these choices to achieve highly reliable and performant application executions.
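To make the design-space inputs concrete, the sketch below shows one way such an architectural evaluation could be parameterized in Python; the class name, fields, and error-multiplication model are hypothetical illustrations, not the authors' actual toolflow:

from dataclasses import dataclass

@dataclass
class QCCDDesign:              # hypothetical design-point description
    ions_per_trap: int         # trap sizing
    topology: str              # e.g. "linear" or "grid" communication topology
    gate_impl: str             # e.g. laser-based vs. microwave-based gates
    gate_error: float          # per two-qubit-gate error
    shuttle_error: float       # per ion-shuttle error

def estimated_success(design: QCCDDesign, n_gates: int, n_shuttles: int) -> float:
    # Crude reliability model: multiply per-operation success probabilities.
    return ((1 - design.gate_error) ** n_gates) * ((1 - design.shuttle_error) ** n_shuttles)

design = QCCDDesign(ions_per_trap=4, topology="linear", gate_impl="laser",
                    gate_error=1e-3, shuttle_error=5e-4)
print(estimated_success(design, n_gates=200, n_shuttles=50))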
Quantum computers are on the verge of becoming a commercially available reality. They represent a paradigm shift in computing, with a steep learning curve. The creation of games is one way to ease the transition for beginners. We present a game similar to the Poker variant Texas hold 'em, intended to serve as an engaging pedagogical tool for learning the basic rules of quantum computing. The concepts of quantum states, quantum operations and measurement can be learned in a playful manner. The difference from the classical variant is that the community cards are replaced by a quantum register that is randomly initialized, and the cards for each player are replaced by quantum gates, randomly drawn from a set of available gates. Each player can create a quantum circuit with their cards, with the aim of maximizing the number of $1$s measured in the computational basis. The basic concepts of superposition, entanglement and quantum gates are employed. We provide a proof-of-concept implementation using Qiskit. A comparison of the results for the created circuits on a simulator and on IBM machines shows that error rates on contemporary quantum computers are still very high. For the success of noisy intermediate-scale quantum (NISQ) computers, improvements in error rates and error mitigation techniques are necessary, even for simple circuits. We show that quantum error mitigation (QEM) techniques can be used to improve expectation values of observables on real quantum devices.
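A minimal sketch of the core game mechanic, assuming Qiskit with the Aer simulator installed (the register size, gate set, random initialization, and scoring below are assumptions for illustration, not the full proof-of-concept implementation, and the "player" here plays randomly rather than strategically):

import random
from qiskit import QuantumCircuit, transpile
from qiskit_aer import AerSimulator

n_qubits = 4                                  # assumed size of the community register
qc = QuantumCircuit(n_qubits)

# "Community cards": random initialization of the quantum register.
for q in range(n_qubits):
    random.choice([qc.id, qc.x, qc.h])(q)

# A player's "hand": gates drawn at random; here they are applied to random
# qubits rather than being chosen strategically to maximize measured 1s.
hand = [random.choice(["x", "h", "z", "cx"]) for _ in range(3)]
for gate in hand:
    if gate == "cx":
        control, target = random.sample(range(n_qubits), 2)
        qc.cx(control, target)
    else:
        getattr(qc, gate)(random.randrange(n_qubits))

qc.measure_all()
sim = AerSimulator()
counts = sim.run(transpile(qc, sim), shots=1024).result().get_counts()

# Score: average number of 1s measured in the computational basis.
score = sum(bitstring.count("1") * n for bitstring, n in counts.items()) / 1024
print(hand, score)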
We aim to devise feasible, efficient verification schemes for bosonic channels. To this end, we construct an average-fidelity witness that yields a tight lower bound on the average fidelity, together with a general framework for verifying optimal quantum channels. For both multi-mode unitary Gaussian channels and single-mode amplification channels, we present experimentally feasible average-fidelity witnesses and reliable verification schemes whose sample complexity scales polynomially with respect to all channel specification parameters. Our verification scheme provides an approach to benchmarking the performance of bosonic channels on a set of Gaussian-distributed coherent states, employing only two-mode squeezed vacuum states and local homodyne detections. Our results demonstrate how to perform feasible tests of quantum components designed for continuous-variable quantum information processing.
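For context, the figure of merit in such coherent-state benchmarks is the channel's average fidelity over a Gaussian-distributed ensemble of coherent-state inputs. In a generic form (the notation and the width parameter $\sigma$ are ours, not the paper's specific construction), for a target unitary Gaussian channel $U$ and a channel under test $\mathcal{E}$,
$$\bar{F}(\mathcal{E}) = \int \frac{d^{2}\alpha}{\pi\sigma^{2}} \, e^{-|\alpha|^{2}/\sigma^{2}} \, \langle \alpha | \, U^{\dagger} \, \mathcal{E}\big(|\alpha\rangle\langle\alpha|\big) \, U \, | \alpha \rangle,$$
and the average-fidelity witness described above yields a tight lower bound on $\bar{F}$ from measurements requiring only two-mode squeezed vacuum inputs and local homodyne detection.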
Crosstalk is a major source of noise in Noisy Intermediate-Scale Quantum (NISQ) systems and is a fundamental challenge for hardware design. When multiple instructions are executed in parallel, crosstalk between the instructions can corrupt the quantum state and lead to incorrect program execution. Our goal is to mitigate the application impact of crosstalk noise through software techniques. This requires (i) accurate characterization of hardware crosstalk, and (ii) intelligent instruction scheduling to serialize the affected operations. Since crosstalk characterization is computationally expensive, we develop optimizations which reduce the characterization overhead. On three 20-qubit IBMQ systems, we demonstrate two orders of magnitude reduction in characterization time (compute time on the QC device) compared to all-pairs crosstalk measurements. Informed by this characterization, we develop a scheduler that judiciously serializes high-crosstalk instructions, balancing the need to mitigate crosstalk against the exponential decoherence errors incurred by serialization. In real-system runs on three IBMQ systems, our scheduler reduces the error rate of application circuits by up to 5.6x compared to the IBM instruction scheduler and offers near-optimal crosstalk mitigation in practice. In the broader picture, the difficulty of mitigating crosstalk has recently driven QC vendors to move towards sparser qubit connectivity or to disable nearby operations entirely in hardware, which can be detrimental to performance. Our work makes the case for software mitigation of crosstalk errors.
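The serialization trade-off can be illustrated with a toy model in Python: serialize a pair of simultaneous gates only when the measured crosstalk-induced error exceeds the extra decoherence incurred by running them one after another. The data structures, error values, and threshold rule below are hypothetical illustrations, not the paper's actual scheduling algorithm:

import math

# Hypothetical characterization output: error rates for two-qubit gates run in
# isolation versus in parallel (all values are made up).
isolated_error = {"cx(0,1)": 0.012, "cx(2,3)": 0.015}
parallel_error = {("cx(0,1)", "cx(2,3)"): 0.060}   # crosstalk inflates the error

def should_serialize(pair, gate_time_us=0.3, t2_us=60.0):
    g1, g2 = pair
    crosstalk_penalty = parallel_error[pair] - (isolated_error[g1] + isolated_error[g2])
    # Serializing delays one gate by roughly one gate duration, adding idling
    # decoherence error of about 1 - exp(-t/T2).
    decoherence_penalty = 1 - math.exp(-gate_time_us / t2_us)
    return crosstalk_penalty > decoherence_penalty

print(should_serialize(("cx(0,1)", "cx(2,3)")))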