State preparation and measurement (SPAM) errors limit the performance of near-term quantum computers and their potential for practical application. SPAM errors are partly correctable after a calibration step that requires, for a complete implementation on a register of $n$ qubits, $2^n$ additional measurements. Here we introduce an approximate but efficient method for multiqubit SPAM error characterization and mitigation requiring the classical processing of $2^n \times 2^n$ matrices, but only $O(4^k n^2)$ measurements, where $k=O(1)$ is the number of qubits in a correlation volume. We demonstrate and validate the technique using an IBM Q processor on registers of 4 and 8 superconducting qubits.
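For orientation, the exhaustive calibration baseline that such approximate schemes improve upon can be sketched in a few lines of numpy: prepare each of the $2^n$ classical basis states, estimate the $2^n \times 2^n$ transition (confusion) matrix, and apply its (pseudo-)inverse to subsequently measured distributions. This is an illustrative sketch only; the device call `run_and_measure` and the shot count are hypothetical placeholders, and the correlation-volume factorization of the method above is not reproduced here.

```python
import numpy as np
from itertools import product

# Illustrative sketch of the exhaustive SPAM-calibration baseline (not the
# approximate method described above): prepare each of the 2^n classical basis
# states, estimate the 2^n x 2^n transition matrix T with
#   T[j, i] = Pr(measured bitstring j | prepared bitstring i),
# and correct later data by (pseudo-)inverting T.
# `run_and_measure(bitstring, shots)` is a hypothetical device call returning
# a {bitstring: count} dictionary.

def build_transition_matrix(n, run_and_measure, shots=4096):
    dim = 2 ** n
    T = np.zeros((dim, dim))
    for i, bits in enumerate(product("01", repeat=n)):
        counts = run_and_measure("".join(bits), shots)
        for outcome, c in counts.items():
            T[int(outcome, 2), i] = c / shots
    return T

def mitigate(raw_probs, T):
    # Least-squares solve in case T is ill-conditioned; clip and renormalize
    # so the corrected vector remains a valid probability distribution.
    p, *_ = np.linalg.lstsq(T, raw_probs, rcond=None)
    p = np.clip(p, 0.0, None)
    return p / p.sum()
```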
Several techniques have recently been introduced to mitigate errors in near-term quantum computers without the overhead required by quantum error correcting codes. While most of the focus has been on gate errors, measurement errors are significantly larger than gate errors on some platforms. A widely used {\it transition matrix error mitigation} (TMEM) technique uses measured transition probabilities between initial and final classical states to correct subsequently measured data. However, from a rigorous perspective, the noisy measurement should be calibrated with perfectly prepared initial states, and the presence of any state-preparation error corrupts the resulting mitigation. Here we develop a measurement error mitigation technique, conditionally rigorous TMEM, that is not sensitive to state-preparation errors and thus avoids this limitation. We demonstrate the importance of the technique for high-precision measurement and for quantum foundations experiments by measuring Mermin polynomials on IBM Q superconducting qubits. An extension of the technique allows one to correct for both state-preparation and measurement (SPAM) errors in expectation values as well; we illustrate this by giving a protocol for fully SPAM-corrected quantum process tomography.
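For context (the abstract does not specify which polynomials were measured), the lowest nontrivial Mermin polynomial involves three qubits and may be written, in one common convention, as
\[
  M_3 = X_1 Y_2 Y_3 + Y_1 X_2 Y_3 + Y_1 Y_2 X_3 - X_1 X_2 X_3 ,
\]
with $|\langle M_3 \rangle| \le 2$ for local hidden-variable models and $|\langle M_3 \rangle| \le 4$ quantum mechanically (saturated by a GHZ state); the measured violation therefore depends directly on SPAM-robust estimation of the four constituent expectation values.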
State preparation and measurement (SPAM) errors limit the performance of many gate-based quantum computing architectures, but are partly correctable after a calibration step that requires, for an exact implementation on a register of $n$ qubits, $2^n$ additional characterization experiments, as well as classical post-processing. Here we introduce an approximate but efficient method for SPAM error characterization requiring the {\it classical} processing of $2^n \times 2^n$ real matrices, but only $O(n^2)$ measurements. The technique assumes that multi-qubit measurement errors are dominated by pair correlations, which are estimated with $n(n-1)k/2$ two-qubit experiments, where $k$ is a parameter related to the accuracy. We demonstrate the technique on the IBM and Rigetti online superconducting quantum computers, allowing a comparison of their SPAM errors in both magnitude and degree of correlation. We also study the correlations as a function of the register's geometric layout. We find that the pair-correlation model is fairly accurate on linear arrays of superconducting qubits. However, qubits arranged in more closely spaced two-dimensional geometries exhibit significant higher-order (such as 3-qubit) SPAM error correlations.
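A minimal sketch of the implied measurement budget, assuming only the scaling quoted above ($2^n$ experiments for an exact calibration versus $n(n-1)k/2$ two-qubit experiments for the pair-correlation model); the value of $k$ used below is illustrative, not taken from the paper:

```python
# Compare the number of characterization experiments required by an exact SPAM
# calibration (2^n) with the pair-correlation model (n(n-1)k/2 two-qubit
# experiments). k = 4 is an illustrative accuracy parameter.
def calibration_budget(n, k=4):
    exact = 2 ** n
    pair_correlation = n * (n - 1) * k // 2
    return exact, pair_correlation

for n in (4, 8, 16):
    exact, pairs = calibration_budget(n)
    print(f"n = {n:2d}: exact = {exact:6d}, pair-correlation = {pairs:4d}")
```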
We study the performance of quantum error correction codes (QECCs) under detection-induced coherent errors caused by imperfect practical implementations of stabilizer measurements after running a quantum circuit. Considering the most promising surface code, we find that detection-induced coherent errors result in undetected error terms, which accumulate and evolve into logical errors. However, we show that this kind of error can be alleviated by increasing the code size, akin to the elimination of other types of errors discussed previously. We also find that with detection-induced coherent errors, the exact surface code becomes an approximate QECC.
The surface code is designed to suppress errors in quantum computing hardware and currently offers the most believable pathway to large-scale quantum computation. The surface code requires a 2-D array of nearest-neighbor coupled qubits that are capable of implementing a universal set of gates with error rates below approximately 1%, requirements compatible with experimental reality. Consequently, a number of authors are attempting to squeeze additional performance out of the surface code. We describe an optimal complexity error suppression algorithm, parallelizable to O(1) given constant computing resources per unit area, and provide evidence that this algorithm exploits correlations in the error models of each gate in an asymptotically optimal manner.
We review an experimental technique used to correct state preparation and measurement errors on gate-based quantum computers, and discuss its rigorous justification. Within a specific biased quantum measurement model, we prove that nonideal measurement of an arbitrary $n$-qubit state is equivalent to ideal projective measurement followed by a classical Markov process $\Gamma$ acting on the output probability distribution. Measurement errors can be removed, with rigorous justification, if $\Gamma$ can be learned and inverted. We show how to obtain $\Gamma$ from gate set tomography (R. Blume-Kohout et al., arXiv:1310.4492) and apply the error correction technique to single IBM Q superconducting qubits.
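In the learn-and-invert picture described above, the single-qubit case can be parametrized by the two readout-error probabilities $\epsilon_0 = {\rm prob}(1|0)$ and $\epsilon_1 = {\rm prob}(0|1)$ (notation introduced here for illustration only):
\[
  \Gamma = \begin{pmatrix} 1-\epsilon_0 & \epsilon_1 \\ \epsilon_0 & 1-\epsilon_1 \end{pmatrix},
  \qquad
  p_{\rm ideal} = \Gamma^{-1} p_{\rm noisy} ,
\]
which is invertible whenever $\epsilon_0 + \epsilon_1 \neq 1$.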