Realizing the potential of quantum computing will require achieving sufficiently low logical error rates. Many applications call for error rates in the $10^{-15}$ regime, but state-of-the-art quantum platforms typically have physical error rates near $10^{-3}$. Quantum error correction (QEC) promises to bridge this divide by distributing quantum logical information across many physical qubits so that errors can be detected and corrected. Logical errors are then exponentially suppressed as the number of physical qubits grows, provided that the physical error rates are below a certain threshold. QEC also requires that the errors are local and that performance is maintained over many rounds of error correction, two major outstanding experimental challenges. Here, we implement 1D repetition codes embedded in a 2D grid of superconducting qubits which demonstrate exponential suppression of bit-flip or phase-flip errors, reducing logical error per round by more than $100\times$ when increasing the number of qubits from 5 to 21. Crucially, this error suppression is stable over 50 rounds of error correction. We also introduce a method for analyzing error correlations with high precision, and characterize the locality of errors in a device performing QEC for the first time. Finally, we perform error detection using a small 2D surface code logical qubit on the same device, and show that the results from both 1D and 2D codes agree with numerical simulations using a simple depolarizing error model. These findings demonstrate that superconducting qubits are on a viable path towards fault tolerant quantum computing.
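As a toy illustration of this suppression mechanism (our sketch, not the paper's circuit-level simulation): under an i.i.d. bit-flip model, a distance-$d$ repetition code decoded by majority vote fails only when more than half of its qubits flip, so for a physical error rate $p$ below threshold the logical error rate falls off exponentially in $d$. The error rate `p = 0.05` below is illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def logical_error_rate(d, p, trials=1_000_000):
    """Monte Carlo estimate for a distance-d repetition code under
    i.i.d. bit-flips of probability p, decoded by majority vote
    (decoding fails when more than half of the qubits flip)."""
    flips = rng.random((trials, d)) < p
    return (flips.sum(axis=1) > d // 2).mean()

for d in (3, 5, 7, 9):
    print(f"d = {d}  p_logical ~ {logical_error_rate(d, 0.05):.2e}")
```

Running this shows the logical error rate dropping by roughly an order of magnitude at each code distance, the same qualitative behaviour the experiment reports as qubits are added.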
We develop a classical bit-flip correction method to mitigate measurement errors on quantum computers. This method can be applied to any operator, any number of qubits, and any realistic bit-flip probability. We first demonstrate the successful performance of this method by correcting the noisy measurements of the ground-state energy of the longitudinal Ising model. We then generalize our results to arbitrary operators and test our method both numerically and experimentally on IBM quantum hardware. As a result, our correction method reduces the measurement error on the quantum hardware by up to one order of magnitude. Finally, we discuss how to pre-process the method and extend it to other error sources beyond measurement errors. For local Hamiltonians, the overhead costs are polynomial in the number of qubits, even if multi-qubit correlations are included.
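For intuition about how such a classical correction works (a minimal single-qubit sketch with assumed flip probabilities, not the paper's general multi-qubit scheme): readout bit-flips act linearly on the expectation value, $\langle Z\rangle_{\mathrm{noisy}} = (1 - p_0 - p_1)\,\langle Z\rangle_{\mathrm{true}} + (p_1 - p_0)$, where $p_0$ and $p_1$ are the $0\to1$ and $1\to0$ flip probabilities, so the channel can simply be inverted.

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed (illustrative) readout bit-flip probabilities.
p0 = 0.03   # Prob(read 1 | true 0)
p1 = 0.08   # Prob(read 0 | true 1)

# Simulate noisy Z measurements of a state with Prob(|0>) = q0.
q0, shots = 0.9, 200_000
true_bits = (rng.random(shots) >= q0).astype(int)           # 1 with prob 1 - q0
flip_prob = np.where(true_bits == 0, p0, p1)
read_bits = true_bits ^ (rng.random(shots) < flip_prob).astype(int)

z_noisy = 1 - 2 * read_bits.mean()
# Invert the linear channel <Z>_noisy = (1 - p0 - p1) <Z>_true + (p1 - p0):
z_corrected = (z_noisy - (p1 - p0)) / (1 - p0 - p1)
print(z_noisy, z_corrected, 2 * q0 - 1)   # corrected ~ exact value 0.8
```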
The accumulation of quantum phase in response to a signal is the central mechanism of quantum sensing; as such, loss of phase information presents a fundamental limitation. For this reason, approaches to extending quantum coherence in the presence of noise are being actively explored. Here we experimentally protect a room-temperature hybrid spin register against environmental decoherence by performing repeated quantum error correction whilst maintaining sensitivity to signal fields. We use a long-lived nuclear spin to correct multiple phase errors on a sensitive electron spin in diamond and realize magnetic field sensing beyond the timescales set by natural decoherence. The universal extension of sensing time, robust to noise at any frequency, demonstrates the definitive advantage entangled multi-qubit systems provide for quantum sensing and offers an important complement to quantum control techniques. In particular, our work opens the door for detecting minute signals in the presence of high frequency noise, where standard protocols reach their limits.
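As a schematic picture of why longer coherence helps (our toy model, not the paper's analysis): in a Ramsey-type measurement the phase $\phi = \gamma B t$ accumulated from a field $B$ is read out through a signal damped by decoherence, so the usable interrogation time, and hence the smallest resolvable field, is limited by $T_2$. All parameters below are illustrative.

```python
import numpy as np

# Illustrative (assumed) parameters for an electron-spin Ramsey sequence.
gamma = 2 * np.pi * 28e9         # gyromagnetic ratio, rad s^-1 T^-1
B = 1e-6                         # signal field amplitude, tesla
t = np.linspace(0, 50e-6, 2000)  # interrogation times, seconds

def ramsey_signal(t, T2):
    """Accumulated phase gamma*B*t, read out through a decoherence envelope."""
    return np.exp(-t / T2) * np.sin(gamma * B * t)

bare = ramsey_signal(t, T2=2e-6)        # limited by natural decoherence
protected = ramsey_signal(t, T2=20e-6)  # coherence extended 10x (assumed)
```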
Quantum data is susceptible to decoherence induced by the environment and to errors in the hardware processing it. A future fault-tolerant quantum computer will use quantum error correction (QEC) to actively protect against both. In the smallest QEC codes, the information in one logical qubit is encoded in a two-dimensional subspace of a larger Hilbert space of multiple physical qubits. For each code, a set of non-demolition multi-qubit measurements, termed stabilizers, can discretize and signal physical qubit errors without collapsing the encoded information. Experimental demonstrations of QEC to date, using nuclear magnetic resonance, trapped ions, photons, superconducting qubits, and NV centers in diamond, have circumvented stabilizers at the cost of decoding at the end of a QEC cycle. This decoding leaves the quantum information vulnerable to physical qubit errors until re-encoding, violating a basic requirement for fault tolerance. Using a five-qubit superconducting processor, we realize the two parity measurements comprising the stabilizers of the three-qubit repetition code protecting one logical qubit from physical bit-flip errors. We construct these stabilizers as parallelized indirect measurements using ancillary qubits, and evidence their non-demolition character by generating three-qubit entanglement from superposition states. We demonstrate stabilizer-based quantum error detection (QED) by subjecting a logical qubit to coherent and incoherent bit-flip errors on its constituent physical qubits. While increased physical qubit coherence times and shorter QED blocks are required to actively safeguard quantum information, this demonstration is a critical step toward larger codes based on multiple parity measurements.
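The logic of the two parity measurements can be captured classically: for bit-flip errors, the stabilizers $Z_1Z_2$ and $Z_2Z_3$ reduce to parity checks whose joint outcome (the syndrome) locates a single flipped qubit without revealing the encoded bit. A minimal sketch (our illustration, not the experiment's control software):

```python
# Syndrome (parity of Z1Z2, parity of Z2Z3) -> qubit to correct.
SYNDROME_TO_CORRECTION = {
    (0, 0): None,   # no error detected
    (1, 0): 0,      # bit-flip on qubit 0
    (1, 1): 1,      # bit-flip on qubit 1
    (0, 1): 2,      # bit-flip on qubit 2
}

def measure_stabilizers(bits):
    """Classical shadow of Z1Z2 and Z2Z3: parities of neighbouring bits."""
    return (bits[0] ^ bits[1], bits[1] ^ bits[2])

def correct(bits):
    qubit = SYNDROME_TO_CORRECTION[measure_stabilizers(bits)]
    if qubit is not None:
        bits[qubit] ^= 1
    return bits

assert correct([0, 1, 0]) == [0, 0, 0]   # single flip on qubit 1 is corrected
assert correct([1, 1, 1]) == [1, 1, 1]   # a codeword is left untouched
```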
In the theory of operator quantum error correction (OQEC), the notion of correctability is defined under the assumption that states are perfectly initialized inside a particular subspace, a factor of which (a subsystem) contains the protected information. If the initial state of the system does not belong entirely to the subspace in question, the restriction of the state to the otherwise correctable subsystem may not remain invariant after the application of noise and error correction. It is known that in the case of decoherence-free subspaces and subsystems (DFSs) the condition for perfect unitary evolution inside the code imposes more restrictive conditions on the noise process if one allows imperfect initialization. It was believed that these conditions are necessary if DFSs are to be able to protect imperfectly encoded states from subsequent errors. By a similar argument, general OQEC codes would also require more restrictive error-correction conditions for the case of imperfect initialization. In this study, we examine this requirement by looking at the errors on the encoded state. In order to quantitatively analyze the errors in an OQEC code, we introduce a measure of the fidelity between the encoded information in two states for the case of subsystem encoding. A major part of the paper concerns the definition of the measure and the derivation of its properties. In contrast to what was previously believed, we find that more restrictive conditions are necessary neither for DFSs nor for general OQEC codes. This is because the effective noise that can arise inside the code as a result of imperfect initialization is such that it can only increase the fidelity of an imperfectly encoded state with a perfectly encoded one.
We study the performance of quantum error correction codes (QECCs) under detection-induced coherent errors arising from imperfect practical implementations of stabilizer measurements after a quantum circuit is run. Considering the most promising surface code, we find that detection-induced coherent errors result in undetected error terms, which accumulate and evolve into logical errors. However, we show that this kind of error is alleviated by increasing the code size, akin to the elimination of the other error types discussed previously. We also find that, in the presence of detection-induced coherent errors, the exact surface code becomes an approximate QECC.
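The claim that larger codes alleviate these errors parallels the standard below-threshold scaling for stochastic noise (a textbook relation, not a formula from this abstract): for a physical error rate $p$ below the threshold $p_{\mathrm{th}}$, a distance-$d$ surface code suppresses the logical error rate roughly as

$$ p_L \approx A \left( \frac{p}{p_{\mathrm{th}}} \right)^{\lfloor (d+1)/2 \rfloor}, $$

so moving to the next (odd) code distance multiplies $p_L$ by another factor of $p/p_{\mathrm{th}}$.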