The remarkable discovery of Quantum Error Correction (QEC), which can overcome the errors experienced by a bit of quantum information (qubit), was a critical advance that gives hope for eventually realizing practical quantum computers. In principle, a system that implements QEC can pass a break-even point and preserve quantum information for longer than the lifetime of its constituent parts. Reaching the break-even point, however, has thus far remained an outstanding and challenging goal. Several previous works have demonstrated elements of QEC in NMR, trapped ions, nitrogen-vacancy (NV) centers, photons, and superconducting transmons. However, these works primarily illustrate the signatures or scaling properties of QEC codes rather than test the capacity of the system to extend the lifetime of quantum information. Here we demonstrate a QEC system that reaches the break-even point by suppressing the natural errors due to energy loss for a qubit logically encoded in superpositions of coherent states, or cat states, of a superconducting resonator. Moreover, the experiment implements a full QEC protocol, using real-time feedback to encode, monitor naturally occurring errors, decode, and correct. As measured by full process tomography, the enhanced lifetime of the encoded information is 320 microseconds without any post-selection. This is 20 times greater than that of the system's transmon, over twice as long as an uncorrected logical encoding, and 10% longer than that of the highest-quality element of the system (the resonator's 0 and 1 Fock states). Our results illustrate the power of novel, hardware-efficient qubit encodings over traditional QEC schemes. Furthermore, they advance the field of experimental error correction from confirming the basic concepts to exploring the metrics that drive system performance and the challenges in implementing a fault-tolerant system.
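To make the encoding concrete, the sketch below uses QuTiP to show the error syndrome exploited here: a single photon-loss event flips the photon-number parity of a cat state while the encoded information remains recoverable. This is a minimal illustration; the cutoff N and amplitude alpha are arbitrary choices, not the experimental parameters.

```python
# Minimal QuTiP sketch: a photon-loss event flips the photon-number parity of
# a cat state. This parity flip is the error syndrome that can be monitored
# without disturbing the encoded information. N and alpha are illustrative.
import numpy as np
from qutip import coherent, destroy, num, expect

N, alpha = 30, 2.0                      # Fock-space cutoff, cat amplitude
a = destroy(N)                          # annihilation operator (photon loss)
parity = (1j * np.pi * num(N)).expm()   # photon-number parity exp(i*pi*n)

# Even cat state |C+> ~ |alpha> + |-alpha>, one logical basis state.
cat_even = (coherent(N, alpha) + coherent(N, -alpha)).unit()

# A single photon loss, a|C+>, yields the odd cat: the parity syndrome
# flips from +1 to -1 while the logical content remains recoverable.
cat_lost = (a * cat_even).unit()

print(expect(parity, cat_even).real)   # ~ +1 (even parity)
print(expect(parity, cat_lost).real)   # ~ -1 (odd parity)
```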
The typical model for measurement noise in quantum error correction is to randomly flip the binary measurement outcome. In experiments, measurements yield much richer information, e.g., continuous current values or discrete photon counts, which is then mapped into binary outcomes by discarding some of this information. In this work, we consider methods to incorporate all of this richer information, typically called soft information, into the decoding of quantum error correction codes, and in particular the surface code. We describe how to modify both the Minimum Weight Perfect Matching and Union-Find decoders to leverage soft information, and demonstrate that these soft decoders outperform the standard (hard) decoders, which can only access the binary measurement outcomes. Moreover, we observe that the soft decoder achieves a threshold 25% higher than any hard decoder for phenomenological noise with Gaussian soft measurement outcomes. We also introduce a soft measurement error model with amplitude damping, in which measurement time leads to a trade-off between measurement resolution and additional disturbance of the qubits. Under this model we observe that the performance of the surface code is very sensitive to the choice of the measurement time: for a distance-19 surface code, a five-fold increase in measurement time can lead to a thousand-fold increase in logical error rate. Moreover, the measurement time that minimizes the physical error rate is distinct from the one that minimizes the logical error rate, pointing to the benefits of jointly optimizing the physical and quantum error correction layers.
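As an illustration of how soft information can enter a matching decoder, the following sketch converts a continuous readout value into a hard outcome plus a log-likelihood edge weight. It is a hypothetical example assuming a Gaussian readout model with means mu0/mu1 and common sigma; the names and numbers are not taken from the paper's implementation.

```python
# Hypothetical sketch: map a continuous readout value x to a hard outcome and
# a soft edge weight for a matching decoder. Assumes the two outcome classes
# give Gaussian signals with means mu0/mu1 and a common sigma (equal priors).
import math

def soft_weight(x, mu0=0.0, mu1=1.0, sigma=0.3):
    """Return (hard outcome, edge weight) for one analog measurement.

    A hard decoder thresholds x at (mu0 + mu1) / 2 and gives every
    measurement-error edge the same weight; the soft weight instead reflects
    how confident the analog outcome is, so ambiguous outcomes become
    cheap for the matcher to flip.
    """
    p0 = math.exp(-((x - mu0) ** 2) / (2 * sigma ** 2))  # ~ P(x | outcome 0)
    p1 = math.exp(-((x - mu1) ** 2) / (2 * sigma ** 2))  # ~ P(x | outcome 1)
    hard = 0 if p0 >= p1 else 1
    # -log posterior probability that the hard assignment is wrong.
    weight = -math.log(min(p0, p1) / (p0 + p1))
    return hard, weight

print(soft_weight(0.45))  # near the boundary: small weight, easy to flip
print(soft_weight(0.0))   # confident outcome: large weight
```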
Quantum error correction (QEC) is one of the central concepts in quantum information science and also has wide applications in fundamental physics. Capacity theorems provide the solid foundations of QEC. Here we provide a general and highly applicable form of the capacity theorem for both classical and quantum information, i.e., hybrid information, with the assistance of a limited resource of entanglement in the one-shot scenario, which covers broader situations than the existing ones. Harnessing the wide applicability of the theorem, we show that a demonstration of QEC by short random quantum circuits is feasible and that QEC is intrinsic to quantum chaotic systems. Our results bridge progress in quantum information theory, near-future quantum technology, and fundamental physics.
To implement fault-tolerant quantum computation with continuous variables, the Gottesman--Kitaev--Preskill (GKP) qubit has been recognized as an important technological element. We have previously proposed a method to reduce the squeezing level required to realize large-scale quantum computation with the GKP qubit [Phys. Rev. X {\bf 8}, 021054 (2018)], harnessing the virtue of analog information in the GKP qubits. In the present work, to reduce the number of qubits required for large-scale quantum computation, we propose tracking quantum error correction, in which the logical-qubit-level quantum error correction is partially substituted by single-qubit-level quantum error correction. In the proposed method, analog quantum error correction is utilized to make the performance of the single-qubit-level quantum error correction almost identical to that of the logical-qubit-level quantum error correction at a practical noise level. The numerical results show that the proposed tracking quantum error correction reduces the number of qubits during a quantum error correction process by the reduction rate $\left\{2(n-1)\times 4^{l-1}-n+1\right\}/\left(2n \times 4^{l-1}\right)$ for $n$ cycles of the quantum error correction process using Knill's $C_{4}/C_{6}$ code with concatenation level $l$. Hence, the proposed tracking quantum error correction offers a great advantage in reducing the required number of physical qubits, and will open a new way to bring out the advantages of GKP qubits in practical quantum computation.
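As a quick numeric check of the quoted reduction rate, the short self-contained sketch below evaluates it for a few cycle counts and concatenation levels (the parameter choices are arbitrary):

```python
# Self-contained check of the quoted reduction rate for n cycles of QEC with
# Knill's C4/C6 code at concatenation level l:
#   {2(n-1) * 4^(l-1) - n + 1} / (2n * 4^(l-1))
def reduction_rate(n, l):
    blocks = 4 ** (l - 1)
    return (2 * (n - 1) * blocks - n + 1) / (2 * n * blocks)

for n in (2, 5, 10):
    print([round(reduction_rate(n, l), 3) for l in (1, 2, 3)])
# For many cycles the rate approaches 1 - 1/(2 * 4^(l-1)), i.e. more than
# half of the physical qubits are saved once l >= 2.
```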
Quantum operations provide a general description of the state changes allowed by quantum mechanics. The reversal of quantum operations is important for quantum error-correcting codes, teleportation, and reversing quantum measurements. We derive information-theoretic conditions and equivalent algebraic conditions that are necessary and sufficient for a general quantum operation to be reversible. We analyze the thermodynamic cost of error correction and show that error correction can be regarded as a kind of "Maxwell demon," for which there is an entropy cost associated with the information obtained from measurements performed during error correction. A prescription for thermodynamically efficient error correction is given.
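The algebraic condition takes the familiar Knill--Laflamme form $P E_i^\dagger E_j P = \lambda_{ij} P$ on the support projector $P$. The sketch below checks it numerically for an illustrative "at most one bit flip" channel and the 3-qubit repetition code; both are stand-ins chosen for brevity, not examples from the paper.

```python
# Numpy sketch of the algebraic reversibility check: an operation with Kraus
# operators {E_i} is reversible on the support of a code projector P iff
# P Ei^dag Ej P = lambda_ij P (the Knill-Laflamme form of the condition).
# The channel and code below are illustrative, not taken from the paper.
import itertools
import numpy as np

I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])

def kron(*ops):
    out = np.eye(1)
    for op in ops:
        out = np.kron(out, op)
    return out

# Channel: with probability p a bit flip hits qubit k (k = 1, 2, 3),
# otherwise nothing happens; the Kraus operators sum correctly to identity.
p = 0.05
kraus = [np.sqrt(1 - 3 * p) * kron(I2, I2, I2),
         np.sqrt(p) * kron(X, I2, I2),
         np.sqrt(p) * kron(I2, X, I2),
         np.sqrt(p) * kron(I2, I2, X)]

# Projector onto the 3-qubit repetition code, span{|000>, |111>}.
P = np.zeros((8, 8))
P[0, 0] = P[7, 7] = 1.0

def reversible(kraus, P, tol=1e-10):
    for Ei, Ej in itertools.product(kraus, repeat=2):
        M = P @ Ei.conj().T @ Ej @ P
        lam = np.trace(M) / np.trace(P)          # candidate lambda_ij
        if not np.allclose(M, lam * P, atol=tol):
            return False
    return True

print(reversible(kraus, P))  # True: the operation is reversible on this code
```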
Quantum error correction (QEC) is an essential concept for any quantum information processing device. Typically, QEC is designed with minimal assumptions about the noise process; this generic approach exacts a high cost in efficiency and performance. In physical systems, errors are not likely to be arbitrary; rather, we will have reasonable models for the structure of quantum decoherence. We may therefore choose quantum error-correcting codes and recovery operations that specifically target the most likely errors. We present a convex optimization method to determine the optimal (in terms of average entanglement fidelity) recovery operation for a given channel, encoding, and information source; this is solvable via a semidefinite program (SDP). We also present computational algorithms to generate near-optimal recovery operations structured to begin with a projective syndrome measurement. These structured operations are more computationally scalable than the SDP required to compute the optimum, so we can numerically analyze longer codes. Using Lagrange duality, we bound the performance of the structured recovery operations and show that they are nearly optimal in many relevant cases. Finally, we present two classes of channel-adapted quantum error-correcting codes specifically designed for the amplitude damping channel. These have significantly higher rates with shorter block lengths than corresponding generic quantum error-correcting codes. Both classes are stabilizer codes and have good fidelity performance with stabilizer recovery operations. The encoding, syndrome measurement, and syndrome recovery operations can all be implemented with Clifford group operations.
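The following is a minimal sketch of a recovery SDP of this kind, assuming cvxpy with an SDP-capable solver is installed. The per-qubit amplitude-damping channel and 3-qubit repetition encoding are illustrative stand-ins kept small for brevity, not the channel-adapted codes described above; the Choi-matrix formulation (maximize a linear fidelity functional subject to positivity and trace preservation) is the standard one.

```python
# Sketch of the optimal-recovery SDP: maximize average entanglement fidelity
# over recovery maps R, represented by a Choi-type matrix X (column-stacking
# vec convention). Channel and encoding here are small illustrative choices.
import itertools
import numpy as np
import cvxpy as cp

gamma, d, n = 0.2, 2, 8      # damping strength, logical dim, codespace dim

# Single-qubit amplitude-damping Kraus operators.
A0 = np.array([[1, 0], [0, np.sqrt(1 - gamma)]])
A1 = np.array([[0, np.sqrt(gamma)], [0, 0]])

# All 8 three-qubit Kraus operators of the i.i.d. damping channel.
def kron3(ops):
    return np.kron(np.kron(ops[0], ops[1]), ops[2])
E = [kron3(ops) for ops in itertools.product([A0, A1], repeat=3)]

# Encoding isometry |0> -> |000>, |1> -> |111>.
U = np.zeros((n, d))
U[0, 0] = U[7, 1] = 1.0

# F_e = (1/d^2) tr(X C), with C built from the channel-after-encoding Kraus
# operators Ek @ U via the identity tr(R Ek U) = vec((Ek U)^dag)^dag vec(R).
C = np.zeros((n * d, n * d), dtype=complex)
for Ek in E:
    w = (Ek @ U).conj().T.flatten(order='F')   # vec((Ek U)^dagger)
    C += np.outer(w, w.conj())

X = cp.Variable((n * d, n * d), hermitian=True)
constraints = [X >> 0,
               # trace preservation: tracing out the output leaves identity
               cp.partial_trace(X, dims=[n, d], axis=1) == np.eye(n)]
prob = cp.Problem(cp.Maximize(cp.real(cp.trace(C @ X)) / d ** 2), constraints)
prob.solve()
print("optimal entanglement fidelity:", prob.value)
```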