Many physical systems considered promising qubit candidates are not, in fact, two-level systems. Such systems can leak out of the preferred computational states, leading to errors on any qubits that interact with leaked qubits. Without specific methods of dealing with leakage, long-lived leakage can lead to time-correlated errors. We study the impact of such time-correlated errors on topological quantum error correction codes, which are considered highly practical codes, using the repetition code as a representative case study. We show that, under physically reasonable assumptions, a threshold error rate still exists; however, performance is significantly degraded. We then describe simple additional quantum circuitry that, when included in the error detection cycle, restores performance to acceptable levels.
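To make the threshold behaviour concrete, the following Monte Carlo sketch decodes a distance-d repetition code under i.i.d. bit-flip noise with a simple majority vote. It is a toy model only, not the circuit-level leakage model studied above, and the function name and parameters are purely illustrative.

```python
# Toy Monte Carlo: distance-d repetition code under i.i.d. bit-flip noise with
# majority-vote decoding. Illustrates threshold-like scaling only; it does not
# model leakage or circuit-level noise.
import numpy as np

def logical_error_rate(distance, p, trials=100_000, rng=None):
    """Probability that majority-vote decoding fails when each of the
    `distance` bits flips independently with probability `p`."""
    rng = np.random.default_rng() if rng is None else rng
    flips = rng.random((trials, distance)) < p       # independent bit flips
    failures = flips.sum(axis=1) > distance // 2     # a majority of bits flipped
    return failures.mean()

if __name__ == "__main__":
    for p in (0.01, 0.05, 0.2):
        rates = ", ".join(f"d={d}: {logical_error_rate(d, p):.5f}" for d in (3, 5, 7, 9))
        print(f"p = {p:.2f} -> {rates}")
```

For flip probabilities below one half, the printed logical error rates shrink rapidly as the distance grows, which is the qualitative behaviour that a threshold implies.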
Leakage errors occur when a quantum system leaves the two-level qubit subspace. Reducing these errors is critically important for quantum error correction to be viable. To quantify leakage errors, we use randomized benchmarking in conjunction with measurement …
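As a rough sketch of how a per-gate leakage rate can be extracted from sequence-length data, the toy model below (not the experimental protocol referenced above) treats each gate as leaking population out of the qubit subspace with probability L and returning it with probability S, then fits the decaying subspace population to an exponential. The rates, sequence lengths, and use of SciPy's curve_fit are assumptions made for illustration.

```python
# Toy leakage-benchmarking sketch: a rate-equation model of per-gate leakage
# and seepage, with an exponential fit to the measured subspace population.
import numpy as np
from scipy.optimize import curve_fit

L_TRUE, S_TRUE = 0.002, 0.01            # illustrative per-gate leakage/seepage rates
lengths = np.arange(0, 401, 20)         # illustrative sequence lengths

def subspace_population(m, L, S, p0=1.0):
    """Population left in the qubit subspace after m gates (rate-equation model)."""
    p_inf = S / (L + S)
    return p_inf + (p0 - p_inf) * (1.0 - L - S) ** m

# "Measured" data: the model plus a little shot noise.
rng = np.random.default_rng(1)
data = subspace_population(lengths, L_TRUE, S_TRUE) + rng.normal(0, 0.002, lengths.size)

# Fit A + B * lambda^m, then convert lambda back to a total leakage rate L + S.
model = lambda m, A, B, lam: A + B * lam ** m
(A, B, lam), _ = curve_fit(model, lengths, data, p0=(0.8, 0.2, 0.98))
print(f"estimated L + S = {1 - lam:.4f} (true value {L_TRUE + S_TRUE:.4f})")
```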
Quantum computation requires qubits that satisfy often-conflicting criteria, including scalable control and long-lasting coherence. One approach to creating a suitable qubit is to operate in an encoded subspace of several physical qubits. Though such …
Fracton topological phases have a large number of materialized symmetries that enforce a rigid structure on their excitations. Remarkably, we find that the symmetries of a quantum error-correcting code based on a fracton phase enable us to design decoders …
We simulate four quantum error correcting codes under error models inspired by realistic noise sources in near-term ion trap quantum computers: $T_2$ dephasing, gate overrotation, and crosstalk. We use this data to find preferred codes for given error …
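For concreteness, here is a minimal density-matrix sketch of two of the named channels, $T_2$ dephasing and gate overrotation, acting on a single qubit; crosstalk is omitted, and all parameter values and function names are illustrative.

```python
# Minimal single-qubit sketches of two of the named channels. Parameter values
# are illustrative; crosstalk is not modelled here.
import numpy as np

def dephase(rho, t, T2):
    """T2 dephasing (phase damping): off-diagonal elements decay by exp(-t/T2)."""
    out = rho.copy()
    decay = np.exp(-t / T2)
    out[0, 1] *= decay
    out[1, 0] *= decay
    return out

def rx(theta):
    """Single-qubit rotation about X by angle theta."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -1j * s], [-1j * s, c]])

def overrotated_rx(rho, theta, epsilon):
    """Apply an X rotation that overshoots its target angle by epsilon."""
    u = rx(theta + epsilon)
    return u @ rho @ u.conj().T

if __name__ == "__main__":
    plus = 0.5 * np.ones((2, 2), dtype=complex)             # |+><+|
    print(dephase(plus, t=10e-6, T2=50e-6))                 # coherences shrink
    print(overrotated_rx(plus, theta=np.pi, epsilon=0.01))  # slightly miscalibrated pi-pulse
```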
We present an algorithm for error correction in topological codes that exploits modern machine learning techniques. Our decoder is constructed from a stochastic neural network called a Boltzmann machine, of the type extensively used in deep learning.
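To illustrate the general idea of a learned syndrome decoder, the sketch below trains a small feed-forward network, a deliberately simplified stand-in rather than the Boltzmann machine described above, to map repetition-code syndromes to the correct error class. The code distance, error rate, and network size are illustrative choices.

```python
# Simplified stand-in for a learned syndrome decoder: a tiny feed-forward
# network (not the Boltzmann machine of the paper) trained to map
# repetition-code syndromes to the correct error class.
import numpy as np

rng = np.random.default_rng(0)
D, P, N, LR = 7, 0.15, 50_000, 0.5   # distance, flip rate, samples, step size (all illustrative)

# Training data: i.i.d. bit-flip errors, their syndromes (parities of adjacent
# bits), and a label (the first bit, which together with the syndrome fixes the error).
errors = (rng.random((N, D)) < P).astype(float)
syndromes = np.logical_xor(errors[:, :-1], errors[:, 1:]).astype(float)
labels = errors[:, :1]

# One hidden layer of sigmoid units, trained by full-batch gradient descent
# on the cross-entropy loss.
W1 = rng.normal(0.0, 0.5, (D - 1, 16)); b1 = np.zeros(16)
W2 = rng.normal(0.0, 0.5, (16, 1));     b2 = np.zeros(1)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

for _ in range(2000):
    h = sigmoid(syndromes @ W1 + b1)
    y = sigmoid(h @ W2 + b2)
    grad_y = (y - labels) / N                          # gradient at the output pre-activation
    grad_h = (grad_y @ W2.T) * h * (1.0 - h)
    W2 -= LR * (h.T @ grad_y);         b2 -= LR * grad_y.sum(axis=0)
    W1 -= LR * (syndromes.T @ grad_h); b1 -= LR * grad_h.sum(axis=0)

h = sigmoid(syndromes @ W1 + b1)
y = sigmoid(h @ W2 + b2)
print(f"training accuracy of the learned decoder: {((y > 0.5) == labels).mean():.3f}")
```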