The typical model for measurement noise in quantum error correction is to randomly flip the binary measurement outcome. In experiments, measurements yield much richer information, e.g., continuous current values or discrete photon counts, which is then mapped into binary outcomes by discarding some of this information. In this work, we consider methods to incorporate all of this richer information, typically called soft information, into the decoding of quantum error correction codes, and in particular the surface code. We describe how to modify both the Minimum Weight Perfect Matching and Union-Find decoders to leverage soft information, and demonstrate that these soft decoders outperform the standard (hard) decoders, which can only access the binary measurement outcomes. Moreover, we observe that the soft decoder achieves a threshold 25% higher than any hard decoder for phenomenological noise with Gaussian soft measurement outcomes. We also introduce a soft measurement error model with amplitude damping, in which measurement time leads to a trade-off between measurement resolution and additional disturbance of the qubits. Under this model we observe that the performance of the surface code is very sensitive to the choice of the measurement time: for a distance-19 surface code, a five-fold increase in measurement time can lead to a thousand-fold increase in logical error rate. Moreover, the measurement time that minimizes the physical error rate is distinct from the one that minimizes the logical error rate, pointing to the benefits of jointly optimizing the physical and quantum error correction layers.
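To make the idea concrete, the following minimal Python sketch (our illustration, not the authors' implementation; the Gaussian means, the variance, and the function names are assumptions) shows how a raw analog outcome can be converted into a flip probability and then into a log-likelihood edge weight of the kind a matching-based decoder could use:

```python
import math

# Assumed model: the analog readout for outcome "0" is Gaussian around -1,
# and for outcome "1" Gaussian around +1, with equal priors.
def soft_flip_probability(x: float, mu0: float = -1.0, mu1: float = 1.0,
                          sigma: float = 1.0) -> float:
    """Posterior probability that hard-thresholding the analog value x
    misidentifies the binary outcome."""
    p0 = math.exp(-((x - mu0) ** 2) / (2 * sigma ** 2))
    p1 = math.exp(-((x - mu1) ** 2) / (2 * sigma ** 2))
    return min(p0, p1) / (p0 + p1)

def edge_weight(x: float) -> float:
    """Log-likelihood weight: outcomes near the decision boundary (x ~ 0)
    yield cheap edges, confident outcomes yield expensive ones."""
    p = soft_flip_probability(x)
    return -math.log(p / (1 - p))

for x in (0.05, 0.5, 1.0):
    print(f"x = {x:+.2f} -> flip prob {soft_flip_probability(x):.3f}, "
          f"weight {edge_weight(x):.2f}")
```

A hard decoder discards x and assigns every measurement the same flip probability; the soft weight above is what lets ambiguous readouts be matched cheaply.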
Extensive quantum error correction is necessary in order to perform a useful computation on a noisy quantum computer. Moreover, quantum error correction must be implemented based on imperfect parity check measurements that may return incorrect outcomes or inject additional faults into the qubits. To achieve fault-tolerant error correction, Shor proposed to repeat the sequence of parity check measurements until the same outcome is observed sufficiently many times. Then, one can use this information to perform error correction. A basic implementation of this fault tolerance strategy requires $\Omega(r d^2)$ parity check measurements for a distance-$d$ code defined by $r$ parity checks. For some specific highly structured quantum codes, Bombin has shown that single-shot fault-tolerant quantum error correction is possible using only $r$ measurements. In this work, we demonstrate that fault-tolerant quantum error correction can be achieved using $O(d \log d)$ measurements for any code with distance $d \geq \Omega(n^{\alpha})$ for some constant $\alpha > 0$. Moreover, we prove the existence of a sub-single-shot fault-tolerant quantum error correction scheme using fewer than $r$ measurements. In some cases, the number of parity check measurements required for fault-tolerant quantum error correction is exponentially smaller than the number of parity checks defining the code.
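For a sense of scale, a back-of-the-envelope comparison (our own arithmetic with standard surface-code parameters, not figures from this work, and ignoring the constants hidden in the asymptotic notation):

```python
import math

# A distance-d surface code has n = d^2 data qubits and r = d^2 - 1 checks.
for d in (5, 11, 19):
    r = d ** 2 - 1
    shor_style = r * d ** 2        # Omega(r d^2): repeat all checks ~d^2 times
    this_work = d * math.log(d)    # O(d log d) measurements (natural log)
    print(f"d={d:2d}: r={r:4d}, Shor-style ~{shor_style:6d}, "
          f"O(d log d) ~{this_work:6.1f}")
```

Even at modest distances the gap between the repeat-until-confident budget and the $O(d \log d)$ budget spans several orders of magnitude.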
Quantum f-divergences are a quantum generalization of the classical notion of f-divergences, and are a special case of Petz quasi-entropies. Many well-known distinguishability measures of quantum states are given by, or derived from, f-divergences; special examples include the quantum relative entropy, the Rényi relative entropies, and the Chernoff and Hoeffding measures. Here we show that the quantum f-divergences are monotonic under the dual of Schwarz maps whenever the defining function is operator convex. This extends and unifies all previously known monotonicity results. We also analyze the case where the monotonicity inequality holds with equality, and extend Petz's reversibility theorem to a large class of f-divergences and other distinguishability measures. We apply our findings to the problem of quantum error correction, and show that if a stochastic map preserves the pairwise distinguishability on a set of states, as measured by a suitable f-divergence, then its action can be reversed on that set by another stochastic map that can be constructed from the original one in a canonical way. We also provide an integral representation for operator convex functions on the positive half-line, which is the main ingredient in extending previously known results on the monotonicity inequality and the case of equality. We also consider some special cases where the convexity of f is sufficient for monotonicity, and obtain the inverse Hölder inequality for operators as an application. The presentation is completely self-contained and requires only standard knowledge of matrix analysis.
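For orientation, recall the standard Petz-type definition (our recollection of the usual form, with our notation; it is not quoted from this work): for positive definite states $\rho$ and $\sigma$,

$$
S_f(\rho \,\|\, \sigma) = \big\langle \sigma^{1/2},\, f\big(L_\rho R_\sigma^{-1}\big)\, \sigma^{1/2} \big\rangle_{\mathrm{HS}}, \qquad L_\rho(X) = \rho X, \quad R_\sigma(X) = X \sigma,
$$

where $\langle A, B \rangle_{\mathrm{HS}} = \mathrm{Tr}\, A^{\dagger} B$. The choice $f(x) = x \log x$ recovers the relative entropy $\mathrm{Tr}\, \rho (\log \rho - \log \sigma)$, and the monotonicity statement takes the form $S_f(\Phi(\rho) \,\|\, \Phi(\sigma)) \leq S_f(\rho \,\|\, \sigma)$ for the class of maps $\Phi$ considered in the paper.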
Quantum key distribution (QKD) offers a practical solution for secure communication between two distant parties via a quantum channel and an authenticated public channel. In this work, we consider different approaches to quantum bit error rate (QBER) estimation at the information reconciliation stage of the post-processing procedure. For reconciliation schemes employing low-density parity-check (LDPC) codes, we develop a novel syndrome-based QBER estimation algorithm. The suggested algorithm is suitable for irregular LDPC codes and takes punctured and shortened bits into account. Testing our approach in a real QKD setup, we show that combining the proposed algorithm with conventional QBER estimation techniques improves the accuracy of the QBER estimation.
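To illustrate the flavor of syndrome-based estimation (a minimal sketch, not the authors' algorithm: it assumes a regular code with a single check weight w, whereas the paper handles irregular codes with punctured and shortened bits), note that over a binary symmetric channel with crossover probability q, a weight-w parity check is unsatisfied with probability (1 - (1 - 2q)^w)/2, which can be inverted from the observed syndrome:

```python
# Sketch under the assumptions above; function name is ours.
def qber_from_syndrome(unsat_fraction: float, w: int) -> float:
    """Invert p_w = (1 - (1 - 2q)^w) / 2 to recover q from the observed
    fraction of unsatisfied checks. Valid for unsat_fraction < 0.5."""
    return 0.5 * (1.0 - (1.0 - 2.0 * unsat_fraction) ** (1.0 / w))

# Example: weight-6 checks with 13.2% unsatisfied -> q ~ 2.5%
print(qber_from_syndrome(0.132, 6))
```

For an irregular code, such per-weight estimates would have to be combined across the different check weights present in the code, which is part of what the proposed algorithm addresses.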
The standard quantum error correction protocols use projective measurements to extract the error syndromes from the encoded states. We consider the more general scenario of weak measurements, where only partial information about the error syndrome can be extracted from the encoded state. We construct a feedback protocol that probabilistically corrects the error based on the extracted information. Using numerical simulations of one-qubit error correction codes, we show that our error correction succeeds for a range of weak measurement strengths, where (a) the error rate is below the threshold beyond which multiple errors dominate, and (b) the error rate is less than the rate at which the weak measurement extracts information. As expected, error correction with too small a measurement strength should be avoided.
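As a concrete illustration of the measurement model (a minimal sketch under our own parametrization; the strength parameter eps and the function name are assumptions, not the authors' notation), a weak Z measurement can be written with two Kraus operators that interpolate between no measurement (eps = 0) and a projective measurement (eps = 1):

```python
import numpy as np

def weak_z_kraus(eps: float):
    """Kraus operators for a strength-eps weak measurement of Pauli Z."""
    P_plus = np.array([[1, 0], [0, 0]], dtype=float)   # projector onto |0>
    P_minus = np.array([[0, 0], [0, 1]], dtype=float)  # projector onto |1>
    M_plus = np.sqrt((1 + eps) / 2) * P_plus + np.sqrt((1 - eps) / 2) * P_minus
    M_minus = np.sqrt((1 - eps) / 2) * P_plus + np.sqrt((1 + eps) / 2) * P_minus
    return M_plus, M_minus

M_plus, M_minus = weak_z_kraus(0.3)
# Completeness holds for any eps in [0, 1], so this is a valid measurement.
assert np.allclose(M_plus.T @ M_plus + M_minus.T @ M_minus, np.eye(2))

# A |+> state is only partially collapsed by the "+" outcome.
plus = np.array([1.0, 1.0]) / np.sqrt(2)
post = M_plus @ plus
print(post / np.linalg.norm(post))
```

The feedback protocol described above must act on exactly this kind of partially collapsed state, which is why the correction can only succeed probabilistically.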
Quantum error correction (QEC) is one of the central concepts in quantum information science and also has wide applications in fundamental physics. Capacity theorems provide a solid foundation for QEC. Here we provide a general and highly applicable form of the capacity theorem for both classical and quantum information, i.e., hybrid information, with the assistance of a limited entanglement resource in the one-shot scenario, which covers broader situations than existing results. Harnessing the wide applicability of the theorem, we show that a demonstration of QEC by short random quantum circuits is feasible and that QEC is intrinsic to quantum chaotic systems. Our results bridge progress in quantum information theory, near-future quantum technology, and fundamental physics.