
Comparing the Overhead of Topological and Concatenated Quantum Error Correction

Added by: Martin Suchara
Publication date: 2013
Fields: Physics
Language: English





This work compares the overhead of quantum error correction with concatenated and topological quantum error-correcting codes. To perform a numerical analysis, we use the Quantum Resource Estimator Toolbox (QuRE) that we recently developed. We use QuRE to estimate the number of qubits, quantum gates, and amount of time needed to factor a 1024-bit number on several candidate quantum technologies that differ in their clock speed and reliability. We make several interesting observations. First, topological quantum error correction requires fewer resources when physical gate error rates are high, while concatenated codes have smaller overhead for physical gate error rates below approximately $10^{-7}$. Consequently, we show that different error-correcting codes should be chosen for two of the studied physical quantum technologies, ion traps and superconducting qubits. Second, we observe that the composition of the elementary gate types occurring in a typical logical circuit, a fault-tolerant circuit protected by the surface code, and a fault-tolerant circuit protected by a concatenated code all differ. This also suggests that choosing the most appropriate error-correction technique depends on the ability of the future technology to perform specific gates efficiently.
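
The crossover described above can be pictured with textbook scaling arguments: a level-$l$ concatenated code suppresses the logical error rate roughly as $(p/p_{th})^{2^l}$, while a distance-$d$ surface code scales roughly as $(p/p_{th})^{(d+1)/2}$. The Python sketch below estimates the concatenation level or code distance needed to reach a target logical error rate; the thresholds, target rate, and per-qubit footprints are placeholder assumptions, not QuRE's calibrated numbers, so it only illustrates the kind of comparison the paper carries out and does not reproduce its crossover point.

    # Illustrative sketch only: simple scaling models with assumed thresholds and
    # footprints, not the QuRE toolbox or the paper's calibrated resource counts.
    P_TARGET = 1e-15        # assumed target logical error rate per logical gate
    P_TH_SURFACE = 1e-2     # assumed surface-code threshold
    P_TH_CONCAT = 1e-4      # assumed pseudo-threshold of the concatenated code
    QUBITS_PER_LEVEL = 7    # e.g. a [[7,1,3]] code multiplies the qubit count by 7 per level

    def concatenated_level(p):
        """Smallest concatenation level l with (p/p_th)^(2^l) below the target."""
        if p >= P_TH_CONCAT:
            return None
        l = 1
        while (p / P_TH_CONCAT) ** (2 ** l) > P_TARGET:
            l += 1
        return l

    def surface_distance(p):
        """Smallest odd distance d with (p/p_th)^((d+1)/2) below the target."""
        if p >= P_TH_SURFACE:
            return None
        d = 3
        while (p / P_TH_SURFACE) ** ((d + 1) / 2) > P_TARGET:
            d += 2
        return d

    for p in (1e-3, 1e-5, 1e-7, 1e-9):
        l, d = concatenated_level(p), surface_distance(p)
        concat = f"level {l}, ~{QUBITS_PER_LEVEL ** l} qubits" if l else "above threshold"
        surface = f"distance {d}, ~{2 * d * d} qubits" if d else "above threshold"
        print(f"p={p:.0e}:  concatenated: {concat};  surface: {surface}")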



Related research

Fault-tolerant quantum error correction is essential for implementing quantum algorithms of significant practical importance. In this work, we propose a highly effective use of the surface-GKP code, i.e., the surface code consisting of bosonic GKP qubits instead of bare two-dimensional qubits. In our proposal, we use error-corrected two-qubit gates between GKP qubits and introduce a maximum-likelihood decoding strategy for correcting shift errors in the two-GKP-qubit gates. Our proposed decoding reduces the total CNOT failure rate of the GKP qubits, e.g., from $0.87\%$ to $0.36\%$ at a GKP squeezing of $12$ dB, compared to the case where the simple closest-integer decoding is used. Then, by concatenating the GKP code with the surface code, we find that the threshold GKP squeezing is given by $9.9$ dB under the assumption that finite squeezing of the GKP states is the dominant noise source. More importantly, we show that a low logical failure rate $p_{L} < 10^{-7}$ can be achieved with moderate hardware requirements, e.g., $291$ modes and $97$ qubits at a GKP squeezing of $12$ dB, as opposed to $1457$ bare qubits for the standard rotated surface code at an equivalent noise level (i.e., $p=0.36\%$). Such a low failure rate of our surface-GKP code is possible through the use of space-time correlated edges in the matching graphs of the surface code decoder. Further, all edge weights in the matching graphs are computed dynamically based on analog information from the GKP error correction, using the full history of all syndrome measurement rounds. We also show that a highly squeezed GKP state of GKP squeezing $\gtrsim 12$ dB can be experimentally realized by using a dissipative stabilization method, namely the Big-small-Big method, with fairly conservative experimental parameters. Lastly, we introduce a three-level ancilla scheme to mitigate ancilla decay errors during GKP state preparation.
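
For contrast with the maximum-likelihood decoder introduced above, the sketch below shows the simple closest-integer GKP correction in isolation. The squeezing-to-variance convention (vacuum variance $1/2$) and the Gaussian-shift model are assumptions for illustration; the printed number is a single-quadrature misidentification rate, not the paper's CNOT failure rate, which also includes noise added by the gates.

    # Toy sketch of closest-integer GKP shift correction; conventions and parameters
    # are illustrative and do not reproduce the paper's circuit-level failure rates.
    import math

    ALPHA = math.sqrt(math.pi)   # lattice spacing of the square-lattice GKP code

    def sigma_from_squeezing_db(s_db):
        """Shift standard deviation for s_db of GKP squeezing (vacuum variance 1/2)."""
        return math.sqrt(0.5 * 10 ** (-s_db / 10))

    def closest_integer_correction(q):
        """Shift the decoder infers from a measured displacement q: the residual after
        snapping q to the nearest lattice point. Applying its negative undoes any
        shift smaller than sqrt(pi)/2 exactly."""
        return q - ALPHA * round(q / ALPHA)

    def misidentification_rate(sigma):
        """P(|shift| > sqrt(pi)/2) for a Gaussian shift of std sigma: the closest-integer
        decoder then picks the wrong lattice point (higher-order bins neglected)."""
        return math.erfc(ALPHA / (2 * sigma * math.sqrt(2)))

    sigma = sigma_from_squeezing_db(12.0)            # 12 dB, as quoted in the abstract
    print(closest_integer_correction(0.3))           # small shift: inferred correctly as 0.3
    print(f"{misidentification_rate(sigma):.1e}")    # per-quadrature rate, not the CNOT failure rate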
Fracton topological phases have a large number of materialized symmetries that enforce a rigid structure on their excitations. Remarkably, we find that the symmetries of a quantum error-correcting code based on a fracton phase enable us to design decoding algorithms. Here we propose and implement decoding algorithms for the three-dimensional X-cube model. In our example, decoding is parallelized into a series of two-dimensional matching problems, thus significantly simplifying the most time-consuming component of the decoder. We also find that the rigid structure of its point excitations enables us to obtain high threshold error rates. Our decoding algorithms bring to light some key ideas that we expect to be useful in the design of decoders for general topological stabilizer codes. Moreover, the notion of parallelization unifies several concepts in quantum error correction. We conclude by discussing the broad applicability of our methods, and we explain the connection between parallelizable codes and other methods of quantum error correction. In particular, we propose that the new concept represents a generalization of single-shot error correction.
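
One way to picture the "series of two-dimensional matching problems" is the standard minimum-weight matching subroutine sketched below. The lattice coordinates, Manhattan-distance weights, and use of networkx are illustrative choices, not the decoder construction of the paper.

    # Toy 2D matching step: pair syndrome defects on one layer by minimum total
    # Manhattan distance. Coordinates and weights are illustrative only.
    import itertools
    import networkx as nx

    def match_defects(defects):
        """Return a pairing of defect indices that minimizes the summed pair distances."""
        g = nx.Graph()
        for (i, a), (j, b) in itertools.combinations(enumerate(defects), 2):
            distance = abs(a[0] - b[0]) + abs(a[1] - b[1])
            g.add_edge(i, j, weight=-distance)   # negate so max-weight matching minimizes distance
        return nx.max_weight_matching(g, maxcardinality=True)

    # Example: four defects on a single 2D layer (toy coordinates).
    print(match_defects([(0, 0), (0, 3), (5, 5), (6, 5)]))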
Quantum information can be protected from decoherence and other errors, but only if these errors are sufficiently rare. For quantum computation to become a scalable technology, practical schemes for quantum error correction that can tolerate realistically high error rates will be necessary. In some physical systems, errors may exhibit a characteristic structure that can be carefully exploited to improve the efficacy of error correction. Here, we describe a scheme for topological quantum error correction to protect quantum information from a dephasing-biased error model, where we combine a repetition code with a topological cluster state. We find that the scheme tolerates error rates of up to 1.37%-1.83% per gate, requiring only short-range interactions in a two-dimensional array.
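
As a toy illustration of why a repetition-code layer helps against dephasing-biased noise, the sketch below estimates the residual phase-flip rate after majority voting. The independent-error model and the chosen rates are assumptions for illustration and are much simpler than the cluster-state scheme itself.

    # Toy model: independent phase flips on n qubits of a repetition code, corrected
    # by majority vote. Error rates and sizes are illustrative, not the paper's model.
    import numpy as np

    def residual_phase_flip_rate(p_z, n, trials=200_000, seed=1):
        """Monte-Carlo estimate of the logical phase-flip rate after majority voting."""
        rng = np.random.default_rng(seed)
        flips = rng.random((trials, n)) < p_z          # independent Z errors on each qubit
        return (flips.sum(axis=1) > n // 2).mean()     # majority flipped -> logical error

    for n in (3, 5, 7):
        print(f"n={n}: physical p_Z=5% -> logical ~{residual_phase_flip_rate(0.05, n):.4f}")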
Kosuke Fukui, Akihisa Tomita, 2018
To implement fault-tolerant quantum computation with continuous variables, the Gottesman-Kitaev-Preskill (GKP) qubit has been recognized as an important technological element. We have proposed a method to reduce the required squeezing level to realize large-scale quantum computation with the GKP qubit [Phys. Rev. X 8, 021054 (2018)], harnessing the virtue of analog information in the GKP qubits. In the present work, to reduce the number of qubits required for large-scale quantum computation, we propose tracking quantum error correction, where the logical-qubit-level quantum error correction is partially substituted by single-qubit-level quantum error correction. In the proposed method, the analog quantum error correction is utilized to make the performance of the single-qubit-level quantum error correction almost identical to that of the logical-qubit-level quantum error correction at a practical noise level. The numerical results show that the proposed tracking quantum error correction reduces the number of qubits during a quantum error correction process by the reduction rate $\left\{2(n-1)\times 4^{l-1}-n+1\right\}/(2n\times 4^{l-1})$ for $n$ cycles of the quantum error correction process using Knill's $C_{4}/C_{6}$ code with concatenation level $l$. Hence, the proposed tracking quantum error correction has a great advantage in reducing the required number of physical qubits, and will open a new way to bring out the advantage of the GKP qubits in practical quantum computation.
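
As a quick numeric check of the quoted reduction rate (the choice $n=2$, $l=2$ is only an example):

$\left\{2(n-1)\times 4^{l-1}-n+1\right\}/(2n\times 4^{l-1})\,\big|_{n=2,\,l=2} = (2\cdot 1\cdot 4 - 2 + 1)/(2\cdot 2\cdot 4) = 7/16 \approx 0.44$,

i.e., roughly 44% fewer qubits are needed during those two error-correction cycles at concatenation level 2.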
W. Dür, H. J. Briegel, 2007
We give a review on entanglement purification for bipartite and multipartite quantum states, with the main focus on theoretical work carried out by our group in the last couple of years. We discuss entanglement purification in the context of quantum communication, where we emphasize its close relation to quantum error correction. Various bipartite and multipartite entanglement purification protocols are discussed, and their performance under idealized and realistic conditions is studied. Several applications of entanglement purification in quantum communication and computation are presented, which highlights the fact that entanglement purification is a fundamental tool in quantum information processing.
