
Quantum Error Correction Alleviates Detection-Induced Coherent Errors

Added by Dong Liu
Publication date: 2021
Fields: Physics
Language: English





We study the performance of quantum error correction codes (QECCs) under detection-induced coherent errors, which arise from imperfect practical implementations of stabilizer measurements after running a quantum circuit. Considering the most promising surface code, we find that detection-induced coherent errors produce undetected error terms, which accumulate and evolve into logical errors. However, we show that this kind of error can be alleviated by increasing the code size, just as for the other types of errors discussed previously. We also find that, with detection-induced coherent errors, the exact surface code becomes an approximate QECC.
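As a toy illustration (our own construction, not the paper's surface-code model), the accumulation of an undetected coherent error can be sketched by applying a small residual Z rotation after each of many stabilizer-measurement rounds: the rotation angles add coherently, so the failure probability grows quadratically with the number of rounds, far faster than a stochastic model with the same per-round error rate would suggest. The angle `theta`, the single-qubit setting, and all variable names here are illustrative assumptions.

```python
import numpy as np

def z_rotation(theta):
    """Single-qubit Z rotation exp(-i * theta/2 * Z)."""
    return np.diag([np.exp(-1j * theta / 2), np.exp(1j * theta / 2)])

# Small residual rotation left behind by one imperfect stabilizer readout
theta = 0.01
rounds = 100

# |+> is maximally sensitive to Z rotations
plus = np.array([1, 1]) / np.sqrt(2)
minus = np.array([1, -1]) / np.sqrt(2)

state = plus.copy()
for _ in range(rounds):
    state = z_rotation(theta) @ state

# Probability that the accumulated phase has flipped |+> to |->:
# amplitudes add coherently, giving sin^2(rounds * theta / 2)
p_coherent = abs(minus.conj() @ state) ** 2

# A stochastic model with the same per-round error rate adds probabilities
p_single = np.sin(theta / 2) ** 2
p_stochastic_bound = rounds * p_single

print(f"coherent after {rounds} rounds: {p_coherent:.4e}")
print(f"stochastic bound (n * p):      {p_stochastic_bound:.4e}")
```

With these numbers the coherent failure probability is roughly two orders of magnitude above the stochastic estimate, which is the qualitative point: a phase that the syndrome extraction never sees keeps adding up.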




Read More

162 - M. McEwen, D. Kafri, Z. Chen 2021
Quantum computing can become scalable through error correction, but logical error rates only decrease with system size when physical errors are sufficiently uncorrelated. During computation, unused high energy levels of the qubits can become excited, creating leakage states that are long-lived and mobile. Particularly for superconducting transmon qubits, this leakage opens a path to errors that are correlated in space and time. Here, we report a reset protocol that returns a qubit to the ground state from all relevant higher level states. We test its performance with the bit-flip stabilizer code, a simplified version of the surface code for quantum error correction. We investigate the accumulation and dynamics of leakage during error correction. Using this protocol, we find lower rates of logical errors and an improved scaling and stability of error suppression with increasing qubit number. This demonstration provides a key step on the path towards scalable quantum computing.
100 - Ognyan Oreshkov 2008
In the theory of operator quantum error correction (OQEC), the notion of correctability is defined under the assumption that states are perfectly initialized inside a particular subspace, a factor of which (a subsystem) contains the protected information. If the initial state of the system does not belong entirely to the subspace in question, the restriction of the state to the otherwise correctable subsystem may not remain invariant after the application of noise and error correction. It is known that in the case of decoherence-free subspaces and subsystems (DFSs) the condition for perfect unitary evolution inside the code imposes more restrictive conditions on the noise process if one allows imperfect initialization. It was believed that these conditions are necessary if DFSs are to protect imperfectly encoded states from subsequent errors. By a similar argument, general OQEC codes would also require more restrictive error-correction conditions in the case of imperfect initialization. In this study, we examine this requirement by looking at the errors on the encoded state. In order to quantitatively analyze the errors in an OQEC code, we introduce a measure of the fidelity between the encoded information in two states for the case of subsystem encoding. A major part of the paper concerns the definition of the measure and the derivation of its properties. In contrast to what was previously believed, we find that more restrictive conditions are necessary neither for DFSs nor for general OQEC codes. This is because the effective noise that can arise inside the code as a result of imperfect initialization is such that it can only increase the fidelity of an imperfectly encoded state with a perfectly encoded one.
We compare failure distributions of quantum error correction circuits for stochastic errors and coherent errors. We utilize a fully coherent simulation of a fault tolerant quantum error correcting circuit for a $d=3$ Steane and surface code. We find that the output distributions are markedly different for the two error models, showing that no simple mapping between the two error models exists. Coherent errors create very broad and heavy-tailed failure distributions. This suggests that they are susceptible to outlier events and that mean statistics, such as pseudo-threshold estimates, may not provide the key figure of merit. This provides further statistical insight into why coherent errors can be so harmful for quantum error correction. These output probability distributions may also provide a useful metric that can be utilized when optimizing quantum error correcting codes and decoding procedures for purely coherent errors.
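The broad, heavy-tailed behaviour described above can be illustrated with a minimal Monte-Carlo sketch (our own construction, not the paper's fault-tolerant $d=3$ circuit simulation): over random draws of small per-location over-rotation angles, coherently composed errors give a much broader distribution of failure probabilities than an incoherent (stochastic) model with the same per-location error rates, even though the two means are comparable. All parameters and names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

n_locations = 100    # faulty circuit locations per run
sigma = 0.01         # spread of per-location over-rotation angles (radians)
n_samples = 20000    # random circuits drawn

# A random Z over-rotation angle for every location of every circuit.
thetas = rng.normal(0.0, sigma, size=(n_samples, n_locations))

# Coherent model: Z rotations compose by adding angles, so the failure
# probability of a |+> state is sin^2(total_angle / 2).
p_coherent = np.sin(thetas.sum(axis=1) / 2) ** 2

# Stochastic model: each location independently flips the phase with
# probability sin^2(theta/2); failure means an odd number of flips.
p_flip = np.sin(thetas / 2) ** 2
p_stochastic = (1 - np.prod(1 - 2 * p_flip, axis=1)) / 2

for name, p in [("coherent", p_coherent), ("stochastic", p_stochastic)]:
    print(f"{name:10s} mean={p.mean():.2e}  99th pct={np.percentile(p, 99):.2e}")
```

The two models have nearly identical mean failure probabilities, but the coherent distribution's 99th percentile is several times larger, echoing the abstract's point that mean statistics alone can understate the harm done by coherent errors.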
218 - Kosuke Fukui, Akihisa Tomita 2018
To implement fault-tolerant quantum computation with continuous variables, the Gottesman--Kitaev--Preskill (GKP) qubit has been recognized as an important technological element. We have proposed a method to reduce the required squeezing level to realize large-scale quantum computation with the GKP qubit [Phys. Rev. X 8, 021054 (2018)], harnessing the virtue of analog information in the GKP qubits. In the present work, to reduce the number of qubits required for large-scale quantum computation, we propose tracking quantum error correction, where the logical-qubit-level quantum error correction is partially substituted by single-qubit-level quantum error correction. In the proposed method, analog quantum error correction is utilized to make the performance of the single-qubit-level quantum error correction almost identical to that of the logical-qubit-level quantum error correction at a practical noise level. The numerical results show that the proposed tracking quantum error correction reduces the number of qubits during a quantum error correction process by the reduction rate $\left\{2(n-1)\times 4^{l-1}-n+1\right\}/(2n\times 4^{l-1})$ for $n$ cycles of the quantum error correction process using Knill's $C_{4}/C_{6}$ code with concatenation level $l$. Hence, the proposed tracking quantum error correction has a great advantage in reducing the required number of physical qubits, and will open a new way to bring out the advantage of GKP qubits in practical quantum computation.
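The quoted reduction rate is straightforward to evaluate for concrete cycle counts $n$ and concatenation levels $l$; a small sketch (function and variable names are ours, not the paper's):

```python
def reduction_rate(n, l):
    """Qubit-reduction rate {2(n-1)*4^(l-1) - n + 1} / (2n * 4^(l-1))
    for n QEC cycles at concatenation level l, as quoted in the abstract."""
    block = 4 ** (l - 1)
    return (2 * (n - 1) * block - n + 1) / (2 * n * block)

for n in (2, 5, 10):
    for l in (1, 2, 3):
        print(f"n={n:2d} l={l}: {reduction_rate(n, l):.3f}")
```

For example, two cycles at level 1 give a rate of 0.25, and the rate grows toward 1 as both $n$ and $l$ increase, consistent with the claimed saving in physical qubits.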
217 - Andrew S. Fletcher 2007
Quantum error correction (QEC) is an essential concept for any quantum information processing device. Typically, QEC is designed with minimal assumptions about the noise process; this generic assumption exacts a high cost in efficiency and performance. In physical systems, errors are not likely to be arbitrary; rather, we will have reasonable models for the structure of quantum decoherence. We may choose quantum error correcting codes and recovery operations that specifically target the most likely errors. We present a convex optimization method to determine the optimal (in terms of average entanglement fidelity) recovery operation for a given channel, encoding, and information source. This is solvable via a semidefinite program (SDP). We present computational algorithms to generate near-optimal recovery operations structured to begin with a projective syndrome measurement. These structured operations are more computationally scalable than the SDP required to compute the optimal recovery, so we can numerically analyze longer codes. Using Lagrange duality, we bound the performance of the structured recovery operations and show that they are nearly optimal in many relevant cases. We present two classes of channel-adapted quantum error correcting codes specifically designed for the amplitude damping channel. These have significantly higher rates with shorter block lengths than corresponding generic quantum error correcting codes. Both classes are stabilizer codes, and have good fidelity performance with stabilizer recovery operations. The encoding, syndrome measurement, and syndrome recovery operations can all be implemented with Clifford group operations.