
Quantum error correction in crossbar architectures

Added by: Jonas Helsen
Publication date: 2017
Field: Physics
Language: English





A central challenge for the scaling of quantum computing systems is the need to control all qubits in the system without a large overhead. A solution to this problem in classical computing comes in the form of so-called crossbar architectures. Recently we made a proposal for a large-scale quantum processor [Li et al., arXiv:1711.03807 (2017)] to be implemented in silicon quantum dots. This system features a crossbar control architecture which limits parallel single-qubit control, but allows the scheme to overcome the control scaling issues that form a major hurdle for large-scale quantum computing systems. In this work, we develop a language that makes it possible to easily map quantum circuits to crossbar systems, taking into account their architecture and control limitations. Using this language we show how to map well-known quantum error correction codes, such as the planar surface and color codes, in this limited control setting with only a small overhead in time. We analyze the logical error behavior of this surface code mapping for estimated experimental parameters of the crossbar system and conclude that logical error suppression to a level useful for real quantum computation is feasible.
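
As a rough illustration of the control constraint mentioned above (a deliberately simplified toy, not the mapping language or the scheme of Li et al.), the sketch below assumes a hypothetical crossbar grid in which a single-qubit operation on qubit (r, c) occupies row line r and column line c, and two operations can share a time step only if they share no control line. The scheduling rule and all names are illustrative assumptions.

```python
# Toy model of crossbar-style control (an illustrative assumption, not the scheme of
# Li et al.): a single-qubit operation on qubit (r, c) occupies row line r and column
# line c, and two operations may share a time step only if they share no control line.
from typing import List, Tuple

Op = Tuple[int, int]  # (row, column) of the target qubit

def schedule(ops: List[Op]) -> List[List[Op]]:
    """Greedily pack operations into time steps that respect the shared-line constraint."""
    steps: List[List[Op]] = []
    for r, c in ops:
        for step in steps:
            if all(r != r2 and c != c2 for r2, c2 in step):
                step.append((r, c))
                break
        else:
            steps.append([(r, c)])
    return steps

if __name__ == "__main__":
    # On a 3x3 grid: (0,0), (1,1) and (2,2) share no lines, but (0,1) conflicts with
    # both (0,0) and (1,1), so it is pushed into a second time step.
    print(schedule([(0, 0), (1, 1), (0, 1), (2, 2)]))
```

Running the example prints [[(0, 0), (1, 1), (2, 2)], [(0, 1)]]: three gates fit into one time step, while the conflicting gate is serialized into a second step. This kind of serialization is the time overhead that the mapping developed in the paper aims to keep small.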



Related research

218 - Kosuke Fukui, Akihisa Tomita, 2018
To implement fault-tolerant quantum computation with continuous variables, the Gottesman--Kitaev--Preskill (GKP) qubit has been recognized as an important technological element. We have proposed a method to reduce the required squeezing level to realize large-scale quantum computation with the GKP qubit [Phys. Rev. X 8, 021054 (2018)], harnessing the virtue of analog information in the GKP qubits. In the present work, to reduce the number of qubits required for large-scale quantum computation, we propose tracking quantum error correction, where the logical-qubit-level quantum error correction is partially substituted by single-qubit-level quantum error correction. In the proposed method, analog quantum error correction is utilized to make the performance of the single-qubit-level quantum error correction almost identical to that of the logical-qubit-level quantum error correction at a practical noise level. The numerical results show that the proposed tracking quantum error correction reduces the number of qubits during a quantum error correction process by the reduction rate $\left\{2(n-1)\times 4^{l-1}-n+1\right\}/(2n\times 4^{l-1})$ for $n$ cycles of the quantum error correction process using Knill's $C_{4}/C_{6}$ code with concatenation level $l$. Hence, the proposed tracking quantum error correction has a great advantage in reducing the required number of physical qubits, and will open a new way to exploit the advantage of GKP qubits in practical quantum computation.
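
To get a feel for the quoted reduction rate, the snippet below simply evaluates $\left\{2(n-1)\times 4^{l-1}-n+1\right\}/(2n\times 4^{l-1})$ for a few example values of the cycle count $n$ and concatenation level $l$; the chosen values are arbitrary and not taken from the paper.

```python
# Evaluate the qubit-reduction rate {2(n-1)*4^(l-1) - n + 1} / (2n*4^(l-1)) quoted in
# the abstract, for a few illustrative values of n (QEC cycles) and l (concatenation level).

def reduction_rate(n: int, l: int) -> float:
    return (2 * (n - 1) * 4 ** (l - 1) - n + 1) / (2 * n * 4 ** (l - 1))

if __name__ == "__main__":
    for n in (2, 5, 10):
        for l in (1, 2, 3):
            print(f"n={n:2d}, l={l}: {reduction_rate(n, l):.3f}")
```

Taking the formula at face value, the rate approaches $1-1/(2\times 4^{l-1})$ for large $n$, i.e. roughly half of the qubits are saved already at concatenation level $l=1$.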
225 - Andrew S. Fletcher, 2007
Quantum error correction (QEC) is an essential concept for any quantum information processing device. Typically, QEC is designed with minimal assumptions about the noise process; this generic assumption exacts a high cost in efficiency and performance. In physical systems, errors are not likely to be arbitrary; rather we will have reasonable models for the structure of quantum decoherence. We may choose quantum error correcting codes and recovery operations that specifically target the most likely errors. We present a convex optimization method to determine the optimal (in terms of average entanglement fidelity) recovery operation for a given channel, encoding, and information source. This is solvable via a semidefinite program (SDP). We present computational algorithms to generate near-optimal recovery operations structured to begin with a projective syndrome measurement. These structured operations are more computationally scalable than the SDP required for computing the optimal; we can thus numerically analyze longer codes. Using Lagrange duality, we bound the performance of the structured recovery operations and show that they are nearly optimal in many relevant cases. We present two classes of channel-adapted quantum error correcting codes specifically designed for the amplitude damping channel. These have significantly higher rates with shorter block lengths than corresponding generic quantum error correcting codes. Both classes are stabilizer codes, and have good fidelity performance with stabilizer recovery operations. The encoding, syndrome measurement, and syndrome recovery operations can all be implemented with Clifford group operations.
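
As a reminder of what such a semidefinite program typically looks like (a schematic, textbook-style form, not necessarily the exact formulation used in this work), one can parametrize the recovery map by its Choi matrix $X_{\mathcal{R}}$ and write

$$
\begin{aligned}
\underset{X_{\mathcal{R}}}{\text{maximize}} \quad & \operatorname{Tr}\!\left( X_{\mathcal{R}}\, C_{\mathcal{E},W} \right)\\
\text{subject to} \quad & X_{\mathcal{R}} \succeq 0, \qquad \operatorname{Tr}_{\text{out}}\!\left( X_{\mathcal{R}} \right) = I,
\end{aligned}
$$

where $C_{\mathcal{E},W}$ is a fixed matrix determined by the noise channel $\mathcal{E}$, the encoding isometry $W$, and the source ensemble, and the objective is proportional to the average entanglement fidelity. The linear objective together with the positive-semidefinite and linear constraints is what makes the problem solvable as an SDP.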
Based on the group structure of a unitary Lie algebra, a scheme is provided to systematically and exhaustively generate quantum error correction codes, including additive and nonadditive codes. The syndromes in the error-correction process are distinguished by different orthogonal vector subspaces, the coset subspaces. Moreover, the generated codes can be classified into four types with respect to the spinors in the unitary Lie algebra and a chosen initial quantum state.
278 - Ognyan Oreshkov, 2013
Continuous-time quantum error correction (CTQEC) is an approach to protecting quantum information from noise in which both the noise and the error correcting operations are treated as processes that are continuous in time. This chapter investigates CTQEC based on continuous weak measurements and feedback from the point of view of the subsystem principle, which states that protected quantum information is contained in a subsystem of the Hilbert space. We study how to approach the problem of constructing CTQEC protocols by looking at the evolution of the state of the system in an encoded basis in which the subsystem containing the protected information is explicit. This point of view allows us to reduce the problem to that of protecting a known state, and to design CTQEC procedures from protocols for the protection of a single qubit. We show how previously studied CTQEC schemes with both direct and indirect feedback can be obtained from strategies for the protection of a single qubit via weak measurements and weak unitary operations. We also review results on the performance of CTQEC with direct feedback in cases of Markovian and non-Markovian decoherence, where we have shown that due to the existence of a Zeno regime in non-Markovian dynamics, the performance of CTQEC can exhibit a quadratic improvement if the time resolution of the weak error-correcting operations is high enough to reveal the non-Markovian character of the noise process.
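
The single-qubit picture described in this abstract can be caricatured with a toy discrete-time simulation: a qubit prepared in the known state |0> suffers bit-flip noise, is repeatedly probed by weak Z measurements, and is nudged back toward |0> by a weak corrective rotation chosen from the observer's state estimate. This is only a sketch of estimate-based (indirect) feedback under assumed noise, measurement-strength, and feedback parameters; it is not one of the protocols analyzed in the chapter.

```python
# Toy discrete-time caricature of weak-measurement feedback protecting the known state
# |0> of a single qubit against bit flips (illustrative assumptions throughout).
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)

def weak_z_kraus(eps):
    # Weak Z measurement: M_pm = sqrt((I +/- eps*Z)/2), so that M_+^2 + M_-^2 = I.
    return (np.diag(np.sqrt([(1 + eps) / 2, (1 - eps) / 2])).astype(complex),
            np.diag(np.sqrt([(1 - eps) / 2, (1 + eps) / 2])).astype(complex))

def run(p_flip=0.01, eps=0.4, max_kick=0.4, steps=200, feedback=True):
    Mp, Mm = weak_z_kraus(eps)
    rho = np.array([[1, 0], [0, 0]], dtype=complex)            # start in |0><0|
    for _ in range(steps):
        rho = (1 - p_flip) * rho + p_flip * X @ rho @ X        # bit-flip noise
        p_plus = np.real(np.trace(Mp @ rho @ Mp))
        M = Mp if rng.random() < p_plus else Mm                # sample a weak outcome
        rho = M @ rho @ M
        rho /= np.real(np.trace(rho))
        if feedback:
            # Weak corrective rotation about Y that re-aligns the Bloch-vector estimate
            # with +z, but never by more than max_kick per step.
            x, z = 2 * rho[0, 1].real, (rho[0, 0] - rho[1, 1]).real
            phi = -np.clip(np.arctan2(x, z), -max_kick, max_kick)
            U = np.cos(phi / 2) * np.eye(2) - 1j * np.sin(phi / 2) * Y
            rho = U @ rho @ U.conj().T
    return np.real(rho[0, 0])                                  # fidelity with |0>

if __name__ == "__main__":
    for fb in (False, True):
        avg = np.mean([run(feedback=fb) for _ in range(1000)])
        print(f"feedback={fb}: average fidelity with |0> = {avg:.3f}")
```

In this toy, the corrective kick only activates for trajectories whose estimate has drifted away from |0>, while trajectories that remain close to |0> are left untouched, which is the qualitative behavior a weak-feedback protection scheme should show.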
We consider error correction in quantum key distribution. To avoid Alice and Bob unwittingly ending up with different keys, precautions must be taken. Before running the error correction protocol, Bob and Alice normally sacrifice some bits to estimate the error rate. We show that to reduce the probability that they end up with different keys to an acceptable level, a large number of bits must be sacrificed. Instead, if Alice and Bob can make a good guess about the error rate before the error correction, they can verify that their keys are identical after the error correction protocol. This verification can be done by utilizing properties of the Low Density Parity Check codes used in the error correction. We compare the methods and show that with verification it is often possible to sacrifice fewer bits without compromising security. The improvement is heavily dependent on the error rate and the block length, but for a key produced by the IdQuantique system Clavis^2, the increase in the key rate is approximately 5 percent. We also show that for systems with large fluctuations in the error rate, a combination of the two methods is optimal.
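
As a schematic complement to the verification idea (not the protocol or parameters of the paper), the toy sketch below checks whether Alice's and Bob's keys have the same syndrome under the parity-check matrix already used for error correction: a syndrome mismatch proves the keys differ, while a match leaves only a small, code-dependent probability of an undetected difference. The matrix, block length, and residual-error model are invented for illustration.

```python
# Toy sketch of syndrome-based verification after error correction in QKD.
# The parity-check matrix H, block length and residual error are invented for
# illustration; a real system would use the LDPC matrix from the reconciliation step.
import numpy as np

rng = np.random.default_rng(1)

def syndrome(H, key):
    return (H @ key) % 2

n, m = 64, 32                                    # toy block length and number of checks
H = (rng.random((m, n)) < 0.1).astype(int)       # sparse, LDPC-like parity checks
alice_key = rng.integers(0, 2, n)

# Bob's key after error correction, with one residual error on a bit that is
# covered by at least one parity check.
bob_key = alice_key.copy()
flip = int(np.flatnonzero(H.sum(axis=0) > 0)[0])
bob_key[flip] ^= 1

match = np.array_equal(syndrome(H, alice_key), syndrome(H, bob_key))
print("syndromes match:", match)                 # False here: the residual error is caught
```

In syndrome-based LDPC reconciliation Alice typically discloses the syndrome of her key anyway, so Bob can reuse it for this check at essentially no extra cost in key material; this is one way the properties of LDPC codes mentioned above could be exploited for verification.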