
Latency in local, two-dimensional, fault-tolerant quantum computing

Posted by Federico Spedalieri
Publication date: 2008
Research field: Physics
Paper language: English





We analyze the latency of fault-tolerant quantum computing based on the 9-qubit Bacon-Shor code using a local, two-dimensional architecture. We embed the data qubits in a 7 by 7 array of physical qubits, where the extra qubits are used for ancilla preparation and qubit transportation by means of a SWAP chain. The latency is reduced with respect to a similar implementation using Steane's 7-qubit code (K. M. Svore, D. P. DiVincenzo, and B. M. Terhal, Quantum Information & Computation \textbf{7}, 297 (2007)). Furthermore, the error threshold is also improved to $2.02 \times 10^{-5}$ when memory errors are taken to be one tenth of the gate error rates.
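As a rough illustration of the transportation cost that drives the latency, the sketch below counts the nearest-neighbour SWAP steps needed to move qubits between sites of a small square grid. It is a toy model under stated assumptions: the coordinates, the sequential scheduling, and the Manhattan routing are illustrative choices, not the interleaved data/ancilla layout or the parallel scheduling analysed in the paper.

```python
# Toy model: count nearest-neighbour SWAPs needed to move qubits
# across an N x N grid using Manhattan routing. Illustrative only;
# the paper's layout interleaves data and ancilla qubits and
# schedules many such moves in parallel.

def swap_chain_length(src, dst):
    """Number of SWAP gates to move a qubit from src to dst
    using only nearest-neighbour exchanges on a square grid."""
    (r0, c0), (r1, c1) = src, dst
    return abs(r0 - r1) + abs(c0 - c1)

def transport_latency(moves, swap_time=1.0):
    """Latency (in SWAP-gate times) if the listed moves are done
    one after another; parallel scheduling would reduce this."""
    return sum(swap_chain_length(s, d) for s, d in moves) * swap_time

if __name__ == "__main__":
    # Hypothetical example on a 7 by 7 array: bring two corner data
    # qubits next to an ancilla block near the centre.
    moves = [((0, 0), (3, 3)), ((6, 6), (3, 4))]
    print("total SWAP steps:", transport_latency(moves))  # -> 11.0
```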




Read also

Photonic quantum computing is one of the leading approaches to universal quantum computation. However, large-scale implementation of photonic quantum computing has been hindered by its intrinsic difficulties, such as probabilistic entangling gates for photonic qubits and the lack of scalable ways to build photonic circuits. Here we discuss how to overcome these limitations by taking advantage of two key ideas which have recently emerged. One is a hybrid qubit-continuous-variable approach for realizing a deterministic universal gate set for photonic qubits. The other is a time-domain multiplexing technique to perform arbitrarily large-scale quantum computing without changing the configuration of photonic circuits. Together, these ideas will enable scalable implementation of universal photonic quantum computers in which hardware-efficient error-correcting codes can be incorporated. Furthermore, all-optical implementation of such systems can in principle increase the operational bandwidth beyond THz, ultimately enabling large-scale fault-tolerant universal quantum computers with ultra-high operation frequency.
Quantum computation promises significant computational advantages over classical computation for some problems. However, quantum hardware suffers from much higher error rates than classical hardware. As a result, extensive quantum error correction is required to execute a useful quantum algorithm. The decoder is a key component of the error correction scheme: its role is to identify errors faster than they accumulate in the quantum computer, and it must be implemented with minimal hardware resources in order to scale to the regime of practical applications. In this work, we consider surface code error correction, the most popular family of error-correcting codes for quantum computing, and we design a decoder micro-architecture for the Union-Find decoding algorithm. We propose a three-stage, fully pipelined hardware implementation that significantly speeds up the decoder. We then optimize the amount of decoding hardware required to perform error correction simultaneously over all the logical qubits of the quantum computer. By sharing resources between logical qubits, we obtain a 67% reduction in the number of hardware units, and the memory capacity is reduced by 70%. Moreover, we reduce the bandwidth required for the decoding process by a factor of at least 30 using low-overhead compression algorithms. Finally, we provide numerical evidence that our optimized micro-architecture can be executed fast enough to correct errors in a quantum computer.
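For readers unfamiliar with the name, the classical primitive behind the Union-Find decoder is the disjoint-set (union-find) data structure, sketched below in Python with path compression and union by rank. This is only the cluster-merging primitive; the full decoder also grows clusters on the syndrome graph and extracts a correction, and the paper's contribution is a pipelined hardware micro-architecture for that process, not this software form.

```python
# Disjoint-set (union-find) with path compression and union by rank:
# the classical primitive the Union-Find decoder uses to merge
# clusters of syndrome defects. Software sketch only.

class DisjointSet:
    def __init__(self, n):
        self.parent = list(range(n))
        self.rank = [0] * n

    def find(self, x):
        # Path compression: point visited nodes closer to the root.
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return False          # already in the same cluster
        if self.rank[ra] < self.rank[rb]:
            ra, rb = rb, ra
        self.parent[rb] = ra      # union by rank
        if self.rank[ra] == self.rank[rb]:
            self.rank[ra] += 1
        return True

if __name__ == "__main__":
    # Hypothetical defects 0..5; merge neighbouring defects into clusters.
    ds = DisjointSet(6)
    ds.union(0, 1)
    ds.union(1, 2)
    print(ds.find(0) == ds.find(2))   # True: same cluster
    print(ds.find(0) == ds.find(5))   # False: separate clusters
```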
For a large-scale quantum computer, it is important to know precisely and quickly how much quantum computational resource is necessary. Unfortunately, the methods proposed so far cannot practically support large-scale quantum computing, and therefore its analysis, because they usually use non-structured code. To overcome this problem, we propose a fast mapping that uses hierarchical assembly code, which is much more compact than non-structured code. During the mapping process, the necessary modules and their interconnections can be dynamically mapped by using a communication bus, at the cost of additional qubits. In our study, the proposed method works very fast: for example, 1 hour rather than 1500 days for Shor's algorithm to factorize a 512-bit integer. Meanwhile, since hierarchical assembly code has a high degree of locality, it has shorter SWAP chains and hence does not increase the quantum computation time more than expected.
A long-standing open question about Gaussian continuous-variable cluster states is whether they enable fault-tolerant measurement-based quantum computation. The answer is yes. Initial squeezing in the cluster above a threshold value of 20.5 dB ensures that errors from finite squeezing acting on encoded qubits are below the fault-tolerance threshold of known qubit-based error-correcting codes. By concatenating with one of these codes and using ancilla-based error correction, fault-tolerant measurement-based quantum computation of theoretically indefinite length is possible with finitely squeezed cluster states.
Quantum error correction (QEC) is an essential step towards realising scalable quantum computers. Theoretically, it is possible to achieve arbitrarily long protection of quantum information from corruption due to decoherence or imperfect controls, so long as the error rate is below a threshold value. The two-dimensional surface code (SC) is a fault-tolerant error correction protocol that has garnered considerable attention for actual physical implementations, due to its relatively high error threshold of ~1% and its restriction to planar lattices with nearest-neighbour interactions. Here we show a necessary element for SC error correction: high-fidelity parity detection of two code qubits via measurement of a third syndrome qubit. The experiment is performed on a sub-section of the SC lattice with three superconducting transmon qubits, in which two independent outer code qubits are joined to a central syndrome qubit via two linking bus resonators. With all-microwave, high-fidelity, single- and two-qubit nearest-neighbour entangling gates, we demonstrate entanglement distributed across the entire sub-section by generating a three-qubit Greenberger-Horne-Zeilinger (GHZ) state with fidelity ~94%. Then, via high-fidelity measurement of the syndrome qubit, we deterministically entangle the otherwise uncoupled outer code qubits in either an even- or odd-parity Bell state, conditioned on the syndrome state. Finally, to fully characterize this parity readout, we develop a new measurement tomography protocol to obtain a fidelity metric (90% and 91%). Our results reveal a straightforward path for expanding superconducting circuits towards larger networks for the SC and eventually a primitive logical qubit implementation.
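The parity-detection step can be illustrated with a small state-vector calculation: prepare the two code qubits in |+⟩|+⟩, copy their Z-parity onto a syndrome qubit with two CNOTs, and measure the syndrome, which projects the code qubits into an even- or odd-parity Bell state. The NumPy sketch below is a textbook idealization of that projection, not the superconducting-circuit implementation or the measurement-tomography protocol reported in the paper.

```python
# Idealized ZZ-parity measurement on qubits (code0, code1, syndrome).
# Two CNOTs copy the code qubits' Z-parity onto the syndrome qubit;
# measuring the syndrome projects the code qubits into an even- or
# odd-parity Bell state. Toy state-vector model only.
import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]])
P0 = np.array([[1, 0], [0, 0]])   # |0><0|
P1 = np.array([[0, 0], [0, 1]])   # |1><1|

def kron(*ops):
    out = np.array([[1.0]])
    for op in ops:
        out = np.kron(out, op)
    return out

# CNOTs with a code qubit as control and the syndrome (qubit 2) as target.
cnot_0_2 = kron(P0, I2, I2) + kron(P1, I2, X)
cnot_1_2 = kron(I2, P0, I2) + kron(I2, P1, X)

plus = np.array([1, 1]) / np.sqrt(2)
zero = np.array([1, 0])
state = np.kron(np.kron(plus, plus), zero)   # |+>|+>|0>

state = cnot_1_2 @ cnot_0_2 @ state

# Project onto syndrome outcome 0 (even parity) and renormalize.
even = kron(I2, I2, P0) @ state
even /= np.linalg.norm(even)

# The code qubits are left in the Bell state (|00> + |11>)/sqrt(2):
# amplitude ~0.707 on |000> and |110> (syndrome still |0>).
print(np.round(even, 3))
```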