
A Fault-Tolerant Honeycomb Memory

Published by: Craig Gidney
Publication date: 2021
Research field: Physics
Language: English





Recently, Hastings & Haah introduced a quantum memory defined on the honeycomb lattice. Remarkably, this honeycomb code assembles weight-six parity checks using only two-local measurements. The sparse connectivity and two-local measurements are desirable features for certain hardware, while the weight-six parity checks enable robust performance in the circuit model. In this work, we quantify the robustness of logical qubits preserved by the honeycomb code using a correlated minimum-weight perfect-matching decoder. Using Monte Carlo sampling, we estimate the honeycomb code's threshold in different error models, and project how efficiently it can reach the teraquop regime where trillions of quantum logical operations can be executed reliably. We perform the same estimates for the rotated surface code, and find a threshold of $0.2\%$-$0.3\%$ for the honeycomb code compared to a threshold of $0.5\%$-$0.7\%$ for the surface code in a controlled-not circuit model. In a circuit model with native two-body measurements, the honeycomb code achieves a threshold of $1.5\% < p < 2.0\%$, where $p$ is the collective error rate of the two-body measurement gate, including both measurement and correlated data depolarization error processes. With such gates at a physical error rate of $10^{-3}$, we project that the honeycomb code can reach the teraquop regime with only $600$ physical qubits.
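As a rough illustration of the Monte Carlo procedure described in the abstract, the sketch below estimates a memory-experiment logical error rate by sampling a noisy stabilizer circuit and decoding with (uncorrelated) minimum-weight perfect matching. It uses the rotated surface code circuits bundled with Stim and the PyMatching decoder as stand-ins; the honeycomb circuits and the correlated matcher used in the paper are not reproduced here, and the chosen parameters are purely illustrative.

```python
# Minimal sketch (not the paper's code): Monte Carlo estimate of a logical
# error rate for a rotated surface code memory experiment, using Stim to
# sample a noisy circuit and PyMatching for (uncorrelated) MWPM decoding.
import numpy as np
import stim
import pymatching


def logical_error_rate(distance: int, rounds: int, p: float, shots: int) -> float:
    # Build a standard rotated surface code memory circuit with circuit-level
    # depolarizing noise of strength p (a stand-in for the paper's error models).
    circuit = stim.Circuit.generated(
        "surface_code:rotated_memory_z",
        distance=distance,
        rounds=rounds,
        after_clifford_depolarization=p,
        before_round_data_depolarization=p,
        before_measure_flip_probability=p,
        after_reset_flip_probability=p,
    )

    # Sample detector outcomes together with the true logical observable flips.
    sampler = circuit.compile_detector_sampler()
    detections, observed = sampler.sample(shots, separate_observables=True)

    # Decode with minimum-weight perfect matching on the circuit's
    # detector error model (decomposed into graph-like errors).
    dem = circuit.detector_error_model(decompose_errors=True)
    matcher = pymatching.Matching.from_detector_error_model(dem)
    predicted = matcher.decode_batch(detections)

    # A shot counts as a logical error when the decoder's correction
    # disagrees with the actual observable flip.
    errors = np.any(predicted != observed, axis=1).sum()
    return errors / shots


if __name__ == "__main__":
    # Curves that drop with increasing distance indicate operation below
    # threshold; extrapolating the fitted slope down toward ~1e-12 failures
    # per operation is the kind of projection behind the teraquop estimates.
    for d in (3, 5, 7):
        rate = logical_error_rate(distance=d, rounds=d, p=1e-3, shots=10_000)
        print(f"d={d}: logical error rate ≈ {rate:.2e}")
```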


Read also

We explain how to combine holonomic quantum computation (HQC) with fault-tolerant quantum error correction. This establishes the scalability of HQC, putting it on equal footing with other models of computation, while retaining the inherent robustness the method derives from its geometric nature.
Designing encoding and decoding circuits to reliably send messages over many uses of a noisy channel is a central problem in communication theory. When studying the optimal transmission rates achievable with asymptotically vanishing error it is usually assumed that these circuits can be implemented using noise-free gates. While this assumption is satisfied for classical machines in many scenarios, it is not expected to be satisfied in the near-term future for quantum machines, where decoherence leads to faults in the quantum gates. As a result, fundamental questions regarding the practical relevance of quantum channel coding remain open. By combining techniques from fault-tolerant quantum computation with techniques from quantum communication, we initiate the study of these questions. We introduce fault-tolerant
John M. Martinis, 2015
Recent progress in quantum information has led to the start of several large national and industrial efforts to build a quantum computer. Researchers are now working to overcome many scientific and technological challenges. The program's biggest obstacle, a potential showstopper for the entire effort, is the need for high-fidelity qubit operations in a scalable architecture. This challenge arises from the fundamental fragility of quantum information, which can only be overcome with quantum error correction. In a fault-tolerant quantum computer the qubits and their logic interactions must have errors below a threshold: scaling up with more and more qubits then brings the net error probability down to the level of $\sim 10^{-18}$ needed for running complex algorithms. Reducing error requires solving problems in physics, control, materials and fabrication, which differ for every implementation. I explain here the common key driver for continued improvement: the metrology of qubit errors.
With gate error rates in multiple technologies now below the threshold required for fault-tolerant quantum computation, the major remaining obstacle to useful quantum computation is scaling, a challenge greatly amplified by the huge overhead imposed by quantum error correction itself. We propose a fault-tolerant quantum computing scheme that can nonetheless be assembled from a small number of experimental components, potentially dramatically reducing the engineering challenges associated with building a large-scale fault-tolerant quantum computer. Our scheme has a threshold of 0.39% for depolarising noise, assuming that memory errors are negligible. In the presence of memory errors, the logical error rate decays exponentially with $\sqrt{T/\tau}$, where $T$ is the memory coherence time and $\tau$ is the timescale for elementary gates. Our approach is based on a novel procedure for fault-tolerantly preparing three-dimensional cluster states using a single actively controlled qubit and a pair of delay lines. Although a circuit-level error may propagate to a high-weight error, the effect of this error on the prepared state is always equivalent to that of a constant-weight error. We describe how the requisite gates can be implemented using existing technologies in quantum photonic and phononic systems. With continued improvements in only a few components, we expect these systems to be promising candidates for demonstrating fault-tolerant quantum computation with a comparatively modest experimental effort.
Photonics is the platform of choice to build a modular, easy-to-network quantum computer operating at room temperature. However, no concrete architecture has been presented so far that exploits both the advantages of qubits encoded into states of light and the modern tools for their generation. Here we propose such a design for a scalable and fault-tolerant photonic quantum computer informed by the latest developments in theory and technology. Central to our architecture is the generation and manipulation of three-dimensional hybrid resource states comprising both bosonic qubits and squeezed vacuum states. The proposal enables exploiting state-of-the-art procedures for the non-deterministic generation of bosonic qubits combined with the strengths of continuous-variable quantum computation, namely the implementation of Clifford gates using easy-to-generate squeezed states. Moreover, the architecture is based on two-dimensional integrated photonic chips used to produce a qubit cluster state in one temporal and two spatial dimensions. By reducing the experimental challenges as compared to existing architectures and by enabling room-temperature quantum computation, our design opens the door to scalable fabrication and operation, which may allow photonics to leap-frog other platforms on the path to a quantum computer with millions of qubits.