
State injection, lattice surgery and dense packing of the deformation-based surface code

Added by Shota Nagayama
Publication date: 2016
Fields: Physics
Language: English





The conventional surface code consumes substantial resources, in part because the defects that form a logical qubit must be kept far apart on the physical qubit lattice. We propose that instantiating the deformation-based surface code using superstabilizers makes it possible to detect short error chains connecting the superstabilizers, allowing logical qubits to be placed close together. Additionally, we demonstrate a conversion process from the defect-based surface code, which serves as arbitrary state injection, and a lattice-surgery-like CNOT gate implementation that requires fewer physical qubits than the braiding CNOT gate. Finally, we propose a placement design for the deformation-based surface code and analyze its resource consumption: large-scale quantum computation requires $\frac{25}{4}d^2 + 5d + 1$ physical qubits per logical qubit, where $d$ is the code distance, whereas the planar code requires $16d^2 - 16d + 4$ physical qubits per logical qubit, a reduction of about 55%.
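As a quick illustrative check (the distance below is chosen arbitrarily, not taken from the paper): at $d = 15$ the deformation-based design needs $\frac{25}{4}\cdot 15^2 + 5\cdot 15 + 1 = 1482.25$ physical qubits per logical qubit, while the planar code needs $16\cdot 15^2 - 16\cdot 15 + 4 = 3364$, a ratio of roughly $0.44$, i.e. a saving of about 56%; asymptotically the ratio approaches $\frac{25/4}{16} \approx 0.39$.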

Related research

In recent years, surface codes have become a leading method for quantum error correction in theoretical large-scale computational and communications architecture designs. Their comparatively high fault-tolerant thresholds and their natural 2-dimensional nearest-neighbour (2DNN) structure make them an obvious choice for large-scale designs in experimentally realistic systems. While fundamentally based on the toric code of Kitaev, there are many variants, two of which are the planar- and defect-based codes. Planar codes require fewer qubits to implement (for the same strength of error correction), but are restricted to encoding a single qubit of information. Interactions between encoded qubits are achieved via transversal operations, thus destroying the inherent 2DNN nature of the code. In this paper we introduce a new technique enabling the coupling of two planar codes without transversal operations, maintaining the 2DNN nature of the encoded computer. Our lattice surgery technique comprises splitting and merging planar code surfaces, and enables us to perform universal quantum computation (including magic state injection) while removing the need for braided logic in a strictly 2DNN design, hence reducing the overall qubit resources for logic operations. Those resources are further reduced by the use of a rotated lattice for the planar encoding. We show how lattice surgery allows us to distribute encoded GHZ states in a more direct (and overhead-friendly) manner, and how a demonstration of an encoded CNOT between two distance-3 logical states is possible with 53 physical qubits, half of that required in any other known construction in 2D.
State distillation is the process of taking a number of imperfect copies of a particular quantum state and producing fewer, better copies. Until recently, the lowest-overhead method of distilling states $|A\rangle = (|0\rangle + e^{i\pi/4}|1\rangle)/\sqrt{2}$ produced a single improved $|A\rangle$ state given 15 input copies. New block code state distillation methods can produce $k$ improved $|A\rangle$ states given $3k+8$ input copies, potentially significantly reducing the overhead associated with state distillation. We construct an explicit surface code implementation of block code state distillation and quantitatively compare the overhead of this approach to the old. We find that, using the best available techniques, for parameters of practical interest, block code state distillation does not always lead to lower overhead, and, when it does, the overhead reduction is typically less than a factor of three.
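For a rough sense of the counting (illustrative only; the block sizes here are arbitrary): the 15-to-1 protocol consumes $15$ input copies per improved $|A\rangle$ state, while the block code scheme consumes $(3k+8)/k = 3 + 8/k$ inputs per output, i.e. $11$ at $k = 1$, $4$ at $k = 8$, approaching $3$ for large $k$; the finding above is that this raw counting advantage does not always survive once the cost of the surface code implementation is included.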
The yield of physical qubits fabricated in the laboratory is much lower than that of classical transistors in production semiconductor fabrication. Actual implementations of quantum computers will be susceptible to loss in the form of physically faulty qubits. Though these physical faults necessarily degrade the computation, we can deal with them by adapting error correction schemes. In this paper we simulate statically placed single-fault lattices and lattices with randomly placed faults at functional qubit yields of 80%, 90% and 95%, showing the practical performance of a defective surface code by employing actual circuit constructions and realistic errors on every gate, including identity gates. We extend Stace et al.'s superplaquette solution against dynamic losses in the surface code to handle static losses such as physically faulty qubits. The single-fault analysis shows that a static loss at the periphery of the lattice has a less negative effect than a static loss at the center. The random-fault analysis shows that 95% yield is good enough to build a large-scale quantum computer. The local gate error rate threshold is $\sim 0.3\%$, and a code distance of seven suppresses the residual error rate below the original error rate at $p = 0.1\%$. 90% yield is also good enough when we discard badly fabricated quantum computation chips, while 80% yield does not show enough error suppression even when discarding 90% of the chips. We evaluated several metrics for predicting chip performance, and found that the average of the product of the number of data qubits and the cycle time of a stabilizer measurement gave the strongest correlation with post-correction residual error rates. Our analysis will help with selecting usable quantum computation chips from among the pool of all fabricated chips.
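The chip-selection metric reported above lends itself to a short sketch. The following Python snippet is a hypothetical illustration only: it assumes the metric is averaged over the stabilizer layouts induced by a chip's fault pattern, and the chip names, qubit counts and cycle times are invented for the example.

    # Hypothetical sketch of the chip-ranking metric described above:
    # average over a chip's stabilizer layouts of
    # (number of data qubits) * (stabilizer-measurement cycle time).
    # All chip data below is invented for illustration.
    from statistics import mean

    def chip_metric(layouts):
        """layouts: list of (n_data_qubits, cycle_time) pairs for the
        (super)stabilizers the chip's fault pattern allows."""
        return mean(n * t for n, t in layouts)

    chips = {
        "chip_A": [(49, 1.0), (49, 1.0)],  # no faults
        "chip_B": [(47, 1.2), (48, 1.1)],  # peripheral faults
        "chip_C": [(45, 1.8), (46, 1.6)],  # central faults
    }

    # Smaller metric -> expected lower post-correction residual error rate.
    for name in sorted(chips, key=lambda c: chip_metric(chips[c])):
        print(name, round(chip_metric(chips[name]), 2))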
The surface code is a prominent topological error-correcting code exhibiting high fault-tolerance accuracy thresholds. Conventional schemes for error correction with the surface code place qubits on a planar grid and assume native CNOT gates between the data qubits and nearest-neighbor ancilla qubits. Here, we present surface code error-correction schemes using $\textit{only}$ Pauli measurements on single qubits and on pairs of nearest-neighbor qubits. In particular, we provide several qubit layouts that offer favorable trade-offs between qubit overhead, circuit depth and connectivity degree. We also develop minimized measurement sequences for syndrome extraction, enabling reduced logical error rates and improved fault-tolerance thresholds. Our work applies to topologically protected qubits realized with Majorana zero modes and to similar systems in which multi-qubit Pauli measurements rather than CNOT gates are the native operations.
We consider a notion of relative homology (and cohomology) for surfaces with two types of boundaries. Using this tool, we study a generalization of Kitaev's code based on surfaces with mixed boundaries. This construction includes both Bravyi and Kitaev's and Freedman and Meyer's extensions of Kitaev's toric code. We argue that our generalization offers a denser storage of quantum information. In a planar architecture, we obtain a three-fold overhead reduction over the standard architecture consisting of a punctured square lattice.