
The cost of universality: A comparative study of the overhead of state distillation and code switching with color codes

Added by Michael Beverland
Publication date: 2021
Field: Physics
Language: English





Estimating and reducing the overhead of fault tolerance (FT) schemes is a crucial step toward realizing scalable quantum computers. Of particular interest are schemes based on two-dimensional (2D) topological codes such as the surface and color codes, which have high thresholds but lack a natural implementation of a non-Clifford gate. In this work, we directly compare two leading FT implementations of the T gate in 2D color codes under circuit noise across a wide range of parameters in regimes of practical interest. We report that implementing the T gate via code switching to a 3D color code does not offer substantial savings over state distillation in terms of either space or space-time overhead. We find a circuit noise threshold of 0.07(1)% for the T gate via code switching, almost an order of magnitude below that achievable by state distillation in the same setting. To arrive at these results, we provide and simulate an optimized code switching procedure, and bound the effect of various conceivable improvements. Many intermediate results in our analysis may be of independent interest. For example, we optimize the 2D color code for circuit noise, yielding its largest threshold to date, 0.37(1)%, and we adapt and optimize the restriction decoder, finding a threshold of 0.80(5)% for the 3D color code with perfect measurements under Z noise. Our work provides a much-needed direct comparison of the overhead of state distillation and code switching, and sheds light on the choice of future FT schemes and hardware designs.
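
As a rough illustration of why the threshold gap matters, the snippet below compares logical error suppression under the generic heuristic $p_L \sim (p/p_{th})^{\lceil d/2 \rceil}$ using the two thresholds quoted above. The physical error rate, the unit prefactor, and the exponent are illustrative assumptions, not the paper's overhead model.

```python
# Back-of-the-envelope comparison of logical error suppression for the two
# quoted circuit-noise thresholds (T gate via code switching vs. 2D color code).
# Uses the generic heuristic p_L ~ (p / p_th)^ceil(d/2); the physical error
# rate and the unit prefactor are illustrative assumptions only.
import math

P_TH_CODE_SWITCHING = 0.0007   # 0.07(1)% threshold quoted for T via code switching
P_TH_2D_COLOR_CODE  = 0.0037   # 0.37(1)% threshold quoted for the 2D color code
p_phys = 1e-4                  # assumed physical error rate (illustrative)

for d in (9, 15, 21):
    exponent = math.ceil(d / 2)
    p_switch = (p_phys / P_TH_CODE_SWITCHING) ** exponent
    p_color = (p_phys / P_TH_2D_COLOR_CODE) ** exponent
    print(f"d={d:2d}  code switching ~{p_switch:.1e}   2D color code ~{p_color:.1e}")
```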



Related research

State distillation is the process of taking a number of imperfect copies of a particular quantum state and producing fewer, better copies. Until recently, the lowest overhead method of distilling states $\lvert A\rangle = (\lvert 0\rangle + e^{i\pi/4}\lvert 1\rangle)/\sqrt{2}$ produced a single improved $\lvert A\rangle$ state given 15 input copies. New block code state distillation methods can produce $k$ improved $\lvert A\rangle$ states given $3k+8$ input copies, potentially significantly reducing the overhead associated with state distillation. We construct an explicit surface code implementation of block code state distillation and quantitatively compare the overhead of this approach to the old. We find that, using the best available techniques, for parameters of practical interest, block code state distillation does not always lead to lower overhead, and, when it does, the overhead reduction is typically less than a factor of three.
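
For a concrete sense of the input savings being compared, the short sketch below tabulates inputs consumed per output state for the 15-to-1 protocol versus the $3k+8$-to-$k$ block codes. It ignores failure probabilities and differences in error suppression, which the full overhead comparison above accounts for.

```python
# Input magic states consumed per improved |A> state:
# 15-to-1 distillation vs. (3k+8)-to-k block code distillation.
for k in (1, 2, 4, 8, 16, 64):
    block_ratio = (3 * k + 8) / k   # inputs per output for the block code
    print(f"k={k:3d}: block code {block_ratio:5.2f} inputs/output vs. 15 for 15-to-1")
```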
We present an infinite family of protocols to distill magic states for $T$-gates that has a low space overhead and uses an asymptotic number of input magic states to achieve a given target error that is conjectured to be optimal. The space overhead, defined as the ratio of the number of physical qubits to the number of output magic states, is asymptotically constant, while both the number of input magic states used per output state and the $T$-gate depth of the circuit scale linearly in the logarithm of the target error $\delta$ (up to $\log\log 1/\delta$). Unlike other distillation protocols, this protocol achieves this performance without concatenation, and the input magic states are injected at various steps in the circuit rather than all at the start of the circuit. The protocol can be modified to distill magic states for other gates at the third level of the Clifford hierarchy, with the same asymptotic performance. The protocol relies on the construction of weakly self-dual CSS codes with many logical qubits and large distance, allowing us to implement controlled-SWAPs on multiple qubits. We call this code the inner code. The controlled-SWAPs are then used to measure properties of the magic state and detect errors, using another code that we call the outer code. Alternatively, we use weakly self-dual CSS codes which implement controlled Hadamards for the inner code, reducing circuit depth. We present several specific small examples of this protocol.
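
The controlled-SWAP measurements mentioned above can be illustrated, in unencoded form, by the standard swap test: the control qubit is found in $\lvert 0\rangle$ with probability $(1+|\langle\psi|\phi\rangle|^2)/2$. The numpy sketch below shows only this bare primitive on single qubits, not the paper's fault-tolerant inner/outer-code construction.

```python
# Minimal unencoded controlled-SWAP ("swap test") sketch: the probability of
# measuring the control in |0> is (1 + |<psi|phi>|^2) / 2, which is how a
# controlled-SWAP can probe properties of an input state. This is only the
# bare primitive, not the paper's encoded inner/outer-code version.
import numpy as np

def swap_test_p0(psi, phi):
    """Return P(control = 0) for a swap test between single-qubit states psi, phi."""
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    cswap = np.eye(8)
    cswap[[5, 6]] = cswap[[6, 5]]                      # swap |101> <-> |110> (control = 1)
    state = np.kron(np.array([1, 0]), np.kron(psi, phi))   # |0>|psi>|phi>
    state = np.kron(H, np.eye(4)) @ state                  # Hadamard on control
    state = cswap @ state                                   # controlled-SWAP
    state = np.kron(H, np.eye(4)) @ state                  # Hadamard on control
    return float(np.sum(np.abs(state[:4]) ** 2))            # weight on control = |0>

A = np.array([1, np.exp(1j * np.pi / 4)]) / np.sqrt(2)      # the |A> magic state
print(swap_test_p0(A, A))                  # ~1.0 for identical pure states
print(swap_test_p0(A, np.array([1, 0])))   # (1 + |<0|A>|^2)/2 = 0.75
```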
The Gottesman-Kitaev-Preskill (GKP) quantum error-correcting code has emerged as a key technique in achieving fault-tolerant quantum computation using photonic systems. Whereas [Baragiola et al., Phys. Rev. Lett. 123, 200502 (2019)] showed that experimentally tractable Gaussian operations combined with preparing a GKP codeword $\lvert 0\rangle$ suffice to implement universal quantum computation, this implementation scheme involves a distillation of a logical magic state $\lvert H\rangle$ of the GKP code, which inevitably imposes a trade-off between implementation cost and fidelity. In contrast, we propose a scheme of preparing $\lvert H\rangle$ directly and combining Gaussian operations only with $\lvert H\rangle$ to achieve universality without this magic state distillation. In addition, we develop an analytical method to obtain bounds on the fundamental limit of transformation between $\lvert H\rangle$ and $\lvert 0\rangle$, finding an application of quantum resource theories to cost analysis of quantum computation with the GKP code. Our results lead to an essential reduction of the required non-Gaussian resources for photonic fault-tolerant quantum computation compared to the previous scheme.
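
For reference, the magic state $\lvert H\rangle$ discussed here is conventionally the $+1$ eigenstate of the Hadamard gate, realized at the logical level of the GKP code (conventions differ by Pauli and phase factors):
$$\lvert H\rangle = \cos\tfrac{\pi}{8}\,\lvert 0\rangle + \sin\tfrac{\pi}{8}\,\lvert 1\rangle.$$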
Fault-tolerant quantum error correction is essential for implementing quantum algorithms of significant practical importance. In this work, we propose a highly effective use of the surface-GKP code, i.e., the surface code consisting of bosonic GKP qubits instead of bare two-dimensional qubits. In our proposal, we use error-corrected two-qubit gates between GKP qubits and introduce a maximum likelihood decoding strategy for correcting shift errors in the two-GKP-qubit gates. Our proposed decoding reduces the total CNOT failure rate of the GKP qubits, e.g., from $0.87\%$ to $0.36\%$ at a GKP squeezing of $12$ dB, compared to the case where the simple closest-integer decoding is used. Then, by concatenating the GKP code with the surface code, we find that the threshold GKP squeezing is given by $9.9$ dB under the assumption that finite squeezing of the GKP states is the dominant noise source. More importantly, we show that a low logical failure rate $p_{L} < 10^{-7}$ can be achieved with moderate hardware requirements, e.g., $291$ modes and $97$ qubits at a GKP squeezing of $12$ dB, as opposed to $1457$ bare qubits for the standard rotated surface code at an equivalent noise level (i.e., $p=0.36\%$). Such a low failure rate of our surface-GKP code is possible through the use of space-time correlated edges in the matching graphs of the surface code decoder. Further, all edge weights in the matching graphs are computed dynamically based on analog information from the GKP error correction, using the full history of all syndrome measurement rounds. We also show that a highly squeezed GKP state with GKP squeezing $\gtrsim 12$ dB can be experimentally realized by using a dissipative stabilization method, namely the Big-small-Big method, with fairly conservative experimental parameters. Lastly, we introduce a three-level ancilla scheme to mitigate ancilla decay errors during GKP state preparation.
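
The "simple closest-integer decoding" contrasted above snaps each measured quadrature shift to the nearest multiple of $\sqrt{\pi}$; shifts of magnitude below $\sqrt{\pi}/2$ are corrected exactly, while larger shifts get snapped to the wrong lattice point and appear as logical errors. The sketch below shows only this baseline decoder, not the proposed maximum-likelihood strategy.

```python
# Baseline "closest-integer" decoding of a GKP quadrature shift: snap the
# measured shift u to the nearest multiple of sqrt(pi). Shifts below
# sqrt(pi)/2 in magnitude are corrected exactly; larger ones are snapped to
# the wrong lattice point, showing up as logical errors. This is the simple
# decoder the abstract contrasts with its maximum-likelihood strategy.
import math

SPACING = math.sqrt(math.pi)

def closest_integer_estimate(u: float) -> float:
    """Multiple of sqrt(pi) closest to the measured shift u."""
    return SPACING * round(u / SPACING)

for shift in (0.3, 0.45 * SPACING, 0.6 * SPACING):
    residual = shift - closest_integer_estimate(shift)
    print(f"shift {shift:+.3f} -> residual {residual:+.3f}")
```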
Quantum computers capable of solving classically intractable problems are under construction, and intermediate-scale devices are approaching completion. Current efforts to design large-scale devices require allocating immense resources to error correction, with the majority dedicated to the production of high-fidelity ancillary states known as magic states. Leading techniques focus on dedicating a large, contiguous region of the processor as a single magic-state distillation factory responsible for meeting the magic-state demands of applications. In this work, we design and analyze a set of optimized factory architectural layouts that divide a single factory into spatially distributed factories located throughout the processor. We find that distributed factory architectures minimize the space-time volume overhead imposed by distillation. Additionally, we find that the number of distributed components in each optimal configuration is sensitive to application characteristics and underlying physical device error rates. More specifically, we find that the rate at which T-gates are demanded by an application has a significant impact on the optimal distillation architecture. We develop an optimization procedure that discovers the optimal number of factory distillation rounds and the number of output magic states per factory, as well as an overall system architecture that interacts with the factories. This yields between a 10x and 20x resource reduction compared to commonly accepted single-factory designs. Performance is analyzed across representative application classes such as quantum simulation and quantum chemistry.
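
To illustrate the shape of such an architecture search, the toy grid search below picks a factory count against a made-up space-time cost model (factory footprint plus a stall penalty when T-gate demand outstrips supply). The cost terms and constants are assumptions for illustration only and are not the paper's resource model.

```python
# Toy grid search over the number of distributed factories. The cost model
# (footprint + stall penalty when T-gate demand exceeds magic-state supply)
# is an illustrative assumption, not the paper's optimization procedure.
def toy_spacetime_cost(n_factories, t_rate, factory_area=100.0,
                       routing_area=5.0, stall_cost=200.0):
    """Smaller is better: factory footprint plus stalls when demand outstrips supply."""
    supply = n_factories                               # magic states per time step (toy units)
    stall = max(0.0, t_rate - supply) * stall_cost     # penalty for unmet T-gate demand
    footprint = n_factories * (factory_area + routing_area)
    return footprint + stall

for t_rate in (1, 4, 16):   # T-gates demanded per time step (toy units)
    best = min(range(1, 33), key=lambda n: toy_spacetime_cost(n, t_rate))
    print(f"T-demand {t_rate:2d}: best factory count in toy model = {best}")
```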