
Improved HDRG decoders for qudit and non-Abelian quantum error correction

Added by Adrian Hutter
Publication date: 2014
Field: Physics
Language: English





Hard-decision renormalization group (HDRG) decoders are an important class of decoding algorithms for topological quantum error correction. Due to their versatility, they have been used to decode systems with fractal logical operators, color codes, qudit topological codes, and non-Abelian systems. In this work, we develop a method of performing HDRG decoding which combines the strengths of existing decoders and further improves upon them. In particular, we increase the minimal number of errors necessary for a logical error in a system of linear size $L$ from $\Theta(L^{2/3})$ to $\Omega(L^{1-\epsilon})$ for any $\epsilon>0$. We apply our algorithm to decoding $D(\mathbb{Z}_d)$ quantum double models and a non-Abelian anyon model with Fibonacci-like fusion rules, and show that it indeed significantly outperforms previous HDRG decoders. Furthermore, we provide the first study of continuous error correction with imperfect syndrome measurements for the $D(\mathbb{Z}_d)$ quantum double models. The parallelized runtime of our algorithm is $\text{poly}(\log L)$ for the perfect-measurement case. In the continuous case with imperfect syndrome measurements, the averaged runtime is $O(1)$ for Abelian systems, while continuous error correction for non-Abelian anyons remains an open problem.
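The clustering procedure at the heart of HDRG decoding can be summarized in a few lines. Below is a minimal Python sketch, assuming $\mathbb{Z}_d$-valued point defects on an $L \times L$ torus and a simple scale-doubling schedule; the function names and the schedule are illustrative only and do not reproduce the refined expansion schedule that yields the improved $\Omega(L^{1-\epsilon})$ bound.

```python
from itertools import combinations

def hdrg_decode(defects, d, L):
    """Generic HDRG clustering loop (illustrative, not the paper's code).

    `defects` is a list of ((x, y), charge) pairs with charges in Z_d.
    At each level, defects within the current distance scale are merged
    into clusters, and clusters whose total charge fuses to the vacuum
    (0 mod d) are annihilated. The scale then doubles.
    """
    def torus_dist(a, b):
        dx = min(abs(a[0] - b[0]), L - abs(a[0] - b[0]))
        dy = min(abs(a[1] - b[1]), L - abs(a[1] - b[1]))
        return dx + dy

    scale = 1
    while defects and scale <= L:
        # Union-find clustering: defects within `scale` join a cluster.
        parent = list(range(len(defects)))

        def find(i):
            while parent[i] != i:
                parent[i] = parent[parent[i]]  # path halving
                i = parent[i]
            return i

        for i, j in combinations(range(len(defects)), 2):
            if torus_dist(defects[i][0], defects[j][0]) <= scale:
                parent[find(i)] = find(j)

        # Keep only clusters with nonzero total charge mod d.
        clusters = {}
        for i in range(len(defects)):
            clusters.setdefault(find(i), []).append(i)
        survivors = []
        for members in clusters.values():
            if sum(defects[i][1] for i in members) % d != 0:
                survivors.extend(members)  # charged cluster survives
        defects = [defects[i] for i in survivors]
        scale *= 2  # next renormalization level
    return defects  # empty list means all defects were annihilated
```

Each pass removes every neutral cluster at the current scale; an empty return value indicates successful decoding, while surviving defects at the largest scale signal a potential logical error.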



Related research

We consider a class of decoding algorithms that are applicable to error correction for both Abelian and non-Abelian anyons. This class includes multiple algorithms that have recently attracted attention, including the Bravyi-Haah RG decoder. They are applied to both the problem of single-shot error correction (with perfect syndrome measurements) and that of active error correction (with noisy syndrome measurements). For Abelian models we provide a threshold proof in both cases, showing that there is a finite noise threshold below which errors can be arbitrarily suppressed when any decoder in this class is used. For non-Abelian models such a proof is found for the single-shot case. The means by which decoding may be performed for active error correction of non-Abelian anyons is studied in detail. Differences with the Abelian case are discussed.
The typical model for measurement noise in quantum error correction is to randomly flip the binary measurement outcome. In experiments, measurements yield much richer information, such as continuous current values or discrete photon counts, which is then mapped into binary outcomes by discarding some of this information. In this work, we consider methods to incorporate all of this richer information, typically called soft information, into the decoding of quantum error correction codes, and in particular the surface code. We describe how to modify both the Minimum Weight Perfect Matching and Union-Find decoders to leverage soft information, and demonstrate that these soft decoders outperform the standard (hard) decoders, which can only access the binary measurement outcomes. Moreover, we observe that the soft decoder achieves a threshold 25% higher than any hard decoder for phenomenological noise with Gaussian soft measurement outcomes. We also introduce a soft measurement error model with amplitude damping, in which measurement time leads to a trade-off between measurement resolution and additional disturbance of the qubits. Under this model we observe that the performance of the surface code is very sensitive to the choice of the measurement time: for a distance-19 surface code, a five-fold increase in measurement time can lead to a thousand-fold increase in logical error rate. Moreover, the measurement time that minimizes the physical error rate is distinct from the one that optimizes the logical performance, pointing to the benefits of jointly optimizing the physical and quantum error correction layers.
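As an illustration of how soft information enters such decoders, the sketch below converts a continuous measurement outcome into an edge weight for a matching- or union-find-style decoder. It assumes a hypothetical model in which the two measurement results produce Gaussian-distributed outcomes, in line with the abstract's Gaussian soft-outcome setting; the parameter names are illustrative.

```python
import math

def soft_edge_weight(outcome, mu0=-1.0, mu1=1.0, sigma=1.0):
    """Map a continuous measurement outcome to a decoder edge weight.

    Assumes outcomes for results 0 and 1 are Gaussians centred at mu0
    and mu1 with width sigma (an assumed model). A hard decoder keeps
    only the sign of the outcome; the soft weight keeps the confidence.
    """
    # Log-likelihood ratio between the two measurement hypotheses.
    llr = ((outcome - mu1) ** 2 - (outcome - mu0) ** 2) / (2 * sigma ** 2)
    # Posterior probability that the hard-assigned outcome is wrong
    # (uniform prior over the two hypotheses).
    p_flip = 1.0 / (1.0 + math.exp(abs(llr)))
    # Standard -log(p / (1 - p)) conversion to an MWPM-style weight.
    return -math.log(p_flip / (1.0 - p_flip))
```

An outcome near the midpoint between mu0 and mu1 yields a weight near zero (the decoder treats the measurement as nearly uninformative), whereas an outcome deep in either Gaussian yields a large weight, recovering hard-decoder behavior.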
Consider a stabilizer state on $n$ qudits, each of dimension $D$ with $D$ being a prime or a square-free integer, divided into three mutually disjoint sets or parts. Generalizing a result of Bravyi et al. [J. Math. Phys. \textbf{47}, 062106 (2006)] for qubits ($D=2$), we show that up to local unitaries on the three parts the state can be written as a tensor product of unentangled single-qudit states, maximally entangled EPR pairs, and tripartite GHZ states. We employ this result to obtain a complete characterization of the properties of a class of channels associated with stabilizer error-correcting codes, along with their complementary channels.
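Schematically, the stated decomposition can be written as follows (a LaTeX rendering of the result, with the local unitaries $U_A \otimes U_B \otimes U_C$ left implicit):

```latex
% Up to local unitaries on parts A, B, C, a tripartite stabilizer
% state on qudits of dimension D factors as
\[
  |\psi\rangle_{ABC} \,\cong\,
  \bigotimes_k |\phi_k\rangle \;\otimes
  \bigotimes_{\{X,Y\}\subset\{A,B,C\}} |\Phi_{XY}\rangle \;\otimes\,
  \bigotimes_m |\mathrm{GHZ}_{ABC}\rangle,
\]
% with qudit EPR pairs and GHZ states given by
\[
  |\Phi_{XY}\rangle = \frac{1}{\sqrt{D}} \sum_{j=0}^{D-1} |jj\rangle,
  \qquad
  |\mathrm{GHZ}_{ABC}\rangle = \frac{1}{\sqrt{D}} \sum_{j=0}^{D-1} |jjj\rangle.
\]
```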
Ye-Hua Liu, David Poulin (2018)
Belief-propagation (BP) decoders play a vital role in modern coding theory, but they are not suitable to decode quantum error-correcting codes because of a unique quantum feature called error degeneracy. Inspired by an exact mapping between BP and deep neural networks, we train neural BP decoders for quantum low-density parity-check (LDPC) codes with a loss function tailored to error degeneracy. Training substantially improves the performance of BP decoders for all families of codes we tested and may solve the degeneracy problem which plagues the decoding of quantum LDPC codes.
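The degeneracy criterion such a loss function must capture is that a decoder output counts as correct whenever it differs from the true error only by a stabilizer. Below is a minimal sketch of that success check for a binary (CSS-like) code; the names `H` and `logicals` are hypothetical, and the paper folds this criterion into the BP training loss rather than applying it as a post-hoc test.

```python
import numpy as np

def degenerate_success(e_true, e_hat, H, logicals):
    """Success criterion up to error degeneracy (illustrative names).

    The residual r = e_true + e_hat (mod 2) is harmless iff it lies in
    the stabilizer group, i.e. it triggers no syndrome (H r = 0 mod 2)
    and acts trivially on the logical qubits (logicals r = 0 mod 2).
    """
    r = (e_true + e_hat) % 2
    syndrome_free = not np.any((H @ r) % 2)
    trivial_logical = not np.any((logicals @ r) % 2)
    return syndrome_free and trivial_logical
```

A loss that rewards any output satisfying this check, rather than only exact recovery of `e_true`, is what distinguishes degeneracy-aware training from its classical counterpart.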
The efficient validation of quantum devices is critical for emerging technological applications. In a wide class of use cases, the precise engineering of a Hamiltonian is required both for the implementation of gate-based quantum information processing and for reliable quantum memories. Inferring the experimentally realized Hamiltonian through a scalable number of measurements constitutes the challenging task of Hamiltonian learning. In particular, assessing the quality of the implementation of topological codes is essential for quantum error correction. Here, we introduce a neural-network-based approach to this challenge. We capitalize on a family of exactly solvable models to train our algorithm and generalize to a broad class of experimentally relevant sources of errors. We discuss how our algorithm scales with system size and analyze its resilience to various noise sources.
