We present an algorithm for error correction in topological codes that exploits modern machine learning techniques. Our decoder is constructed from a stochastic neural network called a Boltzmann machine, of the type extensively used in deep learning. We provide a general prescription for training the network and a decoding strategy that is applicable to a wide variety of stabilizer codes with very little specialization. We demonstrate the neural decoder numerically on the well-known two-dimensional toric code with phase-flip errors.
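The decoding idea above can be caricatured in a few lines: a Boltzmann machine models a joint distribution over syndrome and error bits, and decoding samples error configurations with the syndrome units clamped. The sketch below is a minimal, untrained restricted Boltzmann machine with hypothetical sizes; the paper's actual architecture, training procedure, and code layout are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy RBM: visible layer = [syndrome bits | error bits], one hidden layer.
# Sizes and (random, untrained) weights are illustrative assumptions.
n_syn, n_err, n_hid = 4, 8, 16
W = rng.normal(0.0, 0.1, (n_syn + n_err, n_hid))  # visible-hidden weights
b = np.zeros(n_syn + n_err)                       # visible biases
c = np.zeros(n_hid)                               # hidden biases

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sample_error_given_syndrome(syndrome, n_steps=200):
    """Block Gibbs sampling with the syndrome units clamped."""
    v = np.concatenate([syndrome, rng.integers(0, 2, n_err)]).astype(float)
    for _ in range(n_steps):
        h = (rng.random(n_hid) < sigmoid(v @ W + c)).astype(float)
        v_new = (rng.random(n_syn + n_err) < sigmoid(h @ W.T + b)).astype(float)
        v[n_syn:] = v_new[n_syn:]  # only the error units are resampled
    return v[n_syn:].astype(int)

syndrome = np.array([1, 0, 1, 0])
print(sample_error_given_syndrome(syndrome))
```

With trained weights, repeated samples would concentrate on likely error patterns consistent with the observed syndrome.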
Quantum LDPC codes are a promising direction for low-overhead quantum computing. In this paper, we propose a generalization of the Union-Find decoder as a decoder for quantum LDPC codes. We prove that this decoder corrects all errors of weight up to An^α, for some constants A, α > 0, for several classes of quantum LDPC codes, including toric codes and hyperbolic codes in any dimension D ≥ 3 as well as quantum expander codes. To prove this result, we introduce a notion of covering radius, which measures the spread of an error from its syndrome. We believe this notion could find applications beyond the decoding problem. We also perform numerical simulations, which show that our Union-Find decoder outperforms the belief propagation decoder in the low error rate regime for a quantum LDPC code of length 3600.
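The data structure at the heart of any Union-Find decoder is the disjoint-set (union-find) structure used to merge growing syndrome clusters in near-linear time. A minimal sketch of that structure, with path compression and union by size (the cluster-growth and peeling stages of the actual decoder are not shown):

```python
class UnionFind:
    """Disjoint-set structure used to merge growing syndrome clusters."""

    def __init__(self, n):
        self.parent = list(range(n))
        self.size = [1] * n

    def find(self, x):
        # Path compression: point nodes at their grandparent while walking up.
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x

    def union(self, a, b):
        # Union by size: attach the smaller cluster under the larger one.
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return
        if self.size[ra] < self.size[rb]:
            ra, rb = rb, ra
        self.parent[rb] = ra
        self.size[ra] += self.size[rb]

uf = UnionFind(6)
uf.union(0, 1); uf.union(1, 2)
print(uf.find(2) == uf.find(0))  # True: 0, 1, 2 share a cluster
```

These two optimizations together give the almost-constant amortized cost per operation that makes Union-Find decoding fast.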
Topological quantum error-correcting codes are a promising candidate for building fault-tolerant quantum computers. Decoding topological codes optimally, however, is known to be a computationally hard problem. Various decoders have been proposed that achieve approximately optimal error thresholds; due to practical constraints, however, no single decoder is an obvious best choice. In this paper, we introduce a framework that can combine arbitrary decoders for any given code to significantly reduce the logical error rate. We rely on the crucial observation that two different decoding techniques, while possibly having similar logical error rates, can perform differently on the same error syndrome. We use machine learning techniques to assign a given error syndrome to the decoder that is most likely to decode it correctly. We apply our framework to an ensemble of Minimum-Weight Perfect Matching (MWPM) and Hard-Decision Renormalization Group (HDRG) decoders for the surface code under the depolarizing noise model. Our simulations show improvements of 38.4%, 14.6%, and 7.1% over the pseudo-threshold of MWPM for distance-5, -7, and -9 codes, respectively. Lastly, we discuss the advantages and limitations of our framework and its applicability to other error-correcting codes. Our framework can provide a significant boost to error correction by combining the strengths of various decoders. In particular, it may allow for combining very fast decoders with moderate error-correcting capability into a very fast ensemble decoder with high error-correcting capability.
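The routing idea can be illustrated with a deliberately simplified stand-in for the learned classifier: record which decoder succeeded on each training syndrome and route new syndromes accordingly. The paper trains a machine-learning model on syndrome features; the lookup table below, and the names "mwpm"/"hdrg", are illustrative assumptions only.

```python
from collections import defaultdict

# Hypothetical training record: for each syndrome (as a bit tuple), which
# decoders corrected the corresponding error successfully.
training_log = [
    ((0, 1, 1, 0), {"mwpm": True,  "hdrg": False}),
    ((1, 1, 0, 0), {"mwpm": False, "hdrg": True}),
    ((0, 1, 1, 0), {"mwpm": True,  "hdrg": True}),
]

# Tally per-syndrome success counts for each decoder.
scores = defaultdict(lambda: {"mwpm": 0, "hdrg": 0})
for syndrome, outcome in training_log:
    for name, ok in outcome.items():
        scores[syndrome][name] += int(ok)

def route(syndrome, default="mwpm"):
    """Pick the decoder most likely to succeed on this syndrome."""
    if syndrome not in scores:
        return default  # unseen syndrome: fall back to the default decoder
    s = scores[syndrome]
    return max(s, key=s.get)

print(route((1, 1, 0, 0)))  # hdrg
print(route((1, 0, 0, 1)))  # mwpm (fallback)
```

A trained classifier replaces the exact-match lookup so that the routing generalizes to syndromes never seen during training.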
Linear Programming (LP) is an important decoding technique for binary linear codes. However, the advantages of LP decoding, such as a low error floor and strong theoretical guarantees, come at the cost of high computational complexity and poor performance in the low signal-to-noise ratio (SNR) region. In this letter, we adopt the penalty dual decomposition (PDD) framework and propose a PDD algorithm to address the fundamental-polytope-based maximum likelihood (ML) decoding problem. Furthermore, we propose to integrate machine learning techniques into the most time-consuming part of the PDD decoding algorithm, i.e., the check polytope projection (CPP). Inspired by the fact that a multi-layer perceptron (MLP) can theoretically approximate any nonlinear mapping function, we present a specially designed neural CPP (NCPP) algorithm to decrease the decoding latency. Simulation results demonstrate the effectiveness of the proposed algorithms.
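The NCPP idea is to replace an iterative geometric projection with a single learned forward pass. A minimal sketch of the kind of network involved, with hypothetical layer sizes and random (untrained) weights; in the letter the network would be trained on input/exact-projection pairs, and its precise architecture is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stand-in for NCPP: a small MLP mapping a length-d vector to
# an approximate projection onto the check polytope.
d, hidden = 6, 32
W1, b1 = rng.normal(0.0, 0.3, (d, hidden)), np.zeros(hidden)
W2, b2 = rng.normal(0.0, 0.3, (hidden, d)), np.zeros(d)

def mlp_project(v):
    h = np.maximum(0.0, v @ W1 + b1)   # ReLU hidden layer
    out = h @ W2 + b2
    return np.clip(out, 0.0, 1.0)      # keep outputs in the unit cube

v = rng.random(d)
print(mlp_project(v).shape)  # (6,)
```

The latency win comes from replacing a data-dependent iterative loop with a fixed sequence of matrix multiplications.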
We propose a neural-network variational quantum algorithm to simulate the time evolution of quantum many-body systems. Based on a modified restricted Boltzmann machine (RBM) wavefunction ansatz, the proposed algorithm can be efficiently implemented on near-term quantum computers with low measurement cost. Using a qubit recycling strategy, only one ancilla qubit is required to represent all the hidden spins in the RBM architecture. The variational algorithm is extended to open quantum systems by employing a stochastic Schrödinger equation approach. Numerical simulations of spin-lattice models demonstrate that our algorithm is capable of capturing the dynamics of closed and open quantum many-body systems with high accuracy, without suffering from the vanishing gradient (or barren plateau) issue for the considered system sizes.
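For context, the standard (unmodified) RBM wavefunction ansatz assigns each spin configuration an unnormalized complex amplitude by tracing out the hidden units analytically. A minimal sketch with random, untrained parameters; the paper's modified ansatz and its hardware implementation are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(2)

# Standard RBM ansatz: complex visible biases a, hidden biases b, weights W.
# Sizes and random parameter values are illustrative assumptions.
n_vis, n_hid = 4, 6
a = rng.normal(0, 0.1, n_vis) + 1j * rng.normal(0, 0.1, n_vis)
b = rng.normal(0, 0.1, n_hid) + 1j * rng.normal(0, 0.1, n_hid)
W = rng.normal(0, 0.1, (n_vis, n_hid)) + 1j * rng.normal(0, 0.1, (n_vis, n_hid))

def amplitude(s):
    """Unnormalized amplitude psi(s) for a spin configuration s in {-1,+1}^n,
    with the hidden spins summed out in closed form via the cosh product."""
    theta = b + s @ W
    return np.exp(a @ s) * np.prod(2.0 * np.cosh(theta))

s = np.array([1, -1, 1, 1])
print(abs(amplitude(s)) > 0)
```

Variational time evolution then updates (a, b, W) so that the resulting state tracks the Schrödinger dynamics.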
We study stabilizer quantum error correcting codes (QECC) generated under hybrid dynamics of local Clifford unitaries and local Pauli measurements in one dimension. Building upon 1) a general formula relating the error-susceptibility of a subregion to its entanglement properties, and 2) a previously established mapping between entanglement entropies and domain wall free energies of an underlying spin model, we propose a statistical mechanical description of the QECC in terms of entanglement domain walls. Free energies of such domain walls generically feature a leading volume law term coming from their surface energy, and a sub-volume-law correction coming from the thermodynamic entropies of their transverse fluctuations. These are most easily accounted for by the capillary-wave theory of liquid-gas interfaces, which we use as an illustrative tool. We show that the information-theoretic decoupling criterion corresponds to a geometric decoupling of domain walls, which further leads to the identification of the contiguous code distance of the QECC as the crossover length scale at which the energy and entropy of the domain wall are comparable. The contiguous code distance thus diverges with the system size as the subleading entropic term of the free energy, protecting a finite code rate against local undetectable errors. We support these correspondences with numerical evidence, where we find that capillary-wave theory describes many qualitative features of the QECC; we also discuss when and why it fails to do so.