The typical model for measurement noise in quantum error correction is to randomly flip the binary measurement outcome. In experiments, measurements yield much richer information (e.g., continuous current values or discrete photon counts), which is then mapped into binary outcomes by discarding some of this information. In this work, we consider methods to incorporate all of this richer information, typically called soft information, into the decoding of quantum error correction codes, and in particular the surface code. We describe how to modify both the Minimum Weight Perfect Matching and Union-Find decoders to leverage soft information, and demonstrate that these soft decoders outperform the standard (hard) decoders, which can only access the binary measurement outcomes. Moreover, we observe that the soft decoder achieves a threshold 25% higher than any hard decoder for phenomenological noise with Gaussian soft measurement outcomes. We also introduce a soft measurement error model with amplitude damping, in which the measurement time leads to a trade-off between measurement resolution and additional disturbance of the qubits. Under this model, we observe that the performance of the surface code is very sensitive to the choice of measurement time: for a distance-19 surface code, a five-fold increase in measurement time can lead to a thousand-fold increase in logical error rate. Moreover, the measurement time that minimizes the physical error rate is distinct from the one that minimizes the logical error rate, pointing to the benefits of jointly optimizing the physical and quantum error correction layers.
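As an illustration of how soft information can enter a decoder, the sketch below (Python; the Gaussian readout parameters mu and sigma and the weighting convention are assumptions for illustration, not taken from the paper) converts a continuous readout value into a posterior flip probability and a log-likelihood edge weight of the kind a matching-based decoder could use.

```python
import numpy as np

def soft_flip_probability(x, mu=1.0, sigma=0.5):
    """Posterior probability that the hard-thresholded outcome sign(x) is wrong,
    assuming the readout x is drawn from N(+mu, sigma^2) or N(-mu, sigma^2)
    with equal priors (hypothetical Gaussian readout model)."""
    return 1.0 / (1.0 + np.exp(2.0 * mu * np.abs(x) / sigma**2))

def soft_edge_weight(x, mu=1.0, sigma=0.5, eps=1e-12):
    """Log-likelihood weight for the matching-graph edge tied to this measurement:
    an ambiguous readout (small |x|) gives a cheap edge, a confident readout
    (large |x|) gives an expensive one."""
    p = np.clip(soft_flip_probability(x, mu, sigma), eps, 1.0 - eps)
    return np.log((1.0 - p) / p)

# Example: a readout near the decision boundary vs. a clear-cut one.
print(soft_edge_weight(0.05))   # low weight: hypothesizing a flip is cheap
print(soft_edge_weight(1.2))    # high weight: hypothesizing a flip is expensive
```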
Quantum computation promises significant computational advantages over classical computation for some problems. However, quantum hardware suffers from much higher error rates than classical hardware. As a result, extensive quantum error correction is required to execute a useful quantum algorithm. The decoder is a key component of the error correction scheme: its role is to identify errors faster than they accumulate in the quantum computer, and it must be implemented with minimal hardware resources in order to scale to the regime of practical applications. In this work, we consider surface code error correction, the most popular family of error correcting codes for quantum computing, and we design a decoder micro-architecture for the Union-Find decoding algorithm. We propose a three-stage, fully pipelined hardware implementation of the decoder that significantly speeds up decoding. We then optimize the amount of decoding hardware required to perform error correction simultaneously over all the logical qubits of the quantum computer. By sharing resources between logical qubits, we obtain a 67% reduction in the number of hardware units, and the memory capacity is reduced by 70%. Moreover, we reduce the bandwidth required for the decoding process by a factor of at least 30 using low-overhead compression algorithms. Finally, we provide numerical evidence that our optimized micro-architecture can be executed fast enough to correct errors in a quantum computer.
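The pipeline and resource-sharing scheme are specific to the paper, but the primitive the micro-architecture accelerates is the standard disjoint-set (union-find) data structure. A minimal software sketch with path compression and union by size is shown below purely for reference; it is not the paper's hardware design.

```python
class UnionFind:
    """Disjoint-set structure: the core primitive behind the Union-Find decoder,
    sketched in software (the paper implements a pipelined hardware version)."""

    def __init__(self, n):
        self.parent = list(range(n))
        self.size = [1] * n

    def find(self, v):
        # Path compression: point every visited node directly at the root.
        root = v
        while self.parent[root] != root:
            root = self.parent[root]
        while self.parent[v] != root:
            self.parent[v], v = root, self.parent[v]
        return root

    def union(self, a, b):
        # Union by size: attach the smaller cluster under the larger one.
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return ra
        if self.size[ra] < self.size[rb]:
            ra, rb = rb, ra
        self.parent[rb] = ra
        self.size[ra] += self.size[rb]
        return ra
```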
We propose the Wishart planted ensemble, a class of zero-field Ising models with tunable algorithmic hardness and a specifiable (or planted) ground state. The problem class arises from a simple procedure for generating a family of random integer programming problems with specific statistical symmetry properties, but turns out to have intimate connections to a sign-inverted variant of the Hopfield model. The Hamiltonian contains only 2-spin interactions, with the coupler matrix following a type of Wishart distribution. The class exhibits a classical first-order phase transition in temperature. For some parameter settings the model has a locally stable paramagnetic state, a feature which correlates strongly with difficulty in finding the ground state and suggests an extremely rugged energy landscape. We analytically probe the ensemble's thermodynamic properties by deriving the Thouless-Anderson-Palmer equations and free energy, and corroborate the results with a replica and annealed approximation analysis; extensive Monte Carlo simulations confirm our predictions of the first-order transition temperature. The class exhibits a wide variation in algorithmic hardness as a generation parameter is varied, with a pronounced easy-hard-easy profile and a peak in solution time towering many orders of magnitude over that of the easy regimes. By deriving the ensemble-averaged energy distribution and taking into account finite-precision representation, we propose an analytical expression for the location of the hardness peak and show that at fixed precision, the number of constraints in the integer program must increase with system size to yield truly hard problems. The Wishart planted ensemble is interesting for its peculiar physical properties and provides a useful and analytically transparent set of problems for benchmarking optimization algorithms.
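Below is a minimal sketch of one way to generate a planted instance of this kind, assuming the construction suggested by the abstract: Gaussian columns projected orthogonal to a planted spin configuration, giving a positive semidefinite Wishart-like coupler matrix whose quadratic form vanishes at the planted state. Normalization and distribution details here are illustrative and may differ from the paper's exact definition.

```python
import numpy as np

def wishart_planted_instance(n, m, planted=None, rng=None):
    """Draw m Gaussian columns, project out the planted configuration t so every
    column is orthogonal to t, and form J ~ W W^T / n with the diagonal removed.
    Since s^T W W^T s >= 0 with equality at s = +-t, and dropping the diagonal
    only shifts the energy by a constant (s_i^2 = 1), the planted state is a
    ground state of H(s) = 1/2 * s^T J s."""
    rng = np.random.default_rng(rng)
    t = planted if planted is not None else rng.choice([-1.0, 1.0], size=n)
    W = rng.standard_normal((n, m))
    W -= np.outer(t, t @ W) / n          # make every column orthogonal to t
    J = W @ W.T / n
    np.fill_diagonal(J, 0.0)             # keep only 2-spin couplings
    return J, t

def energy(J, s):
    """Zero-field Ising energy H(s) = 1/2 * s^T J s."""
    return 0.5 * s @ J @ s

J, t = wishart_planted_instance(n=32, m=16, rng=0)
s_rand = np.random.default_rng(1).choice([-1.0, 1.0], size=32)
print(energy(J, t), "<=", energy(J, s_rand))   # planted state has lower energy
```

The ratio m/n plays the role of a generation parameter: in this sketch it controls how constrained the quadratic form is, which is the kind of knob the abstract refers to when describing the easy-hard-easy hardness profile.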