Quantum computers have the potential to help solve a range of physics and chemistry problems, but noise in quantum hardware currently limits our ability to obtain accurate results from the execution of quantum-simulation algorithms. Various methods have been proposed to mitigate the impact of noise on variational algorithms, including several that model the noise as a damping of the expectation values of observables. In this work, we benchmark various methods, including two new methods proposed here, for estimating the damping factor and hence recovering the noise-free expectation values. We compare their performance in estimating the ground-state energies of several instances of the 1D mixed-field Ising model using the variational-quantum-eigensolver algorithm with up to 20 qubits on two of IBM's quantum computers. We find that several error-mitigation techniques allow us to recover energies to within 10% of the true values for circuits containing up to about 25 ansatz layers, where each layer consists of CNOT gates between all neighboring qubits and Y-rotations on all qubits.
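As a minimal illustration of the damping picture (not the specific estimators benchmarked in this abstract), assume a global-depolarizing model in which the noisy expectation of a traceless observable satisfies $\langle O\rangle_{\text{noisy}} = f\,\langle O\rangle_{\text{ideal}}$; the factor $f$ can then be estimated from a reference circuit whose ideal value is known and divided out of other measurements. All numbers below are illustrative.

```python
# Minimal sketch under an assumed global-depolarizing noise model:
# for a traceless observable O, <O>_noisy = f * <O>_ideal, so f estimated
# on a reference circuit with a known ideal value can be divided out.

def estimate_damping_factor(ref_noisy: float, ref_ideal: float) -> float:
    """Estimate the damping factor f from a reference circuit."""
    return ref_noisy / ref_ideal

def mitigate(noisy_value: float, damping: float) -> float:
    """Rescale a noisy expectation value by the estimated damping factor."""
    return noisy_value / damping

# Reference circuit ideally gives 1.0 but measures 0.62 on hardware.
f = estimate_damping_factor(0.62, 1.0)
print(mitigate(0.38, f))  # recovered estimate ~= 0.613
```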
Variational Quantum Algorithms (VQAs) are a promising application for near-term quantum processors; however, the quality of their results is greatly limited by noise. For this reason, various error-mitigation techniques that can be applied to these algorithms have emerged to deal with noise. Recent work introduced a technique for mitigating expectation values against correlated measurement errors that can be applied to measurements of tens of qubits. We apply this technique to VQAs and demonstrate its effectiveness in improving estimates of the cost function. Moreover, we use the data resulting from this technique to experimentally characterize measurement errors in terms of the device connectivity on devices of up to 20 qubits. These results should be useful both for better understanding the near-term potential of VQAs and for understanding the correlations in measurement errors on large, near-term devices.
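For intuition, here is a hedged sketch of the simplest, tensored form of measurement-error mitigation, assuming readout errors factorize qubit-by-qubit (the correlated-error technique referenced above is more general). It is shown for 2 qubits; forming the full Kronecker product is exponential in the qubit number, so at tens of qubits one would work with the per-qubit factors instead. All error rates and counts are illustrative.

```python
import numpy as np

# Tensored readout-error mitigation, assuming assignment errors factorize
# per qubit. A calibration gives, for each qubit, a 2x2 assignment matrix;
# the measured distribution is corrected by inverting their tensor product.

def single_qubit_assignment(p0_given_1: float, p1_given_0: float) -> np.ndarray:
    """Column-stochastic 2x2 assignment matrix A[measured, prepared]."""
    return np.array([[1 - p1_given_0, p0_given_1],
                     [p1_given_0, 1 - p0_given_1]])

def mitigate_distribution(p_meas: np.ndarray, a_mats: list) -> np.ndarray:
    """Solve A_full @ p_true = p_meas for the error-free distribution."""
    a_full = a_mats[0]
    for a in a_mats[1:]:
        a_full = np.kron(a_full, a)  # exponential; fine for a 2-qubit demo
    return np.linalg.solve(a_full, p_meas)

# Two qubits, each with 2%/4% assignment-flip probabilities (illustrative).
a_mats = [single_qubit_assignment(0.02, 0.04)] * 2
p_measured = np.array([0.90, 0.04, 0.04, 0.02])
print(mitigate_distribution(p_measured, a_mats))
```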
Even with the recent rapid developments in quantum hardware, noise remains the biggest challenge for practical applications of any near-term quantum device. Full quantum error correction cannot be implemented in these devices because of their limited scale. Therefore, instead of relying on engineered code symmetries, symmetry verification was developed, which exploits the symmetries inherent to the physical problem we are trying to solve. In this article, we develop a general framework named symmetry expansion, which provides a wide spectrum of symmetry-based error-mitigation schemes beyond symmetry verification, enabling us to strike different balances between the estimation bias and the sampling cost of the scheme. We show that certain symmetry-expansion schemes can achieve a smaller estimation bias than symmetry verification through cancellation between the biases due to the detectable and undetectable noise components. A practical way to search for such a small-bias scheme is introduced. In numerical simulations of energy estimation in the Fermi-Hubbard model, the small-bias symmetry-expansion scheme we found achieves an estimation bias 6 to 9 times smaller than that achievable by symmetry verification when the average number of circuit errors is between 1 and 2. The corresponding sampling cost for reducing random shot noise is only 2 to 6 times higher than that of symmetry verification. Beyond symmetries inherent to the physical problem, our formalism also applies to engineered symmetries. For example, the recent scheme for exponential error suppression using multiple noisy copies of the quantum device is just a special case of symmetry expansion using the permutation symmetry among the copies.
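To make the baseline concrete, the sketch below implements plain symmetry verification, the special case that symmetry expansion generalizes: a noisy state is projected onto the $+1$ eigenspace of a symmetry operator before the expectation value is taken. The parity symmetry $Z\otimes Z$ and the 2-qubit example state are illustrative assumptions, not the Fermi-Hubbard setting of this abstract.

```python
import numpy as np

# Symmetry verification: project a noisy state rho onto the +1 eigenspace
# of a symmetry S before measuring, discarding the symmetry-violating part.

Z = np.diag([1.0, -1.0])
X = np.array([[0.0, 1.0], [1.0, 0.0]])
S = np.kron(Z, Z)            # symmetry operator (two-qubit parity)
P = (np.eye(4) + S) / 2      # projector onto the +1 sector

def symmetry_verified_expectation(rho: np.ndarray, obs: np.ndarray) -> float:
    """<O>_SV = Tr[P rho P O] / Tr[P rho P], i.e. post-selection on S."""
    rho_proj = P @ rho @ P
    return (np.trace(rho_proj @ obs) / np.trace(rho_proj)).real

# A Bell state mixed with a parity-violating error |01><01|.
bell = np.zeros((4, 4))
bell[0, 0] = bell[0, 3] = bell[3, 0] = bell[3, 3] = 0.5
rho = 0.9 * bell + 0.1 * np.diag([0.0, 1.0, 0.0, 0.0])
print(symmetry_verified_expectation(rho, np.kron(X, X)))  # -> 1.0
```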
The Eastin-Knill theorem states that no quantum error-correcting code can have a universal set of transversal gates. For self-dual CSS codes that can implement Clifford gates transversally, it suffices to provide one additional non-Clifford gate, such as the $T$-gate, to achieve universality. Common methods to implement fault-tolerant $T$-gates, such as magic state distillation, generate significant hardware overhead that will likely prevent their practical use in the near term. Recently, methods have been developed to mitigate the effect of noise in shallow quantum circuits that are not protected by error correction. Error-mitigation methods require no additional hardware resources but suffer from poor asymptotic scaling and apply only to a restricted class of quantum algorithms. In this work, we combine both approaches and show how to implement encoded Clifford+$T$ circuits where Clifford gates are protected from noise by error correction, while errors introduced by noisy encoded $T$-gates are mitigated using the quasi-probability method. As a result, Clifford+$T$ circuits with a number of $T$-gates inversely proportional to the physical noise rate can be implemented on small error-corrected devices without magic state distillation. We argue that such circuits can be out of reach for state-of-the-art classical simulation algorithms.
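A hedged sketch of the quasi-probability (probabilistic error cancellation) estimator referred to above: the inverse noise channel is written as $\mathcal{N}^{-1}=\sum_i q_i \mathcal{O}_i$ with real, possibly negative coefficients; variant $i$ is sampled with probability $|q_i|/\gamma$, where $\gamma=\sum_i|q_i|$, and outcomes are reweighted by $\gamma\,\mathrm{sign}(q_i)$. The coefficients and the `run_variant` stub are illustrative placeholders, not the decomposition for encoded $T$-gates used in the paper.

```python
import numpy as np

# Quasi-probability (probabilistic error cancellation) Monte-Carlo
# estimator: sample circuit variants by |q_i|/gamma, reweight by sign.

rng = np.random.default_rng(0)

q = np.array([1.10, -0.05, -0.05])   # quasi-probabilities, sum to 1
gamma = np.abs(q).sum()              # sampling overhead (grows with noise)
probs = np.abs(q) / gamma

def run_variant(i: int) -> float:
    """Hypothetical stand-in for running circuit variant i on hardware."""
    return rng.normal(loc=(0.50, 0.48, 0.52)[i], scale=0.1)

def pec_estimate(shots: int) -> float:
    """Sign-reweighted Monte-Carlo average of sampled circuit outcomes."""
    idx = rng.choice(len(q), size=shots, p=probs)
    samples = np.array([run_variant(i) for i in idx])
    return float(gamma * np.mean(np.sign(q[idx]) * samples))

print(pec_estimate(10_000))
```

Note the variance of this estimator grows as $\gamma^2$, which is why the number of mitigable $T$-gates scales inversely with the physical noise rate.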
Quantum error mitigation techniques are at the heart of quantum hardware implementation and are key to improving the performance of the variational quantum learning scheme (VQLS). Although VQLS is partially robust to noise, both empirical and theoretical results show that noise rapidly deteriorates the performance of most variational quantum algorithms on large-scale problems. Furthermore, VQLS suffers from the barren-plateau phenomenon: the gradients used by the classical optimizer vanish exponentially with the number of qubits. Here we devise a resource- and runtime-efficient scheme, the quantum architecture search scheme (QAS), to maximally improve the robustness and trainability of VQLS. In particular, given a learning task, QAS actively seeks an optimal circuit architecture that balances the benefits and side effects of adding more quantum gates: while more gates give the quantum model stronger expressive power, they also introduce more noise and a more severe barren-plateau scenario. Consequently, QAS can effectively suppress the influence of quantum noise and barren plateaus. We implement QAS on both numerical simulators and real quantum hardware, accessed via the IBM cloud, to accomplish data-classification and quantum-chemistry tasks. Numerical and experimental results show that QAS significantly outperforms conventional variational quantum algorithms with heuristic circuit architectures. Our work provides practical guidance for developing advanced learning-based quantum error mitigation techniques on near-term quantum devices.
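A deliberately toy sketch of the search loop this abstract describes: sample candidate architectures, score each on the task, and keep the best. The `score_architecture` function is a hypothetical stand-in for training and evaluating a circuit; its trade-off terms only mimic the tension between expressive power and accumulated noise, and are not the paper's objective.

```python
import numpy as np

# Toy architecture-search loop: sample candidates, score, keep the best.
# The score is a stand-in: expressivity gains saturate with depth while
# the noise penalty keeps growing, so an intermediate depth wins.

rng = np.random.default_rng(1)

def score_architecture(n_layers: int) -> float:
    expressivity = 1.0 - np.exp(-0.5 * n_layers)   # gains saturate
    noise_penalty = 0.04 * n_layers                # errors grow with depth
    return expressivity - noise_penalty + rng.normal(scale=0.01)

candidates = rng.integers(1, 20, size=50)          # sampled layer counts
best = max(candidates, key=score_architecture)
print(f"selected architecture: {best} layers")
```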
Variational Quantum Algorithms (VQAs) are widely viewed as the best hope for near-term quantum advantage. However, recent studies have shown that noise can severely limit the trainability of VQAs, e.g., by exponentially flattening the cost landscape and suppressing the magnitudes of cost gradients. Error Mitigation (EM) shows promise in reducing the impact of noise on near-term devices. Thus, it is natural to ask whether EM can improve the trainability of VQAs. In this work, we first show that, for a broad class of EM strategies, exponential cost concentration cannot be resolved without committing exponential resources elsewhere. This class of strategies includes as special cases Zero Noise Extrapolation, Virtual Distillation, Probabilistic Error Cancellation, and Clifford Data Regression. Second, we perform analytical and numerical analysis of these EM protocols, and we find that some of them (e.g., Virtual Distillation) can make it harder to resolve cost-function values than running no EM at all. As a positive result, we do find numerical evidence that Clifford Data Regression (CDR) can aid the training process in certain settings where cost concentration is not too severe. Our results show that care should be taken in applying EM protocols, as they can either worsen or fail to improve trainability. On the other hand, our positive results for CDR highlight the possibility of engineering error-mitigation methods to improve trainability.
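For reference, here is a minimal sketch of Zero Noise Extrapolation, one of the EM strategies named above: expectation values measured at deliberately amplified noise levels are fit and extrapolated back to the zero-noise limit. The scale factors and measured values are illustrative; this shows only the mechanics of ZNE, not the cost-concentration analysis of the paper.

```python
import numpy as np

# Zero Noise Extrapolation: fit expectation values measured at amplified
# noise scales lambda and extrapolate the fit back to lambda = 0.

lambdas = np.array([1.0, 2.0, 3.0])      # noise amplification factors
values = np.array([0.52, 0.41, 0.33])    # measured noisy expectations

coeffs = np.polyfit(lambdas, values, deg=2)   # Richardson-style fit
print(np.polyval(coeffs, 0.0))                # zero-noise estimate
```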