Reducing measurement errors in multi-qubit quantum devices is critical for performing any quantum algorithm. Here we show how to mitigate measurement errors by classical post-processing of the measured outcomes. Our techniques apply to any experiment where measurement outcomes are used for computing expected values of observables. Two error mitigation schemes are presented, based on tensor product and correlated Markovian noise models. Error rates parameterizing these noise models can be extracted from the measurement calibration data using a simple formula. Error mitigation is achieved by applying the inverse noise matrix to a probability vector that represents the outcomes of a noisy measurement. The error mitigation overhead, including the number of measurements and the cost of the classical post-processing, is exponential in $\epsilon n$, where $\epsilon$ is the maximum error rate and $n$ is the number of qubits. We report an experimental demonstration of our error mitigation methods on IBM Quantum devices using stabilizer measurements for graph states with $n\le 12$ qubits and entangled 20-qubit states generated by low-depth random Clifford circuits.
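The tensor-product scheme described above can be illustrated with a minimal sketch: each qubit is assigned a $2\times 2$ column-stochastic assignment matrix built from its calibrated error rates, the full noise matrix is their tensor product, and mitigation amounts to applying its inverse to the noisy probability vector. The function names and the specific error rates below are illustrative assumptions, not the paper's code.

```python
import numpy as np
from functools import reduce

def assignment_matrix(e_up, e_down):
    """Single-qubit assignment matrix A with A[i, j] = P(measure i | true j).
    e_up = P(read 1 | true 0), e_down = P(read 0 | true 1); columns sum to 1."""
    return np.array([[1.0 - e_up, e_down],
                     [e_up, 1.0 - e_down]])

def mitigate(p_noisy, error_rates):
    """Invert the tensor-product noise model on a measured probability vector.

    error_rates: list of (e_up, e_down) per qubit, first qubit = most
    significant bit of the outcome index. Cost is exponential in n, matching
    the overhead discussed in the abstract."""
    A = reduce(np.kron, [assignment_matrix(eu, ed) for eu, ed in error_rates])
    return np.linalg.solve(A, p_noisy)  # applies A^{-1} without forming it

# Illustrative example: ideal Bell-state statistics, hypothetical error rates.
rates = [(0.02, 0.05), (0.03, 0.01)]
p_ideal = np.array([0.5, 0.0, 0.0, 0.5])
A_full = reduce(np.kron, [assignment_matrix(eu, ed) for eu, ed in rates])
p_noisy = A_full @ p_ideal          # simulate noisy readout
p_mitigated = mitigate(p_noisy, rates)
```

Note that for finite shot counts the mitigated vector is only an estimator and may have slightly negative entries; in practice it is projected back onto the probability simplex.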
We introduce a Bayesian method for the estimation of single-qubit errors in quantum devices, and use it to characterize these errors on two 27-qubit superconducting qubit devices. We self-consistently estimate up to seven parameters of each qubit's state preparation, readout, and gate errors, analyze the stability of these errors as a function of time, and demonstrate easily implemented approaches for mitigating different errors before a quantum computation experiment. On the investigated devices we find non-negligible qubit reset errors that cannot be parametrized as a diagonal mixed state, but manifest as a coherent phase of a superposition with a small contribution from the qubit's excited state, which we are able to mitigate by applying pre-rotations on the initialized qubits. Our results demonstrate that Bayesian estimation can resolve small parameters, including those pertaining to quantum gate errors, with a high relative accuracy, at a lower measurement cost as compared with standard characterization approaches.
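The core idea of Bayesian error-rate estimation can be sketched in its simplest form: with a Beta prior and a binomial likelihood for observed bit flips, the posterior over a single readout-error rate is available in closed form. This is a generic textbook illustration, not the multi-parameter self-consistent estimator of the paper; the prior parameters and counts below are hypothetical.

```python
# Beta-Binomial posterior for one readout-error rate (illustrative sketch).
def posterior_readout_error(n_flips, n_shots, alpha=1.0, beta=1.0):
    """Return the posterior mean and Beta parameters for the error rate.

    Prior: Beta(alpha, beta) over the flip probability.
    Likelihood: n_flips observed bit flips in n_shots calibration shots.
    Conjugacy gives posterior Beta(alpha + n_flips, beta + n_shots - n_flips).
    """
    a_post = alpha + n_flips
    b_post = beta + (n_shots - n_flips)
    mean = a_post / (a_post + b_post)
    return mean, a_post, b_post

# Hypothetical calibration data: 12 flips in 4000 shots, flat prior.
mean, a, b = posterior_readout_error(12, 4000)
```

The same conjugate update applied shot-by-shot is what lets small error parameters be resolved with high relative accuracy from modest measurement budgets.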
We consider multi-time correlators for output signals from linear detectors, continuously measuring several qubit observables at the same time. Using the quantum Bayesian formalism, we show that for unital (symmetric) evolution in the absence of phase backaction, an $N$-time correlator can be expressed as a product of two-time correlators when $N$ is even. For odd $N$, there is a similar factorization, which also includes a single-time average. Theoretical predictions agree well with experimental results for two detectors, which simultaneously measure non-commuting qubit observables.
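The factorization described above can be written out explicitly. The notation below ($K_N$ for the $N$-time output correlator at ordered times $t_1 \le \dots \le t_N$, and $\langle I(t)\rangle$ for the single-time average of the output signal) is an assumed shorthand for illustration, not necessarily that of the paper:

\begin{align}
K_N(t_1,\dots,t_N) &= \prod_{k=1}^{N/2} K_2(t_{2k-1},\, t_{2k}), & N \text{ even},\\
K_N(t_1,\dots,t_N) &= \langle I(t_1)\rangle \prod_{k=1}^{(N-1)/2} K_2(t_{2k},\, t_{2k+1}), & N \text{ odd},
\end{align}

i.e., the times are paired consecutively into two-time correlators, with the earliest time contributing a single-time average when $N$ is odd, under the stated assumptions of unital (symmetric) evolution and no phase backaction.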
In multi-qubit systems, correlated errors arising from unwanted interactions with other qubits are one of the major obstacles to scaling up quantum computers. We present two approaches to correcting such noise and demonstrate that they achieve high fidelity and robustness. We use the terms spectator and intruder to distinguish the environments interacting with the target qubit in different parameter regimes. Our proposed approaches combine analytical theory and numerical optimization, and are general enough to yield smooth control pulses for various qubit systems. Both theory and numerical simulations demonstrate that these errors are corrected efficiently. Gate fidelities are generally above $0.9999$ over a large range of parameter variation for a set of single-qubit gates and two-qubit entangling gates. Comparison with well-known control waveforms demonstrates the advantage of our solutions.
Topological quantum error correction codes are known to be able to tolerate arbitrary local errors given sufficient qubits. This includes correlated errors involving many local qubits. In this work, we quantify this level of tolerance, numerically studying the effects of many-qubit errors on the performance of the surface code. We find that if errors spanning increasingly large areas are at least moderately exponentially suppressed, arbitrarily reliable quantum computation can still be achieved with practical overhead. We furthermore quantify the effect of non-local two-qubit correlated errors, which would be expected in arrays of qubits coupled by a polynomially decaying interaction, and when using many-qubit coupling devices. Surprisingly, we find that the surface code is very robust to this class of errors, despite a provable lack of a threshold error rate when such errors are present.
The major resolution-limiting factor in cryo-electron microscopy of unstained biological specimens is radiation damage by the very electrons that are used to probe the specimen structure. To address this problem, an electron microscopy scheme that employs quantum entanglement to enable phase measurement precision beyond the standard quantum limit has recently been proposed [Phys. Rev. A \textbf{85}, 043810]. Here we identify and examine in detail measurement errors that will arise in the scheme. An emphasis is given to considerations concerning inelastic scattering events because, in general, schemes assisted by quantum entanglement are known to be highly vulnerable to lossy processes. We find that the amounts of error due to both elastic and inelastic scattering processes are acceptable provided that the electron beam geometry is properly designed.