We describe a scalable experimental protocol for obtaining estimates of the error rate of individual quantum computational gates. This protocol, in which random Clifford gates are interleaved between a gate of interest, provides a bounded estimate of the average error of the gate under test so long as the average variation of the noise affecting the full set of Clifford gates is small. This technique takes into account both state preparation and measurement errors and is scalable in the number of qubits. We apply this protocol to a superconducting qubit system and find gate errors that compare favorably with the gate errors extracted via quantum process tomography.
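As a rough illustration of how such an interleaved experiment is analyzed, the following sketch (Python, assuming NumPy and SciPy) fits the zeroth-order randomized-benchmarking decay model A·p^m + B to the reference and interleaved survival curves and converts the two decay parameters into a gate-error estimate via r = (d − 1)(1 − p_int/p_ref)/d with d = 2^n. The function names, initial guesses, and the sample data are illustrative assumptions, not part of the published protocol.

    import numpy as np
    from scipy.optimize import curve_fit

    def rb_decay(m, A, p, B):
        # Zeroth-order RB decay model: survival probability F(m) = A * p**m + B.
        return A * p ** m + B

    def fit_decay_parameter(lengths, survival):
        # Fit the averaged survival probability vs. sequence length; return the decay p.
        popt, _ = curve_fit(rb_decay, lengths, survival,
                            p0=(0.5, 0.95, 0.5), maxfev=10000)
        return popt[1]

    def interleaved_gate_error(p_ref, p_int, n_qubits=1):
        # Gate-error estimate from the reference and interleaved decays:
        # r = (d - 1) * (1 - p_int / p_ref) / d, with d = 2**n_qubits.
        d = 2 ** n_qubits
        return (d - 1) / d * (1.0 - p_int / p_ref)

    # Purely illustrative survival data (not from any experiment).
    lengths = np.array([2, 4, 8, 16, 32])
    p_ref = fit_decay_parameter(lengths, np.array([0.98, 0.96, 0.92, 0.85, 0.74]))
    p_int = fit_decay_parameter(lengths, np.array([0.97, 0.94, 0.88, 0.78, 0.62]))
    print(interleaved_gate_error(p_ref, p_int, n_qubits=1))

In practice the two survival curves would be averaged over many random Clifford sequences at each length, and the resulting estimate carries the bound discussed above, which depends on how uniformly the noise affects the Clifford set.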
Hardware-efficient transpilation of quantum circuits to a quantum device's native gate set is essential for the execution of quantum algorithms on noisy quantum computers. Typical quantum devices utilize a gate set with a single two-qubit Clifford entangling gate …
Typical quantum gate tomography protocols struggle with a self-consistency problem: the gate operation cannot be reconstructed without knowledge of the initial state and final measurement, but such knowledge cannot be obtained without well-characterized gates …
Building upon the demonstration of coherent control and single-shot readout of the electron and nuclear spins of individual 31P atoms in silicon, we present here a systematic experimental estimate of quantum gate fidelities using randomized benchmarking …
A key requirement for scalable quantum computing is that elementary quantum gates can be implemented with sufficiently low error. One method for determining the error behavior of a gate implementation is to perform process tomography. However, standard …
We theoretically consider a cross-resonance (CR) gate implemented by pulse sequences proposed by Calderon-Vargas & Kestner, Phys. Rev. Lett. 118, 150502 (2017). These sequences mitigate systematic error to first order, but their effectiveness is limited …