
Randomized benchmarking for individual quantum gates

Added by Jens Eisert
Publication date: 2018
Fields: Physics
Language: English





Any technology requires precise benchmarking of its components, and quantum technologies are no exception. Randomized benchmarking allows for a comparatively resource-economical estimation of the average gate fidelity of quantum gates from the Clifford group, assuming identical noise levels for all gates, by making use of suitable sequences of randomly chosen Clifford gates. In this work, we report significant progress on randomized benchmarking by showing that it can be carried out for individual quantum gates outside the Clifford group, even with noise levels that vary from gate to gate. This is possible at little overhead in quantum resources, but at the expense of a significant classical computational cost. At the heart of our analysis is a representation-theoretic framework, developed here, which we combine with classical estimation techniques based on bootstrapping and matrix pencils. We demonstrate the functioning of the scheme by benchmarking tensor powers of T gates. Apart from its practical relevance, we expect this insight to be conceptually important, as it highlights the role of the assumptions made about unknown noise processes when characterizing quantum gates at high precision.
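To give a flavor of the classical post-processing mentioned above, the following is a minimal sketch (not taken from the paper) of a matrix pencil estimator in Python/NumPy: it recovers the decay parameters z_k from a sequence-averaged signal of the form f(m) = sum_k A_k z_k^m, which can contain several (possibly complex) decays when benchmarking gates outside the Clifford group. The pencil parameter, pole count and test signal are illustrative choices, not values from the paper.

```python
import numpy as np

def matrix_pencil_poles(f, n_poles):
    """Estimate poles z_k from samples f[m] ~ sum_k A_k * z_k**m
    (m = 0, 1, ..., len(f) - 1) via the matrix pencil method."""
    N = len(f)
    L = N // 2                                    # pencil parameter (illustrative choice)
    # Two Hankel data matrices, shifted against each other by one sample.
    Y0 = np.array([f[i:i + L] for i in range(N - L)])
    Y1 = np.array([f[i + 1:i + 1 + L] for i in range(N - L)])
    # The poles appear as the dominant eigenvalues of the pencil pinv(Y0) @ Y1.
    ev = np.linalg.eigvals(np.linalg.pinv(Y0) @ Y1)
    return sorted(ev, key=abs, reverse=True)[:n_poles]

# Example: a two-decay signal plus a constant offset (three poles in total).
m = np.arange(30)
f = 0.5 * 0.98**m + 0.3 * 0.90**m + 0.2
poles = matrix_pencil_poles(f, n_poles=3)         # expect approximately {1.0, 0.98, 0.90}
print(np.round(np.real(poles), 4))
```

In practice such an estimator would be combined with bootstrapping over the random sequences to attach error bars to the recovered decay parameters.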

Related research

A key requirement for scalable quantum computing is that elementary quantum gates can be implemented with sufficiently low error. One method for determining the error behavior of a gate implementation is to perform process tomography. However, standard process tomography is limited by errors in state preparation, measurement and one-qubit gates; it scales inefficiently with the number of qubits and does not detect adverse error compounding when gates are composed in long sequences. A further difficulty is that the error probabilities desirable for scalable quantum computing are of order 0.0001 or lower, which is challenging to verify experimentally. We describe a randomized benchmarking method that yields estimates of the computationally relevant errors without relying on accurate state preparation and measurement. Since it involves long sequences of randomly chosen gates, it also verifies that the error behavior remains stable in long computations. We implemented randomized benchmarking on trapped atomic ion qubits, establishing a one-qubit error probability per randomized pi/2 pulse of 0.00482(17) in a particular experiment. We expect this error probability to be readily improved with straightforward technical modifications.
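As a minimal illustration of the fitting step behind such an error estimate (not the authors' analysis code), one can fit the sequence-averaged survival probability to the standard decay model F(m) = A p^m + B and convert the decay parameter p to an average error per gate, r = (d - 1)(1 - p)/d. The data below are made up for demonstration.

```python
import numpy as np
from scipy.optimize import curve_fit

def rb_model(m, A, B, p):
    """Zeroth-order RB decay: survival probability after m random gates."""
    return A * p**m + B

# Hypothetical sequence lengths and sequence-averaged survival probabilities.
lengths = np.array([2, 4, 8, 16, 32, 64, 128, 256])
survival = np.array([0.99, 0.985, 0.975, 0.955, 0.92, 0.86, 0.76, 0.62])

(A, B, p), _ = curve_fit(rb_model, lengths, survival,
                         p0=[0.5, 0.5, 0.99], bounds=(0, 1))

d = 2                                    # single-qubit case
error_per_gate = (d - 1) * (1 - p) / d   # average error rate from the decay
print(f"p = {p:.5f}, error per gate = {error_per_gate:.2e}")
```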
We describe a simple randomized benchmarking protocol for quantum information processors and obtain a sequence of models for the observable fidelity decay, organized as a perturbative expansion in the errors. We are able to prove that the protocol provides an efficient and reliable estimate of an average error rate for a set of operations (gates) under a general noise model that allows for both time- and gate-dependent errors. We determine the conditions under which this estimate remains valid and illustrate the protocol with numerical examples.
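To make the simplest (time- and gate-independent) member of such a family of models concrete, here is a small simulation sketch: a single qubit undergoes m Haar-random gates (standing in for random Cliffords, which also form a 2-design), each followed by a depolarizing channel of strength lambda, and then the ideal inverse of the sequence. Because the depolarizing channel commutes with every unitary, the survival probability is exactly 1/2 + (1/2)(1 - lambda)^m, i.e. a single exponential with A = B = 1/2 and p = 1 - lambda. All parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def haar_unitary():
    """Haar-random 2x2 unitary (stand-in for a random Clifford gate)."""
    z = (rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

def depolarize(rho, lam):
    """Depolarizing channel of strength lam (gate-independent noise)."""
    return (1 - lam) * rho + lam * np.eye(2) / 2

def survival(m, lam, shots=200):
    """Average probability of returning to |0> after m noisy random gates
    followed by the ideal inverse of the whole sequence."""
    vals = []
    for _ in range(shots):
        rho = np.array([[1, 0], [0, 0]], dtype=complex)
        U_total = np.eye(2, dtype=complex)
        for _ in range(m):
            U = haar_unitary()
            rho = depolarize(U @ rho @ U.conj().T, lam)
            U_total = U @ U_total
        rho = U_total.conj().T @ rho @ U_total     # ideal inversion gate
        vals.append(np.real(rho[0, 0]))
    return np.mean(vals)

lam = 0.02
for m in (1, 5, 20, 50):
    print(m, survival(m, lam), 0.5 + 0.5 * (1 - lam)**m)  # simulation vs model
```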
To improve the performance of multi-qubit algorithms on quantum devices, it is critical to have methods for characterizing non-local quantum errors such as crosstalk. To address this issue, we propose and test an extension of the analysis of simultaneous randomized benchmarking data: correlated randomized benchmarking. We fit the decay of correlated polarizations to a composition of fixed-weight depolarizing maps to characterize the locality and weight of crosstalk errors. From these errors we introduce a crosstalk metric that indicates the distance to the closest map with only local errors. We demonstrate this technique experimentally with a four-qubit superconducting device and use correlated RB to validate crosstalk reduction when we implement an echo sequence.
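The sketch below shows one simple, much cruder way to quantify such correlations from simultaneous RB data; it is not the fixed-weight depolarizing-map fit of the paper. It fits individual decay parameters p_1 and p_2 for two qubits together with the decay p_12 of the correlated polarization, and reports how far p_12 deviates from the product p_1 * p_2 expected when the two qubits decay independently under purely local noise. The data and function names are hypothetical.

```python
import numpy as np
from scipy.optimize import curve_fit

def decay(m, A, B, p):
    return A * p**m + B

def fit_p(lengths, signal):
    """Fit a single exponential decay and return the decay parameter p."""
    (_, _, p), _ = curve_fit(decay, lengths, signal,
                             p0=[0.5, 0.5, 0.99], bounds=(-1, 1))
    return p

lengths = np.array([2, 4, 8, 16, 32, 64])

# Hypothetical simultaneous-RB signals: single-qubit polarizations <Z_1>, <Z_2>
# and the correlated polarization <Z_1 Z_2>, averaged over random sequences.
z1  = 0.9 * 0.990**lengths
z2  = 0.9 * 0.985**lengths
z12 = 0.8 * 0.970**lengths            # decays faster than the product p_1 * p_2

p1, p2, p12 = fit_p(lengths, z1), fit_p(lengths, z2), fit_p(lengths, z12)

# Crude crosstalk indicator: close to zero if the qubits decay independently.
crosstalk = p1 * p2 - p12
print(f"p1={p1:.4f}, p2={p2:.4f}, p12={p12:.4f}, indicator={crosstalk:.4f}")
```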
Randomized benchmarking (RB) protocols are standard tools for characterizing quantum devices. Prior analyses of RB protocols have not provided a complete method for analyzing realistic data, resulting in a variety of ad-hoc methods. The main confounding factor in rigorously analyzing data from RB protocols is an unknown and noise-dependent distribution of survival probabilities over random sequences. We propose a hierarchical Bayesian method where these survival distributions are modeled as nonparametric Dirichlet process mixtures. Our method infers parameters of interest without additional assumptions about the underlying physical noise process. We show with numerical examples that our method works robustly for both standard and highly pathological error models. Our method also works reliably at low noise levels and with little data because we avoid the asymptotic assumptions of commonly used methods such as least-squares fitting. For example, our method produces a narrow and consistent posterior for the average gate fidelity from ten random sequences per sequence length in the standard RB protocol.
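As a rough illustration of the hierarchical idea, the sketch below models the per-sequence survival probabilities with a simple parametric Beta distribution centered on the RB mean A p^m + B, rather than the paper's nonparametric Dirichlet process mixture, and samples the posterior with a basic Metropolis sampler. All priors, data and tuning constants are assumptions made for the sketch.

```python
import numpy as np
from scipy.stats import beta

rng = np.random.default_rng(2)

# Hypothetical data: for each sequence length, survival probabilities of a few
# individual random sequences (not yet averaged over sequences).
lengths = [4, 16, 64]
surv = [rng.beta(50 * q, 50 * (1 - q), size=10) for q in (0.92, 0.83, 0.62)]

def log_post(theta, kappa=50.0):
    """Per-sequence survivals at length m are Beta-distributed around the
    zeroth-order RB mean A*p**m + B (a simplified hierarchical model)."""
    A, B, p = theta
    if not (0 < p < 1 and 0 < A < 1 and 0 < B < 1 and A + B < 1):
        return -np.inf                     # flat prior on the valid region
    lp = 0.0
    for m, ys in zip(lengths, surv):
        mu = A * p**m + B
        lp += beta.logpdf(ys, kappa * mu, kappa * (1 - mu)).sum()
    return lp

# Basic random-walk Metropolis over (A, B, p).
theta = np.array([0.45, 0.50, 0.98])
samples = []
for _ in range(20000):
    prop = theta + rng.normal(scale=[0.02, 0.02, 0.002])
    if np.log(rng.uniform()) < log_post(prop) - log_post(theta):
        theta = prop
    samples.append(theta.copy())

p_post = np.array(samples)[5000:, 2]       # discard burn-in, keep p samples
print(f"p: posterior mean {p_post.mean():.4f} +/- {p_post.std():.4f}")
```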
Randomized benchmarking (RB) is a widely used method for estimating the average fidelity of gates implemented on a quantum computing device. The stochastic error of the average gate fidelity estimated by RB depends on the sampling strategy, i.e., on how the sequences to be run in the protocol are sampled. The sampling strategy is determined by a set of configurable parameters (an RB configuration) that includes the Clifford lengths (a list of the numbers of independent Clifford gates in a sequence) and the number of sequences for each Clifford length. The RB configuration is often chosen heuristically, and there has been little research on how best to choose it. We therefore propose a method for fully optimizing an RB configuration so that the confidence interval of the estimated fidelity is minimized without increasing the total execution time of the sequences. In experiments on real devices, we demonstrate the efficacy of the optimization method over heuristic selection in reducing the variance of the estimated fidelity.
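A toy version of such a trade-off (not the authors' optimization procedure) is to compare candidate RB configurations by the linearized least-squares variance they imply for the decay parameter p, alongside a crude execution-time proxy. The noise level, time model and candidate configurations below are made-up assumptions for illustration.

```python
import numpy as np

def predicted_var_p(lengths, n_seq, A=0.5, B=0.5, p=0.99, sigma=0.02):
    """Linearized (Fisher-information) variance of the fitted p for the model
    F(m) = A*p**m + B, with n_seq[i] sequences of length lengths[i] and
    per-sequence noise standard deviation sigma."""
    info = np.zeros((3, 3))
    for m, n in zip(lengths, n_seq):
        J = np.array([p**m, 1.0, A * m * p**(m - 1)])   # d F / d(A, B, p)
        info += n * np.outer(J, J) / sigma**2
    return np.linalg.inv(info)[2, 2]                    # variance of p-hat

def exec_time(lengths, n_seq, t_gate=1.0, t_readout=10.0):
    """Crude execution-time proxy: gates plus readout for every sequence."""
    return sum(n * (m * t_gate + t_readout) for m, n in zip(lengths, n_seq))

# Two hypothetical candidate configurations.
configs = {
    "short-heavy": ([2, 4, 8, 16, 32], [60, 60, 60, 60, 60]),
    "long-heavy":  ([8, 32, 128, 256], [40, 40, 30, 20]),
}
for name, (lengths, n_seq) in configs.items():
    print(name,
          f"time={exec_time(lengths, n_seq):.0f}",
          f"std(p)={np.sqrt(predicted_var_p(lengths, n_seq)):.2e}")
```

A full optimization would search over such configurations under a fixed time budget and pick the one with the smallest predicted uncertainty.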