
Quantifying the effects of local many-qubit errors and non-local two-qubit errors on the surface code

Added by Austin Fowler
Publication date: 2014
Field: Physics
Language: English





Topological quantum error correction codes are known to be able to tolerate arbitrary local errors given sufficient qubits. This includes correlated errors involving many local qubits. In this work, we quantify this level of tolerance, numerically studying the effects of many-qubit errors on the performance of the surface code. We find that if increasingly large area errors are at least moderately exponentially suppressed, arbitrarily reliable quantum computation can still be achieved with practical overhead. We furthermore quantify the effect of non-local two-qubit correlated errors, which would be expected in arrays of qubits coupled by a polynomially decaying interaction, and when using many-qubit coupling devices. Surprisingly, we find that the surface code is very robust to this class of errors, despite a provable lack of a threshold error rate when such errors are present.
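
The error model described above can be made concrete with a small Monte Carlo sketch. This is a minimal illustration, not the paper's simulator: the block-shaped errors, the lattice size, and the parameters p1 and s (the suppression factor per unit of error area) are assumptions chosen only to show what "moderately exponentially suppressed area errors" means in practice.

```python
# Toy sketch: sample correlated "area" errors on an L x L qubit patch, assuming
# an error covering a k x k block occurs with probability p_k = p1 * s**(k*k - 1),
# i.e. exponentially suppressed in the error area.  Illustrative only.
import numpy as np

rng = np.random.default_rng(0)

def sample_errors(L=25, p1=1e-3, s=0.3, k_max=4):
    """Return a boolean LxL array of flipped data qubits for one round."""
    flips = np.zeros((L, L), dtype=bool)
    for k in range(1, k_max + 1):
        p_k = p1 * s ** (k * k - 1)                  # suppression with area k*k
        # each possible top-left corner hosts a k x k error independently
        hits = rng.random((L - k + 1, L - k + 1)) < p_k
        for i, j in zip(*np.nonzero(hits)):
            flips[i:i + k, j:j + k] ^= True
    return flips

rounds = 2000
mean_weight = np.mean([sample_errors().sum() for _ in range(rounds)])
print(f"average flipped qubits per round: {mean_weight:.2f}")
```

Feeding such samples to a surface-code decoder (rather than just counting flips) is what the study above does at scale; the sketch only shows how the suppressed-area error distribution is generated.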





Scalable quantum information processing requires the ability to tune multi-qubit interactions. This makes the precise manipulation of quantum states particularly difficult for multi-qubit interactions because tunability unavoidably introduces sensitivity to fluctuations in the tuned parameters, leading to erroneous multi-qubit gate operations. The performance of quantum algorithms may be severely compromised by coherent multi-qubit errors. It is therefore imperative to understand how these fluctuations affect multi-qubit interactions and, more importantly, to mitigate their influence. In this study, we demonstrate how to implement dynamical-decoupling techniques to suppress the two-qubit analogue of dephasing on a superconducting quantum device featuring a compact tunable coupler, a trending technology that enables fast manipulation of qubit-qubit interactions. The pure-dephasing time shows up to a ~14-fold enhancement on average when using robust sequences. The results are in good agreement with the noise generated from room-temperature circuits. Our study further reveals the decohering processes associated with tunable couplers and establishes a framework to develop gates and sequences robust against two-qubit errors.
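
The refocusing idea behind the sequences mentioned above can be shown with a toy calculation: under quasi-static fluctuations of a residual ZZ coupling J, a single echo pulse on one qubit at mid-sequence inverts the sign of the accumulated two-qubit phase. The Gaussian noise scale and evolution time below are assumptions, not the device's measured noise spectrum.

```python
# Minimal sketch (not the device-level experiment): quasi-static ZZ fluctuations
# dephase a two-qubit superposition; an echo on one qubit refocuses the ZZ phase.
import numpy as np

rng = np.random.default_rng(1)
shots, T, sigma_J = 5000, 1.0, 2.0 * np.pi         # arbitrary units

J = rng.normal(0.0, sigma_J, shots)                # one quasi-static J per shot

phase_free = J * T                                 # free evolution: phi = J * T
phase_echo = J * T / 2 - J * T / 2                 # echo flips the ZZ sign at T/2

coh_free = np.abs(np.mean(np.exp(1j * phase_free)))
coh_echo = np.abs(np.mean(np.exp(1j * phase_echo)))
print(f"coherence without echo: {coh_free:.3f}, with echo: {coh_echo:.3f}")
```

Real sequences must also cope with noise that fluctuates during the sequence, which is why robust multi-pulse sequences outperform a single echo in the experiment.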
Valerio Scarani (2008)
The local and non-local contents of non-local probability distributions are studied using the approach of Elitzur, Popescu and Rohrlich [Phys. Lett. A 162, 25 (1992)]. This work focuses on distributions that can be obtained by single-copy von Neumann measurements on bipartite quantum systems. For pure two-qubit states $\Psi(\theta) = \cos\theta\,|00\rangle + \sin\theta\,|11\rangle$, with $\cos\theta \ge \sin\theta$, the local content of the corresponding probability distribution is found to lie between $1 - \sin 2\theta$ and $\cos 2\theta$. For the family $\Psi(\gamma) = (|00\rangle + |11\rangle + \gamma\,|22\rangle)/\sqrt{2+\gamma^2}$ of two-qutrit states, non-zero local content is found for $\gamma > 2$.
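
For context, the EPR2 approach cited above decomposes the observed statistics into a local and a non-local part; the local content is the largest weight any such decomposition can place on the local part. The notation below is the standard form of that decomposition, not text taken from the paper itself:

$$ P(a,b\,|\,x,y) \;=\; p_L\, P_{\mathrm{loc}}(a,b\,|\,x,y) \;+\; (1-p_L)\, P_{\mathrm{NL}}(a,b\,|\,x,y), \qquad p_L^{\max} \;=\; \sup_{\text{decompositions}} p_L . $$

For $\Psi(\theta)$ above, the reported bounds read $1 - \sin 2\theta \le p_L^{\max} \le \cos 2\theta$.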
Reducing measurement errors in multi-qubit quantum devices is critical for performing any quantum algorithm. Here we show how to mitigate measurement errors by a classical post-processing of the measured outcomes. Our techniques apply to any experiment where measurement outcomes are used for computing expected values of observables. Two error mitigation schemes are presented based on tensor product and correlated Markovian noise models. Error rates parameterizing these noise models can be extracted from the measurement calibration data using a simple formula. Error mitigation is achieved by applying the inverse noise matrix to a probability vector that represents the outcomes of a noisy measurement. The error mitigation overhead, including the number of measurements and the cost of the classical post-processing, is exponential in $\epsilon n$, where $\epsilon$ is the maximum error rate and $n$ is the number of qubits. We report experimental demonstration of our error mitigation methods on IBM Quantum devices using stabilizer measurements for graph states with $n \le 12$ qubits and entangled 20-qubit states generated by low-depth random Clifford circuits.
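
A minimal sketch of the tensor-product variant described above, assuming per-qubit readout error rates are already known from calibration (the function names and example rates are illustrative, not the paper's code):

```python
# Build per-qubit 2x2 confusion matrices, take their tensor product, and apply
# the inverse to the measured outcome distribution to get a mitigated vector.
import numpy as np
from functools import reduce

def confusion_matrix(eps0, eps1):
    """Columns: true bit 0/1; rows: measured bit 0/1."""
    return np.array([[1 - eps0, eps1],
                     [eps0, 1 - eps1]])

def mitigate(p_noisy, error_rates):
    """p_noisy: length-2^n vector of measured outcome frequencies.
       error_rates: list of (eps0, eps1) pairs, one per qubit."""
    A = reduce(np.kron, [confusion_matrix(e0, e1) for e0, e1 in error_rates])
    return np.linalg.solve(A, p_noisy)   # may contain small negative entries

# Example: 2 qubits prepared in |00>, assumed 2% / 5% readout errors per qubit.
rates = [(0.02, 0.05), (0.02, 0.05)]
A = reduce(np.kron, [confusion_matrix(e0, e1) for e0, e1 in rates])
p_noisy = A @ np.array([1.0, 0.0, 0.0, 0.0])
print(mitigate(p_noisy, rates))          # recovers ~[1, 0, 0, 0]
```

The correlated Markovian scheme replaces the tensor-product matrix with one that couples neighbouring qubits; the post-processing step (inverting the noise matrix) is the same.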
We study the local unitary equivalence of two- and three-qubit mixed states by investigating the invariants under local unitary transformations. For the two-qubit system, we prove that determining the local unitary equivalence of arbitrary two-qubit states requires at most 14 invariants. Using the same method, we construct invariants for three-qubit mixed states. We prove that these invariants are sufficient to guarantee the LU equivalence of a certain class of three-qubit states. We also compare our results with earlier works.
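
As a numerical illustration of what "invariant under local unitaries" means, the sketch below checks a few quantities that are manifestly unchanged by $U_A \otimes U_B$ rotations (moments of the state and of a reduced state). These are not the paper's complete set of 14 invariants, only a simple consistency check.

```python
# Illustrative LU-invariant quantities for a two-qubit mixed state.
import numpy as np

rng = np.random.default_rng(2)

def random_unitary(n):
    """Random unitary via QR of a complex Gaussian matrix."""
    M = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    Q, _ = np.linalg.qr(M)
    return Q

def random_two_qubit_state():
    M = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
    rho = M @ M.conj().T
    return rho / np.trace(rho)

def simple_lu_invariants(rho):
    """Moments of rho and its reduced state: unchanged under U_A (x) U_B."""
    rho_A = np.trace(rho.reshape(2, 2, 2, 2), axis1=1, axis2=3)   # trace out qubit B
    return [np.trace(np.linalg.matrix_power(rho, k)).real for k in (2, 3)] + \
           [np.trace(rho_A @ rho_A).real]

rho = random_two_qubit_state()
U = np.kron(random_unitary(2), random_unitary(2))
rho_lu = U @ rho @ U.conj().T
print(np.allclose(simple_lu_invariants(rho), simple_lu_invariants(rho_lu)))   # True
```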
Robust qubit memory is essential for quantum computing, both for near-term devices operating without error correction, and for the long-term goal of a fault-tolerant processor. We directly measure the memory error $\epsilon_m$ for a $^{43}$Ca$^+$ trapped-ion qubit in the small-error regime and find $\epsilon_m < 10^{-4}$ for storage times $t \lesssim 50\,\mathrm{ms}$. This exceeds gate or measurement times by three orders of magnitude. Using randomized benchmarking, at $t = 1\,\mathrm{ms}$ we measure $\epsilon_m = 1.2(7)\times 10^{-6}$, around ten times smaller than that extrapolated from the $T_2^{\ast}$ time, and limited by instability of the atomic clock reference used to benchmark the qubit.
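
A hedged sketch of how a randomized-benchmarking-style memory error is typically extracted: fit the survival probability to an exponential decay and convert the per-step decay to an error rate. The synthetic data, time units, and parameter values below are made up for the example and are not the paper's data.

```python
# Fit F(t) = A * p**t + B and report the error per unit time r = (1 - p) / 2
# (single-qubit average-error convention).  Purely illustrative.
import numpy as np
from scipy.optimize import curve_fit

def model(t, A, p, B):
    return A * p ** t + B

t = np.linspace(0, 50, 11)                         # storage time in ms (assumed)
true_p = np.exp(-2e-4)                             # assumed per-ms decay
F = model(t, 0.5, true_p, 0.5) + np.random.default_rng(5).normal(0, 1e-4, t.size)

(A, p, B), _ = curve_fit(model, t, F, p0=[0.5, 0.999, 0.5])
print(f"memory error per ms ~ {(1 - p) / 2:.2e}")
```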
