
Probing Qubit Memory Errors at the Part-per-Million Level

Added by Martin Sepiol
Publication date: 2019
Fields: Physics
Language: English





Robust qubit memory is essential for quantum computing, both for near-term devices operating without error correction, and for the long-term goal of a fault-tolerant processor. We directly measure the memory error $\epsilon_m$ for a $^{43}$Ca$^+$ trapped-ion qubit in the small-error regime and find $\epsilon_m<10^{-4}$ for storage times $t\lesssim 50\,\mbox{ms}$. This exceeds gate or measurement times by three orders of magnitude. Using randomized benchmarking, at $t=1\,\mbox{ms}$ we measure $\epsilon_m=1.2(7)\times10^{-6}$, around ten times smaller than that extrapolated from the $T_{2}^{\ast}$ time, and limited by instability of the atomic clock reference used to benchmark the qubit.
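As a rough illustration of how a per-interval memory error can be extracted from randomized-benchmarking data (this is a minimal sketch with synthetic data, not the paper's analysis; the decay model and noise level are assumptions):

```python
import numpy as np

# Model: survival probability p(t) = 0.5 + 0.5*exp(-2*eps_rate*t),
# so for small t the memory error is eps_m(t) ~ eps_rate * t.
rng = np.random.default_rng(0)
eps_rate = 1.2e-6            # assumed error per ms (the paper's t = 1 ms figure)
t = np.linspace(1, 50, 25)   # storage times in ms

p = 0.5 + 0.5 * np.exp(-2 * eps_rate * t)   # ideal survival probabilities
p_obs = p + rng.normal(0, 1e-7, t.size)     # small synthetic measurement noise

# Linearise: log(2*p - 1) = -2*eps_rate*t, then least-squares fit the slope.
slope = np.polyfit(t, np.log(2 * p_obs - 1), 1)[0]
eps_fit = -slope / 2
print(f"fitted memory error per ms: {eps_fit:.2e}")
```

The fit recovers the assumed error rate; in practice the benchmarking sequences and SPAM-error handling are considerably more involved.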



Related research

We report observations of the linear polarisation of a sample of 50 nearby southern bright stars measured to a median sensitivity of $\sim4.4\times 10^{-6}$. We find larger polarisations and more highly polarised stars than in the previous PlanetPol survey of northern bright stars. This is attributed to a dustier interstellar medium in the mid-plane of the Galaxy, together with a population containing more B-type stars leading to more intrinsically polarised stars, as well as using a wavelength more sensitive to intrinsic polarisation in late-type giants. Significant polarisation had previously been identified for only six stars in the survey group, whereas we are now able to deduce intrinsic polarigenic mechanisms for more than twenty. The four most highly polarised stars in the sample are the four classical Be stars ($\alpha$ Eri, $\alpha$ Col, $\eta$ Cen and $\alpha$ Ara). For the three of these objects resolved by interferometry, the position angles are consistent with the determined orientation of the circumstellar disc. We find significant intrinsic polarisation in most B stars in the sample; amongst these are a number of close binaries and an unusual binary debris disk system. However, these circumstances do not account for the high polarisations of all the B stars in the sample, and other polarigenic mechanisms are explored. Intrinsic polarisation is also apparent in several late-type giants, which can be attributed to either close, hot circumstellar dust or bright spots in the photosphere of these stars. Aside from a handful of notable debris disk systems, the majority of A to K type stars show polarisation levels consistent with interstellar polarisation.
Topological quantum error correction codes are known to be able to tolerate arbitrary local errors given sufficient qubits. This includes correlated errors involving many local qubits. In this work, we quantify this level of tolerance, numerically studying the effects of many-qubit errors on the performance of the surface code. We find that if increasingly large area errors are at least moderately exponentially suppressed, arbitrarily reliable quantum computation can still be achieved with practical overhead. We furthermore quantify the effect of non-local two-qubit correlated errors, which would be expected in arrays of qubits coupled by a polynomially decaying interaction, and when using many-qubit coupling devices. Surprisingly, we find that the surface code is very robust to this class of errors, despite a provable lack of a threshold error rate when such errors are present.
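The "practical overhead" claim rests on the textbook surface-code scaling of the logical error rate with code distance. The following back-of-envelope sketch (not from the paper; the threshold $p_{th}\sim 1\%$ and the prefactor 0.1 are assumed, illustrative values) estimates the distance needed to hit a target logical error rate:

```python
# Standard heuristic: p_L ~ 0.1 * (p_phys / p_th)**((d + 1) // 2).
# p_th and the 0.1 prefactor are assumed round numbers, not fitted values.
def distance_for_target(p_phys, p_target, p_th=1e-2):
    d = 3
    while 0.1 * (p_phys / p_th) ** ((d + 1) // 2) > p_target:
        d += 2   # surface-code distances are odd
    return d

# e.g. physical error rate 2e-3, target logical error rate 1e-12
print(distance_for_target(2e-3, 1e-12))   # -> 31
```

Correlated or area errors effectively weaken the exponent in this suppression, which is why the degree of their own exponential suppression, quantified in the work above, matters.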
In the quantum anomalous Hall effect, quantized Hall resistance and vanishing longitudinal resistivity are predicted to result from the presence of dissipationless, chiral edge states and an insulating 2D bulk, without requiring an external magnetic field. Here, we explore the potential of this effect in magnetic topological insulator thin films for metrological applications. Using a cryogenic current comparator system, we measure quantization of the Hall resistance to within one part per million and longitudinal resistivity under 10 m$\Omega$ per square at zero magnetic field. Increasing the current density past a critical value leads to a breakdown of the quantized, low-dissipation state, which we attribute to electron heating in bulk current flow. We further investigate the pre-breakdown regime by measuring transport dependence on temperature, current, and geometry, and find evidence for bulk dissipation, including thermal activation and possible variable-range hopping.
The part-per-million measurement of the positive muon lifetime and determination of the Fermi constant by the MuLan experiment at the Paul Scherrer Institute is reviewed. The experiment used an innovative, time-structured, surface muon beam and a near-$4\pi$, finely-segmented, plastic scintillator positron detector. Two in-vacuum muon stopping targets were used: a ferromagnetic foil with a large internal magnetic field, and a quartz crystal in a moderate external magnetic field. The experiment obtained a muon lifetime of 2 196 980.3(2.2) ps (1.0 ppm) and a Fermi constant of $1.166\,378\,7(6)\times 10^{-5}\,\mathrm{GeV}^{-2}$ (0.5 ppm). The thirty-fold improvement in the muon lifetime precision has proven valuable for precision measurements in nuclear muon capture, and the commensurate improvement in the Fermi constant has proven valuable for precision tests of the standard model.
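The lifetime and the Fermi constant are linked, at tree level in natural units, by $\Gamma = G_F^2 m_\mu^5 / (192\pi^3)$. A quick cross-check (this deliberately omits the known QED and phase-space corrections, so it lands about 0.5% away from the measured lifetime):

```python
import math

G_F = 1.1663787e-5        # Fermi constant, GeV^-2 (MuLan value)
m_mu = 0.1056583745       # muon mass, GeV
hbar = 6.582119569e-25    # GeV*s

gamma = G_F**2 * m_mu**5 / (192 * math.pi**3)   # tree-level decay rate, GeV
tau = hbar / gamma                              # lifetime in seconds
print(f"tree-level muon lifetime: {tau * 1e6:.4f} us")
```

The result is close to, but measurably below, the quoted 2.196 980 3 us, which is exactly why the full analysis must include the radiative corrections.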
Reducing measurement errors in multi-qubit quantum devices is critical for performing any quantum algorithm. Here we show how to mitigate measurement errors by a classical post-processing of the measured outcomes. Our techniques apply to any experiment where measurement outcomes are used for computing expected values of observables. Two error mitigation schemes are presented, based on tensor product and correlated Markovian noise models. Error rates parameterizing these noise models can be extracted from the measurement calibration data using a simple formula. Error mitigation is achieved by applying the inverse noise matrix to a probability vector that represents the outcomes of a noisy measurement. The error mitigation overhead, including the number of measurements and the cost of the classical post-processing, is exponential in $\epsilon n$, where $\epsilon$ is the maximum error rate and $n$ is the number of qubits. We report experimental demonstration of our error mitigation methods on IBM Quantum devices using stabilizer measurements for graph states with $n\le 12$ qubits and entangled 20-qubit states generated by low-depth random Clifford circuits.
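A minimal sketch of the tensor-product scheme described above, with invented flip probabilities standing in for calibration data: each qubit's readout is modelled by a 2x2 stochastic matrix, the full noise matrix is their Kronecker product, and mitigation applies its inverse to the measured probability vector.

```python
import numpy as np
from functools import reduce

p01 = [0.02, 0.03]   # assumed P(read 1 | prepared 0), per qubit
p10 = [0.04, 0.05]   # assumed P(read 0 | prepared 1), per qubit

# Per-qubit column-stochastic readout matrices A_i.
A = [np.array([[1 - a, b], [a, 1 - b]]) for a, b in zip(p01, p10)]
A_full = reduce(np.kron, A)                 # 4x4 noise matrix for 2 qubits

p_ideal = np.array([1.0, 0.0, 0.0, 0.0])    # ideal outcome distribution |00>
p_noisy = A_full @ p_ideal                  # distribution the device reports
p_mitigated = np.linalg.inv(A_full) @ p_noisy   # apply the inverse noise matrix
print(np.round(p_mitigated, 6))             # recovers the ideal distribution
```

With finite shot counts the mitigated vector can have small negative entries, which is one source of the exponential-in-$\epsilon n$ overhead the abstract mentions.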
