
Error-compensation measurements on polarization qubits

Added by Zhibo Hou
Publication date: 2015
Field: Physics
Language: English





Systematic errors are inevitable in most real-world measurements because of imperfect measurement devices. Reducing systematic errors is crucial to ensuring the accuracy and reliability of measurement results. To this end, careful error-compensation design is often necessary, in addition to device calibration, to reduce the dependence of the systematic error on the imperfections of the devices. The art of error-compensation design is well appreciated in nuclear magnetic resonance systems through the use of composite pulses. In contrast, there are few works on reducing systematic errors in quantum optical systems. Here we propose an error-compensation design that reduces the systematic error of projective measurements on a polarization qubit. It suppresses the systematic error to second order in the phase errors of both the half-wave plate (HWP) and the quarter-wave plate (QWP), as well as in the angle error of the HWP. Applying this technique to quantum state tomography experiments on polarization qubits leads to a 20-fold reduction in the systematic error. Our study may find applications in high-precision tasks in polarization optics and quantum optics.
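
As a rough illustration of where such systematic errors come from, the sketch below is a generic Jones-matrix model of the standard QWP-HWP-polarizer analyzer for a polarization qubit; it is not the compensation scheme of the paper, and the wave-plate settings, input state, and error magnitudes are assumed for illustration. It prints how the measured projection probability shifts when small retardance and angle errors are introduced.

# Generic Jones-matrix sketch (assumed settings and error values, not the
# paper's compensation scheme): model a QWP -> HWP -> polarizer analyzer and
# quantify how small wave-plate errors shift the measured projection probability.
import numpy as np

def rot(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def waveplate(theta, delta):
    # Jones matrix of a wave plate with fast axis at angle theta and retardance delta.
    return rot(theta) @ np.diag([1.0, np.exp(1j * delta)]) @ rot(-theta)

def analyzer_prob(state, th_q, th_h, eps_q=0.0, eps_h=0.0, dth_h=0.0):
    # Transmission probability through QWP(th_q), HWP(th_h) and an H-polarizer,
    # with retardance errors eps_q, eps_h and an HWP angle error dth_h.
    qwp = waveplate(th_q, np.pi / 2 + eps_q)
    hwp = waveplate(th_h + dth_h, np.pi + eps_h)
    out = hwp @ qwp @ state
    return abs(out[0]) ** 2

# Settings projecting onto |D>; a generic input state gives a probability away
# from 0 or 1, where the sensitivity to wave-plate errors is largest.
psi = np.array([np.cos(0.6), np.sin(0.6) * np.exp(0.4j)])
p0 = analyzer_prob(psi, th_q=np.pi / 4, th_h=np.pi / 8)
for err in (1e-3, 2e-3, 4e-3):
    p = analyzer_prob(psi, th_q=np.pi / 4, th_h=np.pi / 8, eps_h=err, dth_h=err)
    print(f"error {err:.0e}: probability shift {abs(p - p0):.2e}")

In this uncompensated configuration the shift grows linearly with the error; the design described in the abstract arranges the measurement so that the leading dependence on these errors is second order instead.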



Related research

The Eastin-Knill theorem states that no quantum error correcting code can have a universal set of transversal gates. For self-dual CSS codes that can implement Clifford gates transversally, it suffices to provide one additional non-Clifford gate, such as the $T$-gate, to achieve universality. Common methods to implement fault-tolerant $T$-gates, like magic state distillation, generate a significant hardware overhead that will likely prevent their practical usage in the near-term future. Recently, methods have been developed to mitigate the effect of noise in shallow quantum circuits that are not protected by error correction. Error mitigation methods require no additional hardware resources but suffer from poor asymptotic scaling and apply only to a restricted class of quantum algorithms. In this work, we combine both approaches and show how to implement encoded Clifford+$T$ circuits where Clifford gates are protected from noise by error correction while errors introduced by noisy encoded $T$-gates are mitigated using the quasi-probability method. As a result, Clifford+$T$ circuits with a number of $T$-gates inversely proportional to the physical noise rate can be implemented on small error-corrected devices without magic state distillation. We argue that such circuits can be out of reach for state-of-the-art classical simulation algorithms.
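
For readers unfamiliar with the quasi-probability method, the following sketch cancels the noise after a single T gate by sampling Pauli corrections with signed weights and rescaling by the total weight gamma. The depolarizing noise model, the error rate, the observable, and the input state are assumed for illustration and are unrelated to the encoded setting of the paper; in hardware each sample would be a single-shot measurement outcome rather than an exact expectation value.

# Minimal sketch of quasi-probability error mitigation for one T gate followed
# by a depolarizing error (noise model and parameters assumed for illustration).
import numpy as np

rng = np.random.default_rng(1)
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
T = np.diag([1.0, np.exp(1j * np.pi / 4)])

def depolarize(rho, p):
    # Depolarizing channel with error probability p.
    return (1 - p) * rho + (p / 3) * (X @ rho @ X + Y @ rho @ Y + Z @ rho @ Z)

def exp_x(rho):
    return float(np.real(np.trace(X @ rho)))

p = 0.05                      # assumed physical error rate of the noisy T gate
f = 1 - 4 * p / 3             # damping of the Pauli components under the channel
c1 = (3 / f + 1) / 4          # quasi-probability weight of "apply no correction"
c2 = 1 - c1                   # total (negative) weight of the three Pauli corrections
quasi = [(I2, c1), (X, c2 / 3), (Y, c2 / 3), (Z, c2 / 3)]
gamma = sum(abs(q) for _, q in quasi)              # sampling-overhead factor

plus = np.full((2, 2), 0.5, dtype=complex)         # |+><+|
ideal_rho = T @ plus @ T.conj().T
noisy_rho = depolarize(ideal_rho, p)

# Sample a correction, apply it after the noisy gate, and reweight by gamma * sign.
probs = np.array([abs(q) for _, q in quasi]) / gamma
estimates = []
for _ in range(20000):
    k = rng.choice(4, p=probs)
    P, q = quasi[k]
    corrected = P @ noisy_rho @ P
    estimates.append(gamma * np.sign(q) * exp_x(corrected))

print(f"ideal <X> = {exp_x(ideal_rho):.4f}, noisy <X> = {exp_x(noisy_rho):.4f}, "
      f"mitigated <X> = {np.mean(estimates):.4f}")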
We investigate quantum error correction using continuous parity measurements to correct bit-flip errors with the three-qubit code. Continuous monitoring of errors brings the benefit of a continuous stream of information, which facilitates passive error tracking in real time. It reduces overhead from the standard gate-based approach that periodically entangles and measures additional ancilla qubits. However, the noisy analog signals from continuous parity measurements mandate more complicated signal processing to interpret syndromes accurately. We analyze the performance of several practical filtering methods for continuous error correction and demonstrate that they are viable alternatives to the standard ancilla-based approach. As an optimal filter, we discuss an unnormalized (linear) Bayesian filter, with improved computational efficiency compared to the related Wonham filter introduced by Mabuchi [New J. Phys. 11, 105044 (2009)]. We compare this optimal continuous filter to two practical variations of the simplest periodic boxcar-averaging-and-thresholding filter, targeting real-time hardware implementations with low-latency circuitry. As variations, we introduce a non-Markovian "half-boxcar" filter and a Markovian filter with a second adjustable threshold; these filters eliminate the dominant source of error in the boxcar filter, and compare favorably to the optimal filter. For each filter, we derive analytic results for the decay in average fidelity and verify them with numerical simulations.
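
As a toy version of the simplest filter mentioned above, the sketch below boxcar-averages two noisy continuous parity signals, thresholds each average to a +/-1 syndrome, and looks up the corresponding bit-flip correction for the three-qubit code. The signal model, noise level, averaging window, and flip time are all assumed for illustration and are not the paper's parameters.

# Toy boxcar-averaging-and-thresholding filter for continuous two-qubit parity
# signals of the three-qubit bit-flip code (signal model and parameters assumed).
import numpy as np

rng = np.random.default_rng(0)
dt, t_run, tau = 1e-3, 1.0, 0.05      # time step, total time, boxcar window (assumed)
steps, win = int(t_run / dt), int(tau / dt)
sigma = 0.04                          # noise level, chosen so one window resolves a parity

# True error record: a single bit flip on qubit 1 halfway through the run.
flips = np.zeros((3, steps), dtype=int)
flips[1, steps // 2:] = 1
parity_01 = 1 - 2 * ((flips[0] + flips[1]) % 2)   # ideal Z0Z1 parity, +/-1
parity_12 = 1 - 2 * ((flips[1] + flips[2]) % 2)   # ideal Z1Z2 parity, +/-1

noisy_01 = parity_01 + sigma * rng.standard_normal(steps) / np.sqrt(dt)
noisy_12 = parity_12 + sigma * rng.standard_normal(steps) / np.sqrt(dt)

def boxcar_syndromes(signal, win):
    # Average consecutive windows and threshold at zero: one +/-1 syndrome per window.
    n = len(signal) // win
    means = signal[: n * win].reshape(n, win).mean(axis=1)
    return np.where(means > 0, 1, -1)

s01, s12 = boxcar_syndromes(noisy_01, win), boxcar_syndromes(noisy_12, win)
# Syndrome table for the bit-flip code: which single qubit (if any) to flip back.
correction = {(1, 1): None, (-1, 1): 0, (-1, -1): 1, (1, -1): 2}
for k in (0, len(s01) - 1):
    a, b = int(s01[k]), int(s12[k])
    c = correction[(a, b)]
    print(f"window {k}: syndrome ({a:+d}, {b:+d}) -> "
          + ("no correction" if c is None else f"flip qubit {c}"))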
Optimization or sampling of arbitrary pairwise Ising models, in a quantum annealing protocol of constrained interaction topology, can be enabled by a minor-embedding procedure. The logical problem of interest is transformed to a physical (device programmable) problem, where one binary variable is represented by a logical qubit consisting of multiple physical qubits. In this paper we discuss tuning of this transformation for the cases of clique, biclique, and cubic lattice problems on the D-Wave 2000Q quantum computer. We demonstrate parameter tuning protocols in spin glasses and channel communication problems, focusing on anneal duration, chain strength, and mapping from the result on physical qubits back to the logical space. Inhomogeneities in effective coupling strength arising from minor-embedding are shown to be mitigated by an efficient reweighting of programmed couplings, accounting for logical qubit topology.
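
One concrete piece of the logical-to-physical mapping discussed above is resolving broken chains when reading results back out. The sketch below shows the simple majority-vote rule, which is one common choice rather than the specific protocol of the paper; the embedding and the sample are made up for illustration and are not output from an actual D-Wave device.

# Majority-vote unembedding of a physical sample back to logical variables.
# The embedding and the sample below are made up for illustration only.
import random
from collections import Counter

# Hypothetical minor-embedding: logical variable -> chain of physical qubits.
embedding = {0: [10, 11, 12], 1: [20, 21], 2: [30, 31, 32, 33]}

# One hypothetical annealer read: physical qubit -> spin value in {-1, +1}.
sample = {10: +1, 11: +1, 12: -1,           # broken chain, majority +1
          20: -1, 21: -1,                   # intact chain
          30: +1, 31: -1, 32: -1, 33: +1}   # tied chain

def majority_vote(sample, embedding, rng=random):
    # Collapse each chain to a single logical spin; break ties uniformly at random.
    logical = {}
    for var, chain in embedding.items():
        counts = Counter(sample[q] for q in chain)
        if counts[+1] != counts[-1]:
            logical[var] = +1 if counts[+1] > counts[-1] else -1
        else:
            logical[var] = rng.choice([-1, +1])
    return logical

print(majority_vote(sample, embedding))     # e.g. {0: 1, 1: -1, 2: random tie-break}

Tuning the chain strength, as discussed in the abstract, trades off how often such chain breaks occur against how strongly the chains distort the logical problem being solved.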
Experimental realization of stabilizer-based quantum error correction (QEC) codes that would yield superior logical qubit performance is one of the most formidable tasks for state-of-the-art quantum processors. A major obstacle towards realizing this goal is the large footprint of QEC codes, even those with a small distance. We propose a circuit based on the minimal distance-3 QEC code, which requires only 5 data qubits and 5 ancilla qubits, connected in a ring with iSWAP gates implemented between neighboring qubits. Using a density-matrix simulation, we show that, thanks to its smaller footprint, the proposed code has a lower logical error rate than Surface-17 for similar physical error rates. We also estimate the performance of a neural network-based error decoder, which can be trained on experimental data to accommodate the error statistics of a specific quantum processor.
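
The minimal distance-3 code with five data qubits is, presumably, the standard [[5,1,3]] five-qubit code (an assumption; the abstract does not spell it out). The sketch below lists its usual stabilizer generators and verifies, via the binary symplectic representation, that every single-qubit Pauli error yields a distinct nonzero syndrome, which is what makes distance-3 error correction possible.

# Syndromes of the [[5,1,3]] five-qubit code (assumed to be the minimal
# distance-3 code referred to above) for all single-qubit Pauli errors.
import numpy as np

STABILIZERS = ["XZZXI", "IXZZX", "XIXZZ", "ZXIXZ"]   # standard cyclic generators

def to_symplectic(pauli):
    # Binary (x | z) representation of a Pauli string.
    x = np.array([c in "XY" for c in pauli], dtype=int)
    z = np.array([c in "ZY" for c in pauli], dtype=int)
    return x, z

def syndrome(error):
    # One bit per stabilizer: 0 if the error commutes with it, 1 if it anticommutes.
    ex, ez = to_symplectic(error)
    bits = []
    for s in STABILIZERS:
        sx, sz = to_symplectic(s)
        bits.append(int((ex @ sz + ez @ sx) % 2))
    return tuple(bits)

errors = ["I" * q + p + "I" * (4 - q) for q in range(5) for p in "XYZ"]
table = {e: syndrome(e) for e in errors}
# All 15 single-qubit errors are distinguishable and none looks like "no error".
assert len(set(table.values())) == 15 and (0, 0, 0, 0) not in table.values()
for e in ("XIIII", "IIYII", "IIIIZ"):
    print(e, "->", table[e])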
We provide a systematic way of constructing entanglement-assisted quantum error-correcting codes via graph states in the scenario of preexisting perfectly protected qubits. It turns out that the preexisting entanglement can help beat the quantum Hamming bound and can enhance (rather than merely assist) the performance of the quantum error correction. Furthermore, we generalize the error models to the case of not-so-perfectly protected qubits, introduce infidelity as a figure of merit, and show that our code also outperforms ordinary quantum error-correcting codes.
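
For reference, the quantum Hamming bound mentioned above states that a nondegenerate [[n, k, d]] code correcting t = (d - 1) // 2 errors must satisfy sum over j from 0 to t of 3^j * C(n, j) * 2^k <= 2^n. The small check below evaluates this standard bound; entanglement assistance, as in the paper, changes this counting and is not modeled here.

# Check of the nondegenerate quantum Hamming bound for an [[n, k, d]] code:
# sum_{j=0}^{t} 3**j * C(n, j) * 2**k <= 2**n, with t = (d - 1) // 2.
from math import comb

def satisfies_quantum_hamming_bound(n, k, d):
    t = (d - 1) // 2
    return sum(3**j * comb(n, j) for j in range(t + 1)) * 2**k <= 2**n

print(satisfies_quantum_hamming_bound(5, 1, 3))   # True: the five-qubit code saturates it
print(satisfies_quantum_hamming_bound(4, 1, 3))   # False: violates the bound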
