Quantum tomography makes it possible to obtain comprehensive information about certain logical elements of a quantum computer. In this regard, it is a promising tool for debugging quantum computers. The practical application of tomography, however, is still limited by systematic measurement errors, whose main sources are errors in the quantum state preparation and measurement procedures. In this work, we investigate the possibility of suppressing these errors in the case of ion-based qudits. First, we show that one can construct a quantum measurement protocol that contains no more than a single quantum operation in each measurement circuit. Such a protocol is more robust to errors than measurements in mutually unbiased bases, where the number of operations grows in proportion to the square of the qudit dimension. We then demonstrate the possibility of determining and accounting for the state initialization and readout errors. Together, these measures can significantly improve the accuracy of quantum tomography of real ion-based qudits.
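For concreteness, one generic way to account for readout errors (the exact procedure of the paper may differ) is to calibrate a confusion matrix from known input states and invert it on the measured outcome frequencies. The matrix and frequencies below are purely illustrative:

```python
import numpy as np

# Calibrated confusion matrix: M[i, j] = P(read outcome i | prepared state j).
# These numbers are illustrative assumptions, not measured device values.
M = np.array([[0.98, 0.03],
              [0.02, 0.97]])

observed = np.array([0.55, 0.45])          # measured outcome frequencies
corrected = np.linalg.solve(M, observed)   # estimate of the true probabilities
corrected = np.clip(corrected, 0, None)    # clip tiny negatives from noise
corrected /= corrected.sum()               # renormalize to a distribution
print(np.round(corrected, 4))
```

In practice the inversion is often replaced by a constrained least-squares fit so that the corrected vector is guaranteed to be a valid probability distribution even with statistical fluctuations.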
Quantum state tomography (QST) is an essential tool for characterizing an unknown quantum state. Recently, QST has been performed for entangled qudits based on orbital angular momentum, time-energy uncertainty, and frequency bins. Here, we propose a QST scheme for time-bin qudits in which the number of measurement settings scales linearly with the dimension $d$. Using the proposed scheme, we performed QST on a four-dimensional time-bin maximally entangled state with 16 measurement settings. We successfully reconstructed the density matrix of the entangled qudits, from which the average fidelity of the state was calculated to be 0.950.
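As a toy illustration of the reconstruction and fidelity steps (using a single qubit with Pauli observables as hypothetical stand-ins for the paper's time-bin measurement settings), linear-inversion tomography looks like this:

```python
import numpy as np

# Pauli basis (stand-ins for the actual time-bin measurement settings)
I = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# "True" state for the simulation: |+> = (|0> + |1>) / sqrt(2)
psi = np.array([1, 1], dtype=complex) / np.sqrt(2)
rho_true = np.outer(psi, psi.conj())

# Simulated measurement data: expectation values <X>, <Y>, <Z>
ex = [np.real(np.trace(rho_true @ P)) for P in (X, Y, Z)]

# Linear inversion: rho = (I + <X>X + <Y>Y + <Z>Z) / 2
rho = (I + ex[0] * X + ex[1] * Y + ex[2] * Z) / 2

# Fidelity with the target pure state: F = <psi| rho |psi>
F = np.real(psi.conj() @ rho @ psi)
print(F)  # 1.0 for noiseless data
```

With noisy data, linear inversion can produce a non-positive matrix, which is why maximum-likelihood estimation is typically used for the final density-matrix reconstruction.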
Generalizations of the classic Bell inequality to higher-dimensional quantum systems, known as qudits, are reputed to exhibit a higher degree of robustness to noise, but such claims are based on one particular noise model. We analyze the violation of the Collins-Gisin-Linden-Massar-Popescu (CGLMP) inequality subject to more realistic noise sources and their scaling with dimension. This analysis is inspired by potential Bell inequality experiments with superconducting resonator-based qudits. We find that the robustness of the inequality to noise generally decreases with increasing qudit dimension.
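To make the noise-robustness notion concrete, the sketch below evaluates the $d=2$ member of the CGLMP family (equivalent to CHSH) on a maximally entangled qubit pair mixed with white noise, the "one particular noise model" mentioned above. The measurement angles are the standard optimal CHSH settings, not results from this paper:

```python
import numpy as np

def pauli(theta):
    """Spin observable cos(theta)*Z + sin(theta)*X (measurement in the x-z plane)."""
    return np.array([[np.cos(theta), np.sin(theta)],
                     [np.sin(theta), -np.cos(theta)]])

phi = np.array([1, 0, 0, 1]) / np.sqrt(2)  # |phi+> = (|00> + |11>) / sqrt(2)

def werner(v):
    """Maximally entangled state mixed with white noise at visibility v."""
    return v * np.outer(phi, phi) + (1 - v) * np.eye(4) / 4

def chsh(rho):
    """CHSH value at the standard optimal measurement settings."""
    a0, a1, b0, b1 = 0.0, np.pi / 2, np.pi / 4, -np.pi / 4
    E = lambda a, b: np.real(np.trace(rho @ np.kron(pauli(a), pauli(b))))
    return E(a0, b0) + E(a0, b1) + E(a1, b0) - E(a1, b1)

print(round(chsh(werner(1.0)), 4))  # 2.8284 = 2*sqrt(2), the Tsirelson bound
print(round(chsh(werner(0.5)), 4))  # 1.4142: no violation below v = 1/sqrt(2)
```

The CHSH value scales linearly with the visibility $v$, so the classical bound of 2 is violated only for $v > 1/\sqrt{2}$; the paper's point is that for other, more realistic noise channels the corresponding thresholds behave differently with dimension.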
We consider realistic measurement systems in which measurements are accompanied by decoherence processes. The aim of this work is to construct methods and algorithms for precise quantum measurements with fidelity close to the fundamental limit. We give a strict formalization of the notions of ideal and non-ideal quantum measurements and show that a non-ideal quantum measurement can be represented as a mixture of ideal measurements. A quantum state reconstruction method based on the root approach is developed, and an informational accuracy theory of non-ideal quantum measurements is proposed. We examine how the amount of information about the quantum state parameters can be monitored, including an analysis of information degradation under the influence of noise, and study the achievable fidelity of non-ideal quantum measurements. We present simulation results on the fidelity characteristics of a wide class of quantum protocols based on highly symmetric polyhedron geometries. The impact of different decoherence mechanisms, including qubit amplitude and phase relaxation, bit-flip, and phase-flip errors, is considered.
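A minimal sketch of the mixture idea, using a white-noise model as an illustrative assumption rather than the paper's exact formalization: a non-ideal POVM can be written as a convex combination of an ideal projective measurement and uniformly random outcomes.

```python
import numpy as np

# Non-ideal measurement as a mixture: E_i = (1 - p) * Pi_i + p * I/d combines an
# ideal projective measurement {Pi_i} with white noise at rate p.
d, p = 2, 0.1
projectors = [np.diag([1.0, 0.0]), np.diag([0.0, 1.0])]
povm = [(1 - p) * P + p * np.eye(d) / d for P in projectors]

# The elements remain a valid POVM: positive and summing to the identity.
print(np.allclose(sum(povm), np.eye(d)))  # True
```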
In this paper, we discuss the minimal number of observables whose expectation values at some time instants determine the trajectory of a $d$-level quantum system (qudit) governed by the Gaussian semigroup. We assume that the macroscopic information about the system in question is given by the mean values of $n$ self-adjoint operators $Q_1,\ldots,Q_n$ at some time instants $t_1<t_2<\ldots<t_r$, where $n<d^2-1$ and $r\leq \deg \mu(\lambda,\mathbb{L})$. Here $\mu(\lambda,\mathbb{L})$ stands for the minimal polynomial of the generator $\mathbb{L}$ of the Gaussian flow.
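The degree of the minimal polynomial of the generator, which bounds the number of required time instants, can be computed numerically. The sketch below finds the smallest $k$ for which $I, L, \ldots, L^k$ become linearly dependent; the diagonal example generator is purely illustrative:

```python
import numpy as np

def min_poly_degree(L, tol=1e-9):
    """Degree of the minimal polynomial of L: smallest k such that
    I, L, ..., L^k are linearly dependent."""
    n = L.shape[0]
    powers = [np.eye(n).ravel()]
    for k in range(1, n + 1):
        powers.append(np.linalg.matrix_power(L, k).ravel())
        # Rank deficiency of the stacked powers signals dependence at degree k.
        if np.linalg.matrix_rank(np.array(powers), tol=tol) < k + 1:
            return k
    return n

# Example: a diagonalizable generator with a repeated eigenvalue.
L = np.diag([0.0, -1.0, -1.0])  # eigenvalues {0, -1} -> minimal polynomial degree 2
print(min_poly_degree(L))  # 2
```

For a diagonalizable generator the degree equals the number of distinct eigenvalues, which is why it can be strictly smaller than the matrix dimension.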
Trapped ions (TI) are a leading candidate for building Noisy Intermediate-Scale Quantum (NISQ) hardware. TI qubits have fundamental advantages over other technologies such as superconducting qubits, including high qubit quality, coherence and connectivity. However, current TI systems are small in size, with 5-20 qubits, and typically use a single-trap architecture which has fundamental scalability limitations. To progress towards the next major milestone of 50-100 qubits, a modular architecture termed the Quantum Charge Coupled Device (QCCD) has been proposed. In a QCCD-based TI device, small traps are connected through ion shuttling. While the basic hardware components for such devices have been demonstrated, building a 50-100 qubit system is challenging because of a wide range of design possibilities for trap sizing, communication topology and gate implementations, and the need to match diverse application resource requirements. Towards realizing QCCD systems with 50-100 qubits, we perform an extensive architectural study evaluating the key design choices of trap sizing, communication topology and operation implementation methods. We built a design toolflow which takes a QCCD architecture's parameters as input, along with a set of applications and realistic hardware performance models. Our toolflow maps the applications onto the target device and simulates their execution to compute metrics such as application run time, reliability and device noise rates. Using six applications and several hardware design points, we show that trap sizing and communication topology choices can impact application reliability by up to three orders of magnitude. Microarchitectural gate implementation choices influence reliability by another order of magnitude. From these studies, we provide concrete recommendations to tune these choices to achieve highly reliable and performant application executions.
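A toy reliability model in the spirit of this design-space study: program success probability as a product of per-operation fidelities, so that design choices which add shuttling operations directly degrade reliability. All fidelity numbers below are illustrative assumptions, not the paper's measured values:

```python
# Toy QCCD reliability model: success probability as a product of fidelities.
# f_gate and f_shuttle are illustrative placeholders, not measured values.
def program_reliability(n_gates, n_shuttles, f_gate=0.999, f_shuttle=0.995):
    return (f_gate ** n_gates) * (f_shuttle ** n_shuttles)

# More shuttling (e.g. from a sparser communication topology or smaller traps)
# lowers the end-to-end success probability:
print(round(program_reliability(1000, 50), 3))
print(round(program_reliability(1000, 500), 3))
```

Even this crude model shows why topology and trap-sizing choices compound multiplicatively, which is consistent with the orders-of-magnitude reliability spread reported in the study.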