Twirling is a technique widely used for converting arbitrary noise channels into Pauli channels in error-threshold estimations of quantum error correction codes. It is highly useful both in real experiments and in classical simulations of quantum computation. Minimising the size of the twirling gate set increases the efficiency of simulations, and in experiments it can reduce both the number of runs required and the circuit depth (and hence the error burden). Conventional twirling uses the full set of Pauli gates as the twirling gate set. This article provides a theoretical background for Pauli twirling and a way to construct a twirling gate set whose size is comparable to the size of the Pauli basis of the given error channel, which is usually much smaller than the full set of Pauli gates. We also show that twirling is equivalent to stabiliser measurements with discarded measurement results, which enables us to further reduce the size of the twirling gate set.
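To make the twirling operation concrete, here is a minimal numerical sketch (our own illustration, not code from the article): it twirls a single-qubit amplitude-damping channel over the full Pauli set and checks that the resulting process (chi) matrix is diagonal in the Pauli basis, i.e., that the twirled channel is a Pauli channel whose error rates are the diagonal of the original chi matrix. The damping strength `g = 0.1` is an arbitrary choice.

```python
import numpy as np

# Single-qubit Pauli basis {I, X, Y, Z} (all Hermitian).
P = [np.eye(2, dtype=complex),
     np.array([[0, 1], [1, 0]], dtype=complex),
     np.array([[0, -1j], [1j, 0]], dtype=complex),
     np.array([[1, 0], [0, -1]], dtype=complex)]

def chi(kraus):
    """Process matrix in the Pauli basis: Lambda(rho) = sum_ij chi_ij P_i rho P_j.
    Each Kraus operator is expanded as K = sum_i c_i P_i with c_i = Tr(P_i K)/2."""
    c = np.array([[np.trace(Pi @ K) / 2 for Pi in P] for K in kraus])
    return c.T @ c.conj()  # chi_ij = sum_k c_ki * conj(c_kj)

# Amplitude-damping channel with hypothetical strength g.
g = 0.1
kraus = [np.array([[1, 0], [0, np.sqrt(1 - g)]], dtype=complex),
         np.array([[0, np.sqrt(g)], [0, 0]], dtype=complex)]

# Pauli twirl: Lambda_T(rho) = (1/4) sum_P P Lambda(P rho P) P,
# whose Kraus operators are (1/2) P K P for every pair (P, K).
twirled = [Pi @ K @ Pi / 2 for Pi in P for K in kraus]

chi0, chiT = chi(kraus), chi(twirled)
```

The diagonal entries of `chiT` are the Pauli error rates of the twirled channel, and they coincide with the diagonal of `chi0`; a reduced twirling set, as constructed in the article, would replace the average over all four Paulis with a smaller set achieving the same diagonalisation.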
One of the main problems in quantum information systems is the presence of errors due to noise, and for this reason quantum error-correcting codes (QECCs) play a key role. While most known codes are designed for correcting generic errors, i.e., errors represented by arbitrary combinations of Pauli $X$, $Y$, and $Z$ operators, in this paper we investigate the design of stabilizer QECCs able to correct a given number $e_g$ of generic Pauli errors, plus $e_Z$ Pauli errors of a specified type, e.g., $Z$ errors. These codes can be of interest when the quantum channel is asymmetric, in that some types of error occur more frequently than others. We first derive a generalized quantum Hamming bound for such codes, then propose a design methodology based on syndrome assignments. For example, we found a [[9,1]] quantum error-correcting code able to correct up to one generic qubit error plus one $Z$ error in arbitrary positions. This, according to the generalized quantum Hamming bound, is the shortest code with the specified error correction capability. Finally, we evaluate analytically the performance of the new codes over asymmetric channels.
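As a back-of-the-envelope check of the [[9,1]] claim, one can count the distinct error operators (up to phase) that such a code must discriminate: at most $e_g$ generic single-qubit Pauli errors plus at most $e_Z$ additional $Z$ errors on other qubits. A nondegenerate $[[n,k]]$ code needs a distinct syndrome for each such operator, so the count cannot exceed $2^{n-k}$. The enumeration below is our own illustration of this Hamming-type condition and may differ in form from the paper's exact bound.

```python
from itertools import combinations, product

def distinct_error_ops(n, e_g, e_z):
    """Count distinct n-qubit Pauli operators (up to phase) formed by at most
    e_g generic single-qubit errors (X, Y or Z) plus at most e_z extra Z
    errors on other qubits. A set deduplicates operators that arise in more
    than one way (e.g., a lone Z counted as a generic error or as a Z error)."""
    ops = set()
    for i in range(e_g + 1):
        for gen_pos in combinations(range(n), i):
            for gens in product("XYZ", repeat=i):
                rest = [q for q in range(n) if q not in gen_pos]
                for j in range(e_z + 1):
                    for z_pos in combinations(rest, j):
                        op = ["I"] * n
                        for q, p in zip(gen_pos, gens):
                            op[q] = p
                        for q in z_pos:
                            op[q] = "Z"
                        ops.add(tuple(op))
    return len(ops)
```

With $e_g = e_Z = 1$ and $k = 1$, this count fits within $2^{n-1}$ syndromes for the first time at $n = 9$, consistent with the abstract's claim that [[9,1]] is the shortest such code.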
Motivated by estimation of quantum noise models, we study the problem of learning a Pauli channel, or more generally the Pauli error rates of an arbitrary channel. By employing a novel reduction to the Population Recovery problem, we give an extremely simple algorithm that learns the Pauli error rates of an $n$-qubit channel to precision $\epsilon$ in $\ell_\infty$ using just $O(1/\epsilon^2)\log(n/\epsilon)$ applications of the channel. This is optimal up to logarithmic factors. Our algorithm uses only unentangled state preparation and measurements, and the post-measurement classical runtime is just an $O(1/\epsilon)$ factor larger than the measurement data size. It is also impervious to a limited model of measurement noise where heralded measurement failures occur independently with probability $\le 1/4$. We then consider the case where the noise channel is close to the identity, meaning that the no-error outcome occurs with probability $1-\eta$. In the regime of small $\eta$ we extend our algorithm to achieve multiplicative precision $1 \pm \epsilon$ (i.e., additive precision $\epsilon\eta$) using just $O\bigl(\frac{1}{\epsilon^2 \eta}\bigr)\log(n/\epsilon)$ applications of the channel.
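The scheme above uses only unentangled preparations and measurements. As a toy single-qubit illustration of why such data suffice (this is not the paper's Population Recovery algorithm), the sketch below recovers hypothetical Pauli error rates from bit-flip frequencies in the three complementary bases: an eigenstate of a Pauli basis is flipped exactly by the two error Paulis that anticommute with it, which yields an invertible linear system for the rates. The channel, rates, seed, and shot count are our own illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(7)
true_p = {"I": 0.90, "X": 0.04, "Y": 0.02, "Z": 0.04}  # hypothetical Pauli channel

def flip_prob(flip_errors, shots=200_000):
    """Fraction of shots in which the sampled Pauli error flips the measured bit."""
    errors = rng.choice(list(true_p), p=list(true_p.values()), size=shots)
    return np.isin(errors, flip_errors).mean()

# An eigenstate of basis B is flipped exactly by the Paulis anticommuting with B.
f_Z = flip_prob(["X", "Y"])   # prepare |0>,   measure Z: flips with prob p_X + p_Y
f_X = flip_prob(["Y", "Z"])   # prepare |+>,   measure X: flips with prob p_Y + p_Z
f_Y = flip_prob(["X", "Z"])   # prepare |+i>,  measure Y: flips with prob p_X + p_Z

# Invert the linear system for the individual error rates.
p_X = (f_Z + f_Y - f_X) / 2
p_Y = (f_Z + f_X - f_Y) / 2
p_Z = (f_X + f_Y - f_Z) / 2
p_I = 1 - p_X - p_Y - p_Z
```

Each estimated rate converges to the true value at the $O(1/\sqrt{\text{shots}})$ Monte Carlo rate; the paper's contribution is doing the analogous recovery for all $4^n$ rates of an $n$-qubit channel with only logarithmic dependence on $n$.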
Quantum error correction will be a necessary component of any scalable quantum computer built from physical qubits. Theoretically, it is possible to perform arbitrarily long computations if the error rate is below a threshold value. The two-dimensional surface code permits relatively high fault-tolerant thresholds at the ~1% level, and only requires a latticed network of qubits with nearest-neighbor interactions. Superconducting qubits have continued to steadily improve in coherence, gate, and readout fidelities, becoming a leading candidate for implementation in larger quantum networks. Here we describe characterization experiments and calibration of a system of four superconducting qubits arranged in a planar lattice, amenable to the surface code. Insights into the particular qubit design and a comparison between simulated and experimentally determined parameters are given. Single- and two-qubit gate tune-up procedures are described, and results for simultaneously benchmarking pairs of two-qubit gates are presented. All controls are eventually used in an arbitrary error detection protocol described in separate work [Corcoles et al., Nature Communications, 6, 2015].
Recent advances in hyperbolic metamaterials have spurred many breakthroughs in the field of manipulating light propagation. However, the unusual electromagnetic properties also place extremely high demands on their constituent materials. Limited by the finite relative permittivity of natural materials, the effective permittivity of the constructed hyperbolic metamaterials is also confined to a narrow range. Here, based on the proposed concept of structure-induced spoof surface plasmons, we prove that arbitrary materials can be selected to construct hyperbolic metamaterials with independent relative effective permittivity components. Moreover, the theoretically achievable ranges of the relative effective permittivity components are unlimited. As proofs of concept, three novel hyperbolic metamaterials are designed, and their functionalities are validated numerically and experimentally through specified directional propagation. To further illustrate the superiority of the method, an all-metal low-loss hyperbolic metamaterial filled with air is proposed and demonstrated. The proposed methodology effectively reduces the design requirements for hyperbolic metamaterials and provides new ideas for scenarios where large permittivity coverage is needed, such as microwave and terahertz focusing, super-resolution imaging, and electromagnetic cloaking.
We analyze the quantum evolution represented by a time-dependent family of generalized Pauli channels. This evolution is generated by random decoherence channels with respect to the maximal number of mutually unbiased bases. We derive the necessary and sufficient conditions for vanishing back-flow of information.