The trade-off between the information gain and the state disturbance is derived for quantum operations on a single qubit prepared in a uniformly distributed pure state. The derivation is valid for a class of measures quantifying the state disturbance and the information gain that satisfy certain invariance conditions. This class includes, in particular, the Shannon entropy versus the operation fidelity. The central role in the derivation is played by efficient quantum operations, which leave the system in a pure output state for any measurement outcome. It is pointed out that the optimality of efficient quantum operations among those inducing a given operator-valued measure is related to Davies' characterization of convex invariant functions on hermitian operators.
We propose and experimentally demonstrate an optimal non-unity gain Gaussian scheme for partial measurement of an unknown coherent state that causes minimal disturbance of the state. The information gain and the state disturbance are quantified by the noise added to the measurement outcomes and to the output state, respectively. We derive the optimal trade-off relation between the two noises and we show that the trade-off is saturated by non-unity gain teleportation. Optimal partial measurement is demonstrated experimentally using a linear optics scheme with feed-forward.
Conventionally, unknown quantum states are characterized using quantum-state tomography based on strong or weak measurements carried out on an ensemble of identically prepared systems. By contrast, the use of protective measurements offers the possibility of determining quantum states from a series of weak, long measurements performed on a single system. Because the fidelity of a protectively measured quantum state is determined by the amount of state disturbance incurred during each protective measurement, it is crucial that the initial quantum state of the system is disturbed as little as possible. Here we show how to systematically minimize the state disturbance in the course of a protective measurement, thus enabling the maximization of the fidelity of the quantum-state measurement. Our approach is based on a careful tuning of the time dependence of the measurement interaction and is shown to be dramatically more effective in reducing the state disturbance than the previously considered strategy of weakening the measurement strength and increasing the measurement time. We describe a method for designing the measurement interaction such that the state disturbance exhibits polynomial decay to arbitrary order in the inverse measurement time $1/T$. We also show how one can achieve even faster, subexponential decay, and we find that it represents the smallest possible state disturbance in a protective measurement. In this way, our results show how to optimally measure the state of a single quantum system using protective measurements.
The uncertainty principle states that a measurement inevitably disturbs the system, while it is often supposed that a quantum system is not disturbed without state change. Korzekwa, Jennings, and Rudolph [Phys. Rev. A 89, 052108 (2014)] pointed out a conflict between those two views, and concluded that state-dependent formulations of error-disturbance relations are untenable. Here, we reconcile the conflict by showing that a quantum system is disturbed without state change, in favor of the recently obtained universally valid state-dependent error-disturbance relations.
We present an example of quantum process tomography performed on a single solid-state qubit. The qubit comprises two energy levels of the triplet state of the nitrogen-vacancy defect in diamond. Quantum process tomography is applied to a qubit that has been allowed to decohere for three different time periods. In each case the process is found in terms of the $\chi$ matrix representation and the affine map representation. The discrepancy between the experimentally estimated process and the closest physically valid process is noted.
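To illustrate the $\chi$ matrix representation mentioned above, the following sketch applies a single-qubit dephasing channel (a generic decoherence model chosen for illustration, not the measured NV-centre process from the abstract) written as $\mathcal{E}(\rho) = \sum_{m,n} \chi_{mn} P_m \rho P_n^\dagger$ over the Pauli basis $\{I, X, Y, Z\}$:

```python
import numpy as np

# Pauli basis used to expand the process in the chi-matrix form.
I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
paulis = [I, X, Y, Z]

def chi_dephasing(lam):
    """Chi matrix of a dephasing channel with phase-flip probability lam
    (illustrative model, not the experimentally estimated process)."""
    chi = np.zeros((4, 4), dtype=complex)
    chi[0, 0] = 1 - lam   # identity (no-error) component
    chi[3, 3] = lam       # Z (phase-flip) component
    return chi

def apply_chi(chi, rho):
    """Apply the channel defined by chi to a density matrix rho."""
    out = np.zeros_like(rho)
    for m in range(4):
        for n in range(4):
            out += chi[m, n] * paulis[m] @ rho @ paulis[n].conj().T
    return out

# Dephasing shrinks the off-diagonal coherences by a factor (1 - 2*lam)
rho = np.array([[0.5, 0.5], [0.5, 0.5]], dtype=complex)  # |+><+|
rho_out = apply_chi(chi_dephasing(0.25), rho)
```

For `lam = 0.25` the coherence `rho[0, 1] = 0.5` is reduced to `0.25` while the populations are unchanged, which is the kind of decoherence signature a tomographic $\chi$ matrix reconstruction would reveal.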
Being able to quantify the level of coherent control in a proposed device implementing a quantum information processor (QIP) is an important task, both for comparing different devices and for assessing a device's prospects of achieving fault-tolerant quantum control. We implement in a liquid-state nuclear magnetic resonance QIP the randomized benchmarking protocol presented by Knill et al. (PRA 77: 012307 (2008)). We report an error per randomized $\frac{\pi}{2}$ pulse of $(1.3 \pm 0.1) \times 10^{-4}$ with a single-qubit QIP, and show an experimentally relevant error model in which randomized benchmarking yields a fidelity decay signature that cannot be interpreted as a single error per gate. We explore and experimentally investigate multi-qubit extensions of this protocol and report an average error rate for one- and two-qubit gates of $(4.7 \pm 0.3) \times 10^{-3}$ for a three-qubit QIP. We estimate that these error rates are still not decoherence limited and thus can be improved with modifications to the control hardware and software.
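The error-per-gate figures above come from the standard randomized benchmarking decay model: the average sequence fidelity decays as $F(m) = A\,p^{m} + B$ with sequence length $m$, and the average error per gate for a $d$-dimensional system is $r = (d-1)(1-p)/d$. The sketch below (an illustrative noise-free model, not the paper's actual data analysis) shows how $p$ and $r$ relate:

```python
import numpy as np

# Illustrative randomized-benchmarking decay model (noise-free sketch,
# not the experimental fitting procedure from the paper).

def rb_fidelity(m, p, A=0.5, B=0.5):
    """Average fidelity after m randomized gates: F(m) = A * p**m + B."""
    return A * p**m + B

def error_per_gate(p, dim=2):
    """Average error per gate from decay parameter p (dim = Hilbert space dim)."""
    return (dim - 1) * (1 - p) / dim

# A decay parameter consistent with an error per gate of ~1.3e-4 (single qubit)
p = 1 - 2 * 1.3e-4
lengths = np.array([1, 4, 16, 64, 256])
fidelities = rb_fidelity(lengths, p)

# Recover p from two sequence lengths; exact here because the model is noise-free
p_est = ((fidelities[3] - 0.5) / (fidelities[1] - 0.5)) ** (1 / (lengths[3] - lengths[1]))
```

In a real experiment one fits $A$, $B$, and $p$ to noisy averaged fidelities over many random sequences; the abstract's point is that some error models produce decays that this single-exponential fit cannot capture.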