We assess the quality of a source of allegedly pure two-qubit states using both standard tomography and methods inspired by device-independent self-testing. Even when the detection and locality loopholes are open, the latter methods can dispense with modelling of the system and the measurements. However, due to finite-sample fluctuations, the estimated probability distribution usually does not satisfy the no-signaling conditions exactly. We implement a data analysis that is robust against these fluctuations. We obtain a high ratio $f_s/f_t \approx 0.988$ between the fidelity estimated from self-testing and that estimated from full tomography, demonstrating the high performance of self-testing methods.
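The finite-sample issue mentioned above can be made concrete: the no-signaling conditions require each party's marginal to be independent of the other party's measurement setting, and raw frequencies generically break this. A minimal sketch (illustrative only, not the paper's actual analysis) for quantifying the violation of an estimated distribution $p(a,b|x,y)$:

```python
import numpy as np

# Hedged sketch (not from the paper): quantify how far an estimated
# distribution p(a, b | x, y), stored as a 4-D array indexed [x, y, a, b],
# is from satisfying the no-signaling conditions. Finite-sample frequency
# estimates typically violate these conditions slightly.

def no_signaling_violation(p):
    """Largest deviation of a party's marginal across the other's settings."""
    # Alice's marginal p(a | x, y) must be independent of Bob's setting y.
    pa = p.sum(axis=3)                                     # shape (x, y, a)
    dev_a = np.abs(pa - pa.mean(axis=1, keepdims=True)).max()
    # Bob's marginal p(b | x, y) must be independent of Alice's setting x.
    pb = p.sum(axis=2)                                     # shape (x, y, b)
    dev_b = np.abs(pb - pb.mean(axis=0, keepdims=True)).max()
    return max(dev_a, dev_b)

# An ideal signaling-free distribution (uniform) has zero deviation.
p_uniform = np.full((2, 2, 2, 2), 0.25)
print(no_signaling_violation(p_uniform))  # 0.0
```

Robust self-testing analyses project or bound such slightly-signaling empirical distributions before extracting fidelity estimates.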
Quantum self-testing is a device-independent way to certify quantum states and measurements using only the input-output statistics, with minimal assumptions about the quantum devices. Owing to its stringent requirements on tolerable noise, however, experimental self-testing has so far been limited to two-photon systems. Here, we demonstrate the first robust self-testing of multi-particle quantum entanglement. We prepare two examples of four-photon graph states: Greenberger-Horne-Zeilinger (GHZ) states with a fidelity of 0.957(2) and linear cluster states with a fidelity of 0.945(2). Based on the observed input-output statistics, we certify the genuine four-photon entanglement and further estimate its quality with respect to realistic noise in a device-independent manner.
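For intuition about the fidelity figures quoted above, a toy white-noise model is often used: the fidelity of $\rho = v\,|\mathrm{GHZ}\rangle\langle\mathrm{GHZ}| + (1-v)\,\mathbb{1}/16$ with the ideal state is $F = \langle\mathrm{GHZ}|\rho|\mathrm{GHZ}\rangle$. The sketch below (an assumption for illustration, not the paper's noise model) computes this for a four-qubit GHZ state:

```python
import numpy as np

# Hedged sketch: fidelity of a four-qubit GHZ state mixed with white noise,
# rho = v * |GHZ><GHZ| + (1 - v) * I/16. For a pure target, the fidelity is
# F = <GHZ| rho |GHZ> = v + (1 - v)/16.

d = 16
ghz = np.zeros(d)
ghz[0] = ghz[-1] = 1 / np.sqrt(2)       # (|0000> + |1111>) / sqrt(2)
proj = np.outer(ghz, ghz)

def fidelity(v):
    rho = v * proj + (1 - v) * np.eye(d) / d
    return float(ghz @ rho @ ghz)

print(round(fidelity(1.0), 6))          # 1.0 (noiseless case)
print(round(fidelity(0.954), 3))        # 0.957, comparable in magnitude to the reported value
```

The visibility parameter v here is purely illustrative; the experiment's actual noise need not be white.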
High-dimensional encoding of quantum information provides a promising method of transcending current limitations in quantum communication. One of the central challenges in the pursuit of such an approach is the certification of high-dimensional entanglement. In particular, it is desirable to do so without resorting to inefficient full state tomography. Here, we show how carefully constructed measurements in two bases (one of which is not orthonormal) can be used to faithfully and efficiently certify bipartite high-dimensional states and their entanglement for any physical platform. To showcase the practicality of this approach under realistic conditions, we put it to the test for photons entangled in their orbital angular momentum. In our experimental setup, we are able to verify 9-dimensional entanglement for a pair of photons on an 11-dimensional subspace each, at present the highest amount certified without any assumptions on the state.
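The "9-dimensional entanglement on an 11-dimensional subspace" statement refers to the entanglement dimensionality, which for a pure bipartite state equals its Schmidt rank. A minimal sketch of that notion (illustrative, not the paper's certification procedure, which works without tomography or purity assumptions):

```python
import numpy as np

# Hedged sketch: for a pure bipartite state with coefficient matrix c
# (|psi> = sum_{jk} c[j,k] |j>|k>), the entanglement dimensionality is the
# Schmidt rank, i.e. the number of nonzero singular values of c.

d = 11
c = np.zeros((d, d))
for k in range(9):
    c[k, k] = 1 / np.sqrt(9)          # 9 equal Schmidt coefficients

schmidt = np.linalg.svd(c, compute_uv=False)
print(int((schmidt > 1e-9).sum()))    # 9: a 9-dimensionally entangled state
```

The experimental result certifies at least this Schmidt number for the (generally mixed) lab state, using only measurements in the two cleverly chosen bases.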
We report an experimental realization of an adaptive quantum state tomography protocol. Our method takes advantage of a Bayesian approach to statistical inference and is naturally tailored to adaptive strategies. For pure states we observe close to $1/N$ scaling of the infidelity with the overall number of registered events, whereas the best non-adaptive protocols achieve only $1/\sqrt{N}$ scaling. The experiments are performed with polarization qubits, but the approach is readily adapted to any dimension.
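The Bayesian machinery underlying such protocols can be illustrated with a toy one-parameter example (an assumption for illustration, not the paper's protocol or measurement adaptation rule): estimating the angle of a pure polarization state $\cos\theta\,|H\rangle + \sin\theta\,|V\rangle$ from H/V detection events by updating a gridded posterior after each click.

```python
import numpy as np

# Hedged sketch: sequential Bayesian estimation of a single polarization
# angle theta from H/V clicks. Each event multiplies the posterior by the
# likelihood of the observed outcome; the estimate sharpens as events accrue.

rng = np.random.default_rng(0)
theta_true = 0.4
grid = np.linspace(0.0, np.pi / 2, 500)
posterior = np.ones_like(grid) / grid.size     # flat prior over the grid

p_h = np.cos(theta_true) ** 2                  # true probability of an H click
for _ in range(2000):
    click_h = rng.random() < p_h               # simulate one detection event
    likelihood = np.cos(grid) ** 2 if click_h else np.sin(grid) ** 2
    posterior *= likelihood
    posterior /= posterior.sum()               # renormalize after each update

theta_est = grid[np.argmax(posterior)]
print(abs(theta_est - theta_true))             # small estimation error
```

In the actual protocol the posterior lives over the full state space and, crucially, the next measurement basis is chosen adaptively from the current posterior, which is what yields the $1/N$ infidelity scaling.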
Self-testing is a method of quantum state and measurement estimation that does not rely on assumptions about the inner workings of the devices used. So far, its experimental realization has been limited to sources producing single quantum states. In this work, we experimentally implement two significant building blocks of a quantum network involving two independent sources: a parallel configuration in which two parties share two copies of a state, and a tripartite configuration in which a central node shares two independent states with peripheral nodes. Then, by extending previous self-testing techniques, we provide device-independent lower bounds on the fidelity between the generated states and an ideal state given by the tensor product of two maximally entangled two-qubit states. Given its scalability and versatility, this technique can find application in the certification of larger networks of different topologies, in quantum communication and cryptography tasks, and in randomness generation protocols.
Previous theoretical works showed that all pure two-qubit entangled states can generate one bit of local randomness and can be self-tested through the violation of suitable Bell inequalities. We report an experiment in which nearly pure partially entangled states of photonic qubits are produced to investigate these tasks in a practical scenario. We show that small deviations from the ideal situation make weakly entangled states impractical for self-testing and randomness generation with the available techniques. Our results show that in practice lower entanglement implies lower randomness generation, recovering the intuition that maximally entangled states are better candidates for device-independent quantum information processing.
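The fragility of weakly entangled states has a simple quantitative face in the CHSH scenario: for $\cos t\,|00\rangle + \sin t\,|11\rangle$ the maximal CHSH value over projective measurements is known to be $2\sqrt{1+\sin^2 2t}$, which approaches the local bound 2 as entanglement vanishes, leaving almost no room for experimental imperfections. A minimal sketch of this known formula (illustrative; tilted Bell inequalities, not plain CHSH, are what self-test partially entangled states):

```python
import numpy as np

# Hedged sketch: maximal CHSH value achievable by the partially entangled
# state cos(t)|00> + sin(t)|11>, given by the known formula
# S_max(t) = 2 * sqrt(1 + sin(2t)^2). The gap above the local bound 2
# shrinks rapidly as t -> 0, so noise easily washes out the violation.

def chsh_max(t):
    return 2.0 * np.sqrt(1.0 + np.sin(2.0 * t) ** 2)

print(round(chsh_max(np.pi / 4), 3))   # 2.828, i.e. Tsirelson's bound 2*sqrt(2)
print(round(chsh_max(0.1), 3))         # barely above the local bound 2
```

This is why, in practice, small deviations from the ideal state hit low-entanglement self-testing and randomness generation hardest.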