Non-local correlations that obey the no-signalling principle contain intrinsic randomness. In particular, for a specific Bell experiment, one can derive relations between the amount of randomness produced, as quantified by the min-entropy of the output data, and its associated violation of a Bell inequality. In practice, due to finite sampling, certifying randomness requires the development of statistical tools to lower-bound the min-entropy of the data as a function of the estimated Bell violation. The quality of such bounds relies on the choice of certificate, i.e., the Bell inequality whose violation is estimated. In this work, we propose a method for efficiently choosing such a certificate. It requires sacrificing a part of the output data in order to estimate the underlying correlations. Regularising this estimate then allows one to find a Bell inequality that is well suited for certifying practical randomness from these specific correlations. We then study the effects of various parameters on the obtained min-entropy bound and explain how to tune them in a favourable way. Lastly, we carry out several numerical simulations of a Bell experiment to show the efficiency of our method: we nearly always obtain higher min-entropy rates than when we use a pre-established Bell inequality, namely the Clauser-Horne-Shimony-Holt (CHSH) inequality.
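For the CHSH certificate mentioned above, the relation between Bell violation and certified randomness has a well-known closed form: the min-entropy per round is lower-bounded by $1-\log_2\big(1+\sqrt{2-S^2/4}\big)$ for CHSH value $S>2$. The sketch below illustrates this standard bound on hypothetical correlator data (the correlator values and function names are illustrative, not from the paper):

```python
import math

def chsh_value(E):
    """CHSH value S = E(0,0) + E(0,1) + E(1,0) - E(1,1) from correlators."""
    return E[(0, 0)] + E[(0, 1)] + E[(1, 0)] - E[(1, 1)]

def min_entropy_bound(S):
    """Standard analytic lower bound on min-entropy per round from the
    CHSH value S; zero for S <= 2 (no violation), valid up to S = 2*sqrt(2)."""
    if S <= 2:
        return 0.0
    return 1 - math.log2(1 + math.sqrt(max(0.0, 2 - S**2 / 4)))

# Hypothetical example: ideal correlators of a maximally entangled state
# measured at the optimal CHSH angles, giving S = 2*sqrt(2).
c = 1 / math.sqrt(2)
E = {(0, 0): c, (0, 1): c, (1, 0): c, (1, 1): -c}
S = chsh_value(E)
print(S, min_entropy_bound(S))  # maximal violation, close to 1 bit per round
```

In a finite-sample setting the estimated $S$ replaces the ideal one, which is why the statistical lower-bounding tools discussed in the abstract are needed.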
Quantum entanglement between two or more parties is a core concept in quantum information, usually limited to microscopic regimes governed by the Heisenberg uncertainty principle via quantum superposition, resulting in nondeterministic and probabilistic quantum features. Such quantum features cannot be generated by classical means. Here, a purely classical method of on-demand entangled light-pair generation is presented in a macroscopic regime via basis randomness. This idea, which conflicts with conventional quantum mechanics, raises a fundamental question about both classicality and quantumness, where superposition is key to its resolution.
In a previous paper, we introduced a semi-device-independent scheme consisting of an untrusted source sending quantum states to an untrusted measuring device, with the sole assumption that the average energy of the states emitted by the source is bounded. Given this energy constraint, we showed that certain correlations between the source and the measuring device can only occur if the outcomes of the measurement are non-deterministic, i.e., these correlations certify the presence of randomness. In the present paper, we go further and show how to quantify the randomness as a function of the correlations and prove the soundness of a quantum random number generation (QRNG) protocol exploiting this relation. For this purpose, we introduce (1) a semidefinite characterization of the set of quantum correlations, (2) an algorithm to lower-bound the Shannon entropy as a function of the correlations, and (3) a proof of soundness using finite trials compatible with our energy assumption.
We report on an optical setup generating more than one bit of randomness from one entangled bit (i.e., a maximally entangled state of two qubits). The amount of randomness is certified through the observation of Bell non-local correlations. To attain this result, we implemented a high-purity entanglement source and a non-projective three-outcome measurement. Our implementation achieves a gain of 27% of randomness as compared with the standard methods using projective measurements. Additionally, we estimate the amount of randomness certified in a one-sided device-independent scenario, through the observation of EPR steering. Our results prove that non-projective quantum measurements allow extending the limits of nonlocality-based certified randomness generation using current technology.
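The gain reported above can be put in context with simple counting: a projective measurement on a qubit has at most two outcomes, so it certifies at most $\log_2 2 = 1$ bit per round, while a three-outcome measurement can in principle reach $\log_2 3 \approx 1.585$ bits. A small illustrative calculation (the 1.27-bit figure is inferred here from the reported 27% gain, not stated in the abstract):

```python
import math

# Projective qubit measurements have two outcomes: at most log2(2) = 1 bit/round.
projective_limit = math.log2(2)

# A three-outcome (non-projective) POVM on a qubit could reach log2(3) bits.
three_outcome_limit = math.log2(3)

# The abstract reports a 27% gain over projective methods, i.e. roughly:
implied_rate = projective_limit * 1.27
print(three_outcome_limit, implied_rate)
```

The implied rate sits between the projective limit and the three-outcome ceiling, consistent with the claim that non-projective measurements extend certified randomness generation beyond one bit.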
In quantum cryptography, device-independent (DI) protocols can be certified secure without requiring assumptions about the inner workings of the devices used to perform the protocol. In order to display nonlocality, which is an essential feature in DI protocols, the device must consist of at least two separate components sharing entanglement. This raises a fundamental question: how much entanglement is needed to run such DI protocols? We present a two-device protocol for DI random number generation (DIRNG) which produces approximately $n$ bits of randomness starting from $n$ pairs of arbitrarily weakly entangled qubits. We also consider a variant of the protocol where $m$ singlet states are diluted into $n$ partially entangled states before performing the first protocol, and show that the number $m$ of singlet states need only scale sublinearly with the number $n$ of random bits produced. Operationally, this leads to a DIRNG protocol between distant laboratories that requires only a sublinear amount of quantum communication to prepare the devices.
In a measurement-device-independent or quantum-refereed protocol, a referee can verify whether two parties share entanglement or Einstein-Podolsky-Rosen (EPR) steering without the need to trust either of the parties or their devices. The need for trusting a party is substituted by a quantum channel between the referee and that party, through which the referee encodes the measurements to be performed on that party's subsystem in a set of nonorthogonal quantum states. In this Letter, an EPR-steering inequality is adapted as a quantum-refereed EPR-steering witness, and the trust-free experimental verification of higher-dimensional quantum steering is reported via preparing a class of entangled photonic qutrits. Further, with two measurement settings, we extract $1.106\pm0.023$ bits of private randomness per photon pair from our observed data, which surpasses the one-bit limit for projective measurements performed on qubit systems. Our results advance research on quantum information processing tasks beyond qubits.