
A Quantum Enigma Machine: Experimentally Demonstrating Quantum Data Locking

Posted by: Daniel Lum
Publication date: 2016
Research field: Physics
Paper language: English





Claude Shannon proved in 1949 that information-theoretic-secure encryption is possible if the encryption key is used only once, is random, and is at least as long as the message itself. Nevertheless, when information is encoded in a quantum system, the phenomenon of quantum data locking allows one to encrypt a message with a shorter key and still provide information-theoretic security. We present one of the first feasible experimental demonstrations of quantum data locking for direct communication and propose a scheme for a quantum enigma machine that encrypts 6 bits per photon (containing messages, new encryption keys, and forward error correction bits) using fewer than 6 bits per photon of encryption key while remaining information-theoretically secure.
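For context, Shannon's one-time-pad baseline can be made concrete with a minimal classical sketch (illustrative only, not part of the paper; the message and key below are arbitrary):

import secrets

def otp_encrypt(message: bytes, key: bytes) -> bytes:
    # Shannon's condition: the key is uniformly random, used only once,
    # and at least as long as the message.
    assert len(key) >= len(message)
    return bytes(m ^ k for m, k in zip(message, key))

msg = b"quantum enigma"
key = secrets.token_bytes(len(msg))           # fresh random key, as long as the message
ciphertext = otp_encrypt(msg, key)
assert otp_encrypt(ciphertext, key) == msg    # XOR with the same key decrypts

Quantum data locking aims for the same information-theoretic security while spending strictly fewer key bits per transmitted bit than this classical bound permits.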


Read also

Yang Liu, Zhu Cao, Cheng Wu (2016)
Classical correlation can be locked via quantum means, a phenomenon known as quantum data locking. With a short secret key, one can lock an exponentially large amount of information in order to make it inaccessible to unauthorized users without the key. Quantum data locking presents a resource-efficient alternative to one-time-pad encryption, which requires a key no shorter than the message. We report experimental demonstrations of the quantum data locking scheme originally proposed by DiVincenzo et al. [Phys. Rev. Lett. 92, 067902 (2004)] and of a loss-tolerant scheme developed by Fawzi, Hayden, and Sen [J. ACM 60, 44 (2013)]. We observe that the unlocked amount of information is larger than the key size in both experiments, exhibiting a strong violation of the incremental proportionality property of classical information theory. As an application example, we show the successful transmission of a photo over a lossy channel with quantum data (un)locking and error correction.
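A hedged aside, based on standard classical information theory rather than on the paper above: for classical variables the chain rule gives

$$ I(X; Y, K) \;\le\; I(X; Y) + H(K), $$

so revealing an $n$-bit key $K$ can increase the information about a message $X$ accessible from an observation $Y$ by at most $n$ bits (the "incremental proportionality" property). The experiments above report unlocked information exceeding the key size, which is precisely the sense in which quantum data locking violates this classical bound.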
Detecting a change point is a crucial task in statistics that has recently been extended to the quantum realm. A source that emits a series of single photons in a default state suffers an alteration at some point and starts to emit photons in a mutated state. The problem consists of identifying the point where the change took place. In this work, we consider a learning agent that applies Bayesian inference to experimental data to solve this problem. This learning machine adjusts the measurement on each photon according to the past experimental results and finds the change position in an online fashion. Our results show that the local-detection success probability can be greatly improved by using such a machine learning technique. This protocol provides a tool for improvement in many applications where a sequence of identical quantum states is required.
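A minimal sketch of the online Bayesian update described above (a simplified classical simulation; the sequence length, change point, state overlap, and the fixed measurement basis are all assumptions, and the adaptive choice of measurement used in the work is omitted):

import numpy as np

rng = np.random.default_rng(0)
n, true_change = 20, 12           # hypothetical sequence length and change point
theta = np.pi / 8                 # assumed angle between default and mutated states

# Measuring every photon in the default basis: outcome "1" is impossible before
# the change and occurs with probability sin^2(theta) after it.
p1 = np.where(np.arange(n) < true_change, 0.0, np.sin(theta) ** 2)
outcomes = rng.random(n) < p1

# Online Bayesian inference over the change position k (k = n means "no change"),
# starting from a uniform prior and updating after every photon.
log_post = np.zeros(n + 1)
for i, x in enumerate(outcomes):
    p = np.where(np.arange(n + 1) <= i, np.sin(theta) ** 2, 0.0)  # change already happened iff k <= i
    like = np.where(x, p, 1.0 - p)
    log_post += np.log(np.clip(like, 1e-12, None))  # clip so ruled-out hypotheses stay finite

post = np.exp(log_post - log_post.max())
post /= post.sum()
print("most probable change point:", int(np.argmax(post)), "true:", true_change)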
Engineering apparatus that harnesses quantum theory offers practical advantages over current technology. A fundamentally more powerful prospect is the long-standing prediction that such quantum technologies could outperform any future iteration of their classical counterparts, no matter how well the attributes of those classical strategies can be improved. Here, we experimentally demonstrate such an instance of absolute advantage per photon probe in the precision of optical direct-absorption measurement. We use correlated intensity measurements of spontaneous parametric downconversion with a commercially available air-cooled CCD, a new estimator for data analysis, and a high-heralding-efficiency photon-pair source. We show that this enables an improvement in the precision of measurement, per photon probe, beyond what is achievable with an ideal coherent state (a perfect laser) measured with $100\%$ efficient and noiseless detection. We see this absolute improvement for up to $50\%$ absorption, with a maximum observed factor of improvement of 1.46. This equates to a reduction of around $32\%$ in the total number of photons traversing an optical sample, compared to any future direct optical absorption measurement using classical light.
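A hedged back-of-envelope linking the two figures quoted above (an assumed reading, not a claim from the paper): if the per-photon figure of merit improves by a factor $F = 1.46$, then reaching the same precision requires roughly $N/F$ photons instead of $N$, a saving of $1 - 1/1.46 \approx 0.32$, consistent with the stated $32\%$ reduction.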
Many experiments in the field of quantum foundations seek to adjudicate between quantum theory and speculative alternatives to it. This requires one to analyze the experimental data in a manner that does not presume the correctness of the quantum formalism. The mathematical framework of generalized probabilistic theories (GPTs) provides a means of doing so. We present a scheme for determining which GPTs are consistent with a given set of experimental data. It proceeds by performing tomography on the preparations and measurements in a self-consistent manner, i.e., without presuming a prior characterization of either. We illustrate the scheme by analyzing experimental data for a large set of preparations and measurements on the polarization degree of freedom of a single photon. We find that the smallest and largest GPT state spaces consistent with our data are a pair of polytopes, each approximating the shape of the Bloch sphere and having a volume ratio of $0.977 \pm 0.001$, which provides a quantitative bound on the scope for deviations from quantum theory. We also demonstrate how our scheme can be used to bound the extent to which nature might be more nonlocal than quantum theory predicts, as well as the extent to which it might be more or less contextual. Specifically, we find that the maximal violation of the CHSH inequality can be at most $1.3\% \pm 0.1\%$ greater than the quantum prediction, and the maximal violation of a particular inequality for universal noncontextuality cannot differ from the quantum prediction by more than this factor on either side. The most significant loophole in this sort of analysis is that the set of preparations and measurements one implements might fail to be tomographically complete for the system of interest.
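For orientation (assuming the "quantum prediction" for CHSH here is the Tsirelson bound $2\sqrt{2} \approx 2.828$): a violation at most $1.3\%$ larger corresponds to $S \lesssim 2\sqrt{2}\,(1 + 0.013) \approx 2.87$, still far below the algebraic maximum of 4.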
Fundamental questions in chemistry and physics may never be answered due to the exponential complexity of the underlying quantum phenomena. A desire to overcome this challenge has sparked a new industry of quantum technologies with the promise that engineered quantum systems can address these hard problems. A key step towards demonstrating such a system will be performing a computation beyond the capabilities of any classical computer, achieving so-called quantum supremacy. Here, using 9 superconducting qubits, we demonstrate an immediate path towards quantum supremacy. By individually tuning the qubit parameters, we are able to generate thousands of unique Hamiltonian evolutions and probe the output probabilities. The measured probabilities obey a universal distribution, consistent with uniformly sampling the full Hilbert space. As the number of qubits in the algorithm is varied, the system continues to explore the exponentially growing number of states. Combining these large datasets with techniques from machine learning allows us to construct a model which accurately predicts the measured probabilities. We demonstrate an application of these algorithms by systematically increasing the disorder and observing a transition from delocalized states to localized states. By extending these results to a system of 50 qubits, we hope to address scientific questions that are beyond the capabilities of any classical computer.
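A brief note on the "universal distribution" mentioned above (a standard benchmark, stated here as an assumed reading rather than taken from the paper): for states drawn uniformly (Haar-randomly) from an $N$-dimensional Hilbert space, the measured outcome probabilities $p$ follow the exponential (Porter-Thomas) form $\Pr(p) \approx N e^{-N p}$, and matching this distribution is the usual signature of sampling the full Hilbert space.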