Karl Popper proposed an experiment to test the standard interpretation of quantum mechanics. The proposal survived for many years without any clear consensus on what results it would yield. The experiment was realized by Kim and Shih in 1999, and the apparently surprising result led to considerable debate. We review Popper's proposal and its realization in the light of the current era, in which entanglement has been well studied, both theoretically and experimentally. We show that the ghost-diffraction experiment, carried out in a different context, conclusively resolves the controversy surrounding Popper's experiment.
In an effort to challenge the Copenhagen interpretation of quantum mechanics, Karl Popper proposed an experiment involving spatially separated entangled particles. In this experiment, one of the particles passes through a very narrow slit, and thereby its position becomes well defined. This particle therefore diffracts into a large divergence angle; this effect can be understood as a consequence of the Heisenberg uncertainty principle. Popper further argued that its entangled partner would become comparably localized in position, and that, according to his understanding of the Copenhagen interpretation of quantum mechanics, the "mere knowledge" of the position of this particle would cause it also to diffract into a large divergence angle. Popper recognized that such behaviour could violate the principle of causality, in that the slit could be removed and the partner particle would be expected to respond instantaneously. Popper thus concluded that in an actual experiment the partner photon would most likely not undergo increased diffractive spreading, and thus that the Copenhagen interpretation is incorrect. Here, we report and analyze the results of an implementation of Popper's proposal. We find that the partner beam does not undergo increased diffractive spreading. Our work resolves many of the open questions involving Popper's proposal, and it provides further insight into the nature of entanglement and its relation to the uncertainty principle for correlated particles.
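As a brief quantitative aside (a standard textbook estimate, not specific to the apparatus reported here), the diffractive spreading invoked above follows directly from the uncertainty relation: confining a photon of wavelength $\lambda$ to a slit of width $\Delta y$ forces a transverse-momentum spread, and hence a minimum divergence angle,
\[
\Delta y \,\Delta p_y \ge \frac{\hbar}{2}
\quad\Longrightarrow\quad
\Delta\theta \approx \frac{\Delta p_y}{p} \gtrsim \frac{\hbar}{2p\,\Delta y} = \frac{\lambda}{4\pi\,\Delta y},
\]
so the narrower the slit, the larger the angular spread; Popper's question is whether "mere knowledge" of the partner's position enforces the same bound on the partner beam.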
Existing neural ranking models follow the text matching paradigm, where document-to-query relevance is estimated through predicting the matching score. Drawing from the rich literature of classical generative retrieval models, we introduce and formalize the paradigm of deep generative retrieval models defined via the cumulative probabilities of generating query terms. This paradigm offers a grounded probabilistic view on relevance estimation while still enabling the use of modern neural architectures. In contrast to the matching paradigm, the probabilistic nature of generative rankers readily offers a fine-grained measure of uncertainty. We adopt several current neural generative models in our framework and introduce a novel generative ranker (T-PGN), which combines the encoding capacity of Transformers with the Pointer Generator Network model. We conduct an extensive set of evaluation experiments on passage retrieval, leveraging the MS MARCO Passage Re-ranking and TREC Deep Learning 2019 Passage Re-ranking collections. Our results show the significantly higher performance of the T-PGN model when compared with other generative models. Lastly, we demonstrate that exploiting the uncertainty information of deep generative rankers opens new perspectives on query/collection understanding, and significantly improves performance on the cut-off prediction task.
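As an illustrative sketch of the query-likelihood paradigm described above, the snippet below scores documents by the cumulative log-probability of generating the query terms. A classical Dirichlet-smoothed unigram model stands in for a neural generator such as T-PGN; the function and toy corpus are hypothetical examples, not the paper's implementation.

# Minimal sketch of the generative (query-likelihood) view of relevance:
# score(d, q) = sum over query terms t of log P(t | d).
from collections import Counter
import math

def query_likelihood(query_terms, doc_terms, collection_terms, mu=2000.0):
    """Log P(query | doc) under a Dirichlet-smoothed unigram model."""
    doc_counts = Counter(doc_terms)
    col_counts = Counter(collection_terms)
    col_len = len(collection_terms)
    doc_len = len(doc_terms)
    score = 0.0
    for t in query_terms:
        # Add-one smoothed background (collection) probability for term t.
        p_col = (col_counts[t] + 1) / (col_len + len(col_counts))
        # Dirichlet-smoothed document probability for term t.
        p_t = (doc_counts[t] + mu * p_col) / (doc_len + mu)
        score += math.log(p_t)
    return score

# Toy usage: rank two "documents" for a query.
collection = "quantum ranking retrieval passage neural model score".split()
docs = {
    "d1": "neural ranking model for passage retrieval".split(),
    "d2": "quantum communication theory".split(),
}
query = "neural passage ranking".split()
ranked = sorted(docs, key=lambda d: query_likelihood(query, docs[d], collection), reverse=True)
print(ranked)  # expected: ['d1', 'd2']

A neural generative ranker follows the same scoring rule but obtains the term probabilities from a trained sequence model conditioned on the document and the previously generated query terms; those per-term probabilities are also the source of the uncertainty information discussed above.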
This is a preliminary version of a book in progress on the theory of quantum communication. We adopt an information-theoretic perspective throughout and give a comprehensive account of fundamental results in quantum communication theory from the past decade (and earlier), with an emphasis on the modern one-shot-to-asymptotic approach that underlies much of today's state-of-the-art research in this field. In Part I, we cover mathematical preliminaries and provide a detailed study of quantum mechanics from an information-theoretic perspective. We also provide an extensive and thorough review of the quantum entropy zoo, and we devote an entire chapter to the study of entanglement measures. Equipped with these essential tools, in Part II we study classical communication (with and without entanglement assistance), entanglement distillation, quantum communication, secret key distillation, and private communication. In Part III, we cover the latest developments in feedback-assisted communication tasks, such as quantum and classical feedback-assisted communication, LOCC-assisted quantum communication, and secret key agreement.
The wave-particle duality of light introduces two fundamental problems to imaging, namely, the diffraction limit and the photon shot noise. Quantum information theory can tackle them both in one holistic formalism: model the light as a quantum object, consider any quantum measurement, and pick the one that gives the best statistics. While Helstrom pioneered the theory half a century ago and first applied it to incoherent imaging, it was not until recently that the approach offered a genuine surprise on the age-old topic by predicting a new class of superior imaging methods. For the resolution of two sub-Rayleigh sources, the new methods have been shown theoretically and experimentally to outperform direct imaging and approach the true quantum limits. Recent efforts to generalize the theory for an arbitrary number of sources suggest that, despite the existence of harsh quantum limits, the quantum-inspired methods can still offer significant improvements over direct imaging for subdiffraction objects, potentially benefiting many applications in astronomy as well as fluorescence microscopy.
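For concreteness, a hedged statement of the canonical result in this literature (two equally bright incoherent point sources separated by $d$, imaged through a Gaussian point-spread function of width $\sigma$): the per-photon quantum Fisher information for $d$ is constant, whereas the Fisher information of direct imaging collapses as the sources merge,
\[
\mathcal{K}_d = \frac{1}{4\sigma^2},
\qquad
\mathcal{F}^{\mathrm{direct}}_d \to 0 \quad \text{as } d \to 0,
\]
so the quantum Cram\'er-Rao bound $\mathrm{Var}(\hat d) \ge 4\sigma^2/N$ remains attainable in principle at arbitrarily small separations, which is the sense in which the new measurement schemes evade "Rayleigh's curse" while direct imaging does not.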
We propose Quantum Brain Networks (QBraiNs) as a new interdisciplinary field integrating knowledge and methods from neurotechnology, artificial intelligence, and quantum computing. The objective is to develop enhanced connectivity between the human brain and quantum computers for a variety of disruptive applications. We foresee the emergence of hybrid classical-quantum networks of wetware and hardware nodes, mediated by machine learning techniques and brain-machine interfaces. QBraiNs will harness and transform arts, science, technology, and entrepreneurship in unprecedented ways, in particular activities related to medicine, the Internet of Humans, intelligent devices, sensorial experience, gaming, the Internet of Things, crypto trading, and business.