The concept of realism in quantum mechanics means that the results of measurement are caused by physical variables, hidden or observable. Local hidden variables were proved unable to explain the results of measurements on entangled particles tested far away from one another. Some physicists therefore embraced the idea of nonlocal hidden variables. The present article proves that this idea is problematic: it runs into an impasse vis-à-vis special relativity.
Numerous models for grounded language understanding have been proposed recently, including (i) generic models that can be easily adapted to any given task and (ii) intuitively appealing modular models that require background knowledge to be instantiated. We compare both types of models in how much they lend themselves to a particular form of systematic generalization. Using a synthetic VQA test, we evaluate which models are capable of reasoning about all possible object pairs after training on only a small subset of them. Our findings show that the generalization of modular models is much more systematic and that it is highly sensitive to the module layout, i.e., to how exactly the modules are connected. We furthermore investigate whether modular models that generalize well could be made more end-to-end by learning their layout and parametrization. We find that end-to-end methods from prior work often learn inappropriate layouts or parametrizations that do not facilitate systematic generalization. Our results suggest that, in addition to modularity, systematic generalization in language understanding may require explicit regularizers or priors.
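As a rough illustration of the evaluation protocol summarized above (a hedged sketch with hypothetical names, not the authors' actual benchmark code), the systematic-generalization test amounts to holding out most object pairs at training time and asking spatial-relation questions about the unseen pairs at test time:

```python
import itertools
import random

# Hypothetical object vocabulary; the actual study uses a synthetic VQA dataset.
objects = ["red cube", "blue ball", "green cylinder", "yellow cube", "gray sphere"]

all_pairs = list(itertools.permutations(objects, 2))  # every ordered object pair
random.seed(0)
random.shuffle(all_pairs)

# Train on only a small subset of pairs; the rest are reserved for testing.
n_train = int(0.2 * len(all_pairs))
train_pairs = all_pairs[:n_train]
test_pairs = all_pairs[n_train:]  # pairs never seen together during training

# A model generalizes systematically if questions such as
# "is there a {x} left of a {y}?" are answered correctly for test_pairs,
# even though x and y only ever appeared in other combinations at train time.
print(f"{len(train_pairs)} training pairs, {len(test_pairs)} held-out pairs")
```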
Recent developments in practical quantum engineering and control techniques have enabled significant progress in experimental studies of open quantum systems and decoherence engineering. Indeed, it has become possible to test experimentally various theoretical, mathematical, and physical concepts related to non-Markovian quantum dynamics. This includes experimental characterization and quantification of non-Markovian memory effects and proof-of-principle demonstrations of how to use them for certain quantum communication and information tasks. We describe here recent experimental advances for open system studies, focusing in particular on non-Markovian dynamics, including applications of memory effects, and discuss the prospects for ultimate control of decoherence and open system dynamics.
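To make the notion of quantifying non-Markovian memory effects concrete, one widely used example (our illustration; the works reviewed may also use other measures) is the trace-distance-based measure of Breuer, Laine, and Piilo, in which memory effects appear as a temporary increase in the distinguishability of two evolving states:

```latex
% Trace distance between two evolving system states \rho_1(t), \rho_2(t):
D\bigl(\rho_1,\rho_2\bigr) = \tfrac{1}{2}\,\mathrm{Tr}\,\bigl|\rho_1-\rho_2\bigr| ,
\qquad
\sigma(t) = \frac{d}{dt}\, D\bigl(\rho_1(t),\rho_2(t)\bigr) .

% Non-Markovianity: total backflow of distinguishability, maximized over
% initial state pairs; any \mathcal{N} > 0 signals memory effects.
\mathcal{N} = \max_{\rho_{1,2}(0)} \int_{\sigma > 0} \sigma(t)\, dt .
```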
Recent Cerenkov observations of the two BL Lac objects PKS 2155-304 and Mkn 501 revealed TeV flux variability by a factor of ~2 in just 3-5 minutes. Even accounting for the effects of relativistic beaming, such short timescales challenge simple, conventional emission models and call for alternative ideas. We explore the possibility that extremely fast variable emission is produced by particles streaming at ultra-relativistic speeds along magnetic field lines and inverse-Compton scattering any radiation field already present. This would produce extremely collimated beams of TeV photons. While the probability for the line of sight to fall within such a narrow emission cone would be negligibly small, one would expect the process not to be confined to a single site, but to take place in many very localised regions along almost straight magnetic field lines. A possible astrophysical setting realising these conditions is the magneto-centrifugal acceleration of beams of particles. In this scenario, the variability timescale would not be related to the physical dimensions of the emitting volume, but might instead be set either by the typical duration of the process responsible for producing these high-energy particle beams or by the coherence length of the magnetic field. We predict that even faster TeV variability, with no X-ray counterpart, should be observed by forthcoming, more sensitive Cerenkov telescopes.
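For context, the standard causality argument (added here as an illustration of why minute-scale variability is so constraining) bounds the size R of a causally connected emitting region by its light-crossing time, corrected for relativistic beaming:

```latex
% Light-crossing (causality) bound on the emitting region, with Doppler
% factor \delta and source redshift z:
R \;\lesssim\; \frac{c\, t_{\mathrm{var}}\, \delta}{1+z} .
% For t_var of a few minutes this is far smaller than the scales of
% conventional jet models, which is why the scenario above decouples the
% variability timescale from the physical size of the emitting volume.
```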
This paper answers Bell's question: what does quantum information refer to? It is about quantum properties represented by subspaces of the quantum Hilbert space, or their projectors, to which standard (Kolmogorov) probabilities can be assigned by using a projective decomposition of the identity (PDI, or framework) as a quantum sample space. The single-framework rule of consistent histories prevents paradoxes and contradictions. When only one framework is employed, classical (Shannon) information theory can be imported unchanged into the quantum domain. A particular case is the macroscopic world of classical physics, whose quantum description needs only a single quasiclassical framework. Nontrivial issues unique to quantum information, those with no classical analog, arise when aspects of two or more incompatible frameworks are compared.
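A minimal sketch of the framework notion used above, in standard notation: a projective decomposition of the identity (PDI) is a set of mutually orthogonal projectors that sum to the identity, and it serves as a quantum sample space carrying ordinary Kolmogorov probabilities:

```latex
% A PDI (framework): Hermitian, mutually orthogonal projectors resolving I:
\Pi_j = \Pi_j^{\dagger}, \qquad
\Pi_j \Pi_k = \delta_{jk}\, \Pi_j, \qquad
\sum_j \Pi_j = I .

% For a state \rho, the Born rule then yields a standard (Kolmogorov)
% probability distribution over this sample space:
p_j = \mathrm{Tr}\bigl(\rho\, \Pi_j\bigr), \qquad \sum_j p_j = 1 .
```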
Most working scientists hold fast to the concept of realism, a viewpoint according to which an external reality exists independently of observation. But quantum physics has shattered some of our cornerstone beliefs. According to Bell's theorem, any theory based on the joint assumption of realism and locality (meaning that local events cannot be affected by actions in space-like separated regions) is at variance with certain quantum predictions. Experiments with entangled pairs of particles have amply confirmed these quantum predictions, thus rendering local realistic theories untenable. Maintaining realism as a fundamental concept would therefore necessitate the introduction of "spooky" actions that defy locality. Here we show by both theory and experiment that a broad and rather reasonable class of such non-local realistic theories is incompatible with experimentally observable quantum correlations. In the experiment, we measure previously untested correlations between two entangled photons, and show that these correlations violate an inequality proposed by Leggett for non-local realistic theories. Our result suggests that giving up the concept of locality is not sufficient to be consistent with quantum experiments, unless certain intuitive features of realism are abandoned.
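For orientation on the Bell-type constraints referenced above, the CHSH form of Bell's inequality is the standard example (shown here as an illustration only; the Leggett inequality actually tested in this work has a different, more involved form that we do not reproduce):

```latex
% CHSH inequality: with correlation functions E(a,b) for measurement
% settings a, a' on one photon and b, b' on the other, any local
% realistic theory obeys
\bigl| E(a,b) + E(a,b') + E(a',b) - E(a',b') \bigr| \;\le\; 2 ,
% whereas quantum mechanics reaches 2\sqrt{2} (Tsirelson's bound)
% for a maximally entangled pair, so experiments can discriminate.
```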