We present a mathematical framework based on quantum interval-valued probability measures to study the effect of experimental imperfections and finite precision measurements on defining aspects of quantum mechanics such as contextuality and the Born rule. While foundational results such as the Kochen-Specker and Gleason theorems are valid in the context of infinite precision, they fail to hold in general in a world with limited resources. Here we employ an interval-valued framework to establish bounds on the validity of those theorems in realistic experimental environments. In this way, not only can we quantify the idea of finite-precision measurement within our theory, but we can also suggest a possible resolution of the Meyer-Mermin debate on the impact of finite-precision measurement on the Kochen-Specker theorem.
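As a toy numerical illustration of the interval-valued idea (our sketch, not the paper's actual formalism), the snippet below replaces the point-valued Born probability of a qubit measurement with the range of probabilities attainable when the measurement axis is only known up to a small cone of half-angle epsilon; the function names and the sampling scheme are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def born_probability(state, direction):
    """Born-rule probability of the +1 outcome of the qubit observable
    n.sigma on a pure state, with n the normalized measurement axis."""
    sx = np.array([[0, 1], [1, 0]], dtype=complex)
    sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
    sz = np.array([[1, 0], [0, -1]], dtype=complex)
    n = direction / np.linalg.norm(direction)
    projector = 0.5 * (np.eye(2) + n[0] * sx + n[1] * sy + n[2] * sz)
    return np.real(np.vdot(state, projector @ state))

def interval_probability(state, direction, epsilon, samples=20000):
    """Crude interval [p_min, p_max] over measurement axes lying within
    angle epsilon of the nominal axis, estimated by rejection sampling.
    A stand-in for an interval-valued probability, not the paper's measure."""
    probs = []
    for _ in range(samples):
        tilted = direction + epsilon * rng.normal(size=3)
        cos_angle = (np.dot(tilted, direction)
                     / (np.linalg.norm(tilted) * np.linalg.norm(direction)))
        if np.arccos(np.clip(cos_angle, -1.0, 1.0)) <= epsilon:
            probs.append(born_probability(state, tilted))
    return min(probs), max(probs)

psi = np.array([1, 0], dtype=complex)                 # |0>
axis = np.array([np.sin(0.3), 0.0, np.cos(0.3)])      # axis tilted 0.3 rad from z
p = born_probability(psi, axis)
lo, hi = interval_probability(psi, axis, epsilon=0.05)
print(f"point value {p:.4f}, interval [{lo:.4f}, {hi:.4f}]")
```

Under infinite precision the interval collapses to the usual Born-rule point value; finite precision widens it, which is the kind of quantitative gap the interval-valued framework is built to track.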
A central result in the foundations of quantum mechanics is the Kochen-Specker theorem. In short, it states that quantum mechanics is in conflict with classical models in which the result of a measurement does not depend on which other compatible measurements are jointly performed. Here, compatible measurements are those that can be performed simultaneously or in any order without disturbance. This conflict is generically called quantum contextuality. In this article, we present an introduction to this subject and its current status. We review several proofs of the Kochen-Specker theorem and different notions of contextuality. We explain how to experimentally test some of these notions and discuss the connections of contextuality with nonlocality and with graph theory. Finally, we review some applications of contextuality in quantum information processing.
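One compact, state-independent proof in this literature is the Peres-Mermin square. The sketch below verifies the algebraic facts that drive the contradiction: the three two-qubit observables in each row and each column commute, every row product equals +1 times the identity, and the column products are +1, +1, -1, so no noncontextual assignment of fixed values +1 or -1 can reproduce all six products.

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# The Peres-Mermin square: nine two-qubit observables arranged so that
# the three observables in each row and each column mutually commute.
square = [
    [np.kron(Z, I2), np.kron(I2, Z), np.kron(Z, Z)],
    [np.kron(I2, X), np.kron(X, I2), np.kron(X, X)],
    [np.kron(Z, X),  np.kron(X, Z),  np.kron(Y, Y)],
]

def product_sign(ops):
    """Sign s such that the ordered product of ops equals s times the identity."""
    prod = ops[0] @ ops[1] @ ops[2]
    sign = int(round(prod[0, 0].real))
    assert np.allclose(prod, sign * np.eye(4))
    return sign

rows = [product_sign(square[r]) for r in range(3)]
cols = [product_sign([square[r][c] for r in range(3)]) for c in range(3)]
print("row products:   ", rows)   # [1, 1, 1]
print("column products:", cols)   # [1, 1, -1]
# A noncontextual +/-1 assignment would have to reproduce all six signs,
# but their product is -1 while each observable appears in exactly two
# constraints, forcing the product to be +1: no such assignment exists.
```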
In addition to the important role of contextuality in the foundations of quantum theory, this intrinsically quantum property has been identified as a potential resource for quantum advantage in different tasks. It is thus of fundamental importance to study contextuality from the point of view of resource theories, which provide a powerful framework for the formal treatment of a property as an operational resource. In this contribution we review recent developments towards a resource theory of contextuality and connections with operational applications of this property.
Incompatibility of quantum measurements is of fundamental importance in quantum mechanics. It is closely related to many nonclassical phenomena such as Bell nonlocality, quantum uncertainty relations, and quantum steering. We study the necessary and sufficient conditions for quantum compatibility of a given collection of $n$ measurements in $d$-dimensional space. From the compatibility criterion for a pair of qubit measurements, we compute the incompatibility probability of a pair of independent random measurements. For a pair of unbiased random qubit measurements, we derive that the incompatibility probability is exactly $\frac{3}{5}$. Detailed results are also presented in figures for pairs of general qubit measurements.
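For unbiased qubit measurements the standard joint-measurability condition is Busch's criterion: the dichotomic observables with Bloch vectors $a$ and $b$ are compatible iff $|a+b| + |a-b| \le 2$. Assuming that this is the criterion the abstract refers to, and that "random" means Bloch vectors drawn uniformly from the unit ball, a Monte Carlo sketch reproduces the $\frac{3}{5}$ value:

```python
import numpy as np

rng = np.random.default_rng(1)

def random_ball(n):
    """n points drawn uniformly from the unit ball in R^3."""
    v = rng.normal(size=(n, 3))
    v /= np.linalg.norm(v, axis=1, keepdims=True)
    r = rng.random(n) ** (1.0 / 3.0)   # radius density for a uniform ball
    return v * r[:, None]

n = 1_000_000
a, b = random_ball(n), random_ball(n)
# Busch's criterion: the unbiased qubit observables with Bloch vectors
# a and b are jointly measurable iff |a+b| + |a-b| <= 2.
lhs = np.linalg.norm(a + b, axis=1) + np.linalg.norm(a - b, axis=1)
incompatible = np.mean(lhs > 2.0)
print(f"estimated incompatibility probability: {incompatible:.4f}")  # ~0.6
```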
The Born rule, a foundational axiom used to deduce probabilities of events from wavefunctions, is indispensable in the everyday practice of quantum physics. It is also key in the quest to reconcile the ostensibly inconsistent laws of the quantum and classical realms, as it confers physical significance to reduced density matrices, the essential tools of decoherence theory. Following Bohr's Copenhagen interpretation, textbooks postulate the Born rule outright. However, recent attempts to derive it from other quantum principles have been successful, holding promise for simplifying and clarifying the quantum foundational bedrock. A major family of derivations is based on envariance, a recently discovered symmetry of entangled quantum states. Here, we identify and experimentally test three premises central to these envariance-based derivations, thus demonstrating, in the microworld, the symmetries from which the Born rule is derived. Further, we demonstrate envariance in a purely local quantum system, showing its independence from relativistic causality.
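A minimal numpy sketch of envariance itself (our illustration, not the experiment): for a maximally entangled system-environment pair, a unitary $u$ acting on the system alone is exactly undone by the counter-transformation $u^*$ acting on the environment alone, which is the symmetry the derivations exploit.

```python
import numpy as np

rng = np.random.default_rng(2)

def random_unitary(d):
    """Random unitary from the QR decomposition of a complex Gaussian matrix."""
    z = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    q, _ = np.linalg.qr(z)
    return q

# Maximally entangled state of a system qubit S and an environment qubit E.
phi = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)

u = random_unitary(2)
I2 = np.eye(2)

disturbed = np.kron(u, I2) @ phi               # act on S alone
restored = np.kron(I2, u.conj()) @ disturbed   # counter-transformation on E alone

print(np.allclose(restored, phi))  # True: u on S is undone by u* on E
```

The restoration is exact because $(u \otimes I)\,|\Phi\rangle = (I \otimes u^{T})\,|\Phi\rangle$ for a maximally entangled $|\Phi\rangle$, and $u^{*} u^{T} = (u u^{\dagger})^{*} = I$.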
Starting from arbitrary sets of quantum states and measurements, referred to as the prepare-and-measure scenario, we construct a generalized Spekkens non-contextual ontological model representation of the quantum statistics associated with the scenario. The generalization involves the new notion of a reduced space, which is non-trivial for non-tomographically complete scenarios. A new mathematical criterion, called unit separability, is formulated as the relevant classicality criterion -- the name is inspired by the usual notion of quantum state separability. Using this criterion, we derive a new upper bound on the cardinality of the ontic space. Then, we recast the unit separability criterion as a (possibly infinite) set of linear constraints, from which we obtain two separate converging hierarchies of algorithmic tests to witness non-classicality or certify classicality. We relate the complexity of these algorithmic tests to that of a class of vertex enumeration problems. Finally, we reformulate our results in the framework of generalized probabilistic theories and discuss the implications for simplex-embeddability in such theories.
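The algorithmic pattern can be pictured as linear-programming feasibility at each truncation level of the constraint set. The sketch below shows only this generic pattern with placeholder constraint data; the matrices are stand-ins, not the paper's actual unit-separability constraints.

```python
import numpy as np
from scipy.optimize import linprog

def feasible_at_level(A, b):
    """Test feasibility of {x >= 0 : A x <= b} with a zero objective.
    In a hierarchy of this shape, infeasibility of an inner approximation
    at some level witnesses non-classicality, while feasibility of a
    suitable outer relaxation certifies classicality."""
    n = A.shape[1]
    res = linprog(c=np.zeros(n), A_ub=A, b_ub=b,
                  bounds=[(0, None)] * n, method="highs")
    return res.status == 0   # status 0: feasible optimum; status 2: infeasible

# Placeholder data standing in for a level-k truncation of the
# (possibly infinite) unit-separability constraint set.
rng = np.random.default_rng(3)
A = rng.normal(size=(8, 5))
b = rng.random(8)
print("feasible at this level:", feasible_at_level(A, b))
```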