A central result in the foundations of quantum mechanics is the Kochen-Specker theorem. In short, it states that quantum mechanics is in conflict with classical models in which the result of a measurement does not depend on which other compatible measurements are jointly performed. Here, compatible measurements are those that can be performed simultaneously or in any order without disturbance. This conflict is generically called quantum contextuality. In this article, we present an introduction to this subject and its current status. We review several proofs of the Kochen-Specker theorem and different notions of contextuality. We explain how to experimentally test some of these notions and discuss the connections of contextuality with nonlocality and with graph theory. Finally, we review some applications of contextuality in quantum information processing.
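As a concrete illustration of the conflict described above, the sketch below verifies with NumPy the Peres-Mermin square: nine two-qubit observables arranged so that every row and every column is a set of mutually compatible measurements, with every row multiplying to the identity and exactly one column multiplying to minus the identity. This is a standard, well-known construction offered here only as a worked example; it is not tied to any particular proof discussed in the article.

```python
import numpy as np

# Pauli matrices and the single-qubit identity
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# Peres-Mermin square of two-qubit observables: every row and every
# column is a set of mutually commuting (i.e., compatible) observables.
square = [
    [np.kron(X, I2), np.kron(I2, X), np.kron(X, X)],
    [np.kron(I2, Y), np.kron(Y, I2), np.kron(Y, Y)],
    [np.kron(X, Y),  np.kron(Y, X),  np.kron(Z, Z)],
]

def sign_of_product(ops):
    """Return '+I' or '-I' for the ordered product of a compatible triple."""
    prod = ops[0] @ ops[1] @ ops[2]
    if np.allclose(prod, np.eye(4)):
        return '+I'
    assert np.allclose(prod, -np.eye(4))
    return '-I'

for k in range(3):
    row = square[k]
    col = [square[j][k] for j in range(3)]
    # compatibility: all pairs within a context commute
    assert all(np.allclose(A @ B, B @ A) for A in row for B in row)
    assert all(np.allclose(A @ B, B @ A) for A in col for B in col)
    print(f"row {k}: product {sign_of_product(row)}   column {k}: product {sign_of_product(col)}")
```

The rows multiply to $+I, +I, +I$ and the columns to $+I, +I, -I$. Any assignment of fixed values $\pm 1$ to the nine observables, independent of the context in which they are measured, would therefore have to give a total product over all nine values that is both $+1$ and $-1$, which is impossible.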
We present a mathematical framework based on quantum interval-valued probability measures to study the effect of experimental imperfections and finite precision measurements on defining aspects of quantum mechanics such as contextuality and the Born rule. While foundational results such as the Kochen-Specker and Gleason theorems are valid in the context of infinite precision, they fail to hold in general in a world with limited resources. Here we employ an interval-valued framework to establish bounds on the validity of those theorems in realistic experimental environments. In this way, not only can we quantify the idea of finite-precision measurement within our theory, but we can also suggest a possible resolution of the Meyer-Mermin debate on the impact of finite-precision measurement on the Kochen-Specker theorem.
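As a toy numerical illustration of the general idea, and not of the interval-valued measure construction of the paper itself, the snippet below propagates an assumed operator-norm uncertainty $\epsilon$ on a measurement effect into an interval of Born-rule probabilities, using the bound $|\mathrm{Tr}(\rho\Delta)| \le \|\Delta\|$, which holds for any density matrix $\rho$ and perturbation $\Delta$. The function name and the choice of state and effect are hypothetical.

```python
import numpy as np

def born_interval(rho, effect, eps):
    """Interval [p_min, p_max] for Tr(rho @ E') over all effects E' within
    operator-norm distance eps of the nominal effect E.
    Uses |Tr(rho @ Delta)| <= ||Delta|| (Hoelder, since ||rho||_1 = 1),
    then clips to [0, 1]."""
    p = np.real(np.trace(rho @ effect))
    return max(p - eps, 0.0), min(p + eps, 1.0)

# Nominal setup: the qubit state |+><+| measured with the projector onto |0>.
plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
rho = np.outer(plus, plus.conj())           # density matrix of |+>
E = np.diag([1.0, 0.0]).astype(complex)     # projector |0><0|

for eps in (0.0, 1e-3, 1e-1):
    lo, hi = born_interval(rho, E, eps)
    print(f"eps = {eps:>5}: p in [{lo:.4f}, {hi:.4f}]")
```

Only in the infinite-precision limit $\epsilon \to 0$ does the interval collapse to the sharp Born-rule value, which is, roughly, the regime in which theorems such as Gleason's are formulated.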
In addition to the important role of contextuality in the foundations of quantum theory, this intrinsically quantum property has been identified as a potential resource for quantum advantage in different tasks. It is thus of fundamental importance to study contextuality from the point of view of resource theories, which provide a powerful framework for the formal treatment of a property as an operational resource. In this contribution we review recent developments towards a resource theory of contextuality and its connections with operational applications of this property.
Starting from arbitrary sets of quantum states and measurements, referred to as the prepare-and-measure scenario, we construct a generalized Spekkens non-contextual ontological model representation of the associated quantum statistics. The generalization involves the new notion of a reduced space, which is non-trivial for non-tomographically complete scenarios. A new mathematical criterion, called unit separability, is formulated as the relevant classicality criterion -- the name is inspired by the usual notion of quantum state separability. Using this criterion, we derive a new upper bound on the cardinality of the ontic space. We then recast the unit separability criterion as a (possibly infinite) set of linear constraints, from which we obtain two separate converging hierarchies of algorithmic tests that witness non-classicality or certify classicality. We relate the complexity of these algorithmic tests to that of a class of vertex enumeration problems. Finally, we reformulate our results in the framework of generalized probabilistic theories and discuss the implications for simplex-embeddability in such theories.
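To give a flavour of how a classicality criterion turns into linear constraints, and of where vertex enumeration enters, here is a minimal sketch in a much simpler setting than the paper's prepare-and-measure framework: a (2,2,2) Bell-type scenario in which classicality amounts to membership in the convex hull of the enumerated deterministic assignments, a linear-programming feasibility problem. The scenario, the function names, and the examples are illustrative choices, not the authors' construction.

```python
import itertools
import numpy as np
from scipy.optimize import linprog

# Measurements A0, A1 (Alice) and B0, B1 (Bob), outcomes in {0, 1}.
# Contexts: the four compatible pairs (Ax, By).
# A deterministic "vertex" assigns a fixed outcome to each observable;
# classicality = the observed statistics are a convex mixture of vertices.
contexts = [(x, y) for x in range(2) for y in range(2)]

def behaviour_of_vertex(v):
    """v = (a0, a1, b0, b1); return the 16 probabilities p(ab|xy)."""
    p = np.zeros((4, 4))                      # rows: contexts, columns: outcomes (a, b)
    for c, (x, y) in enumerate(contexts):
        p[c, 2 * v[x] + v[2 + y]] = 1.0
    return p.ravel()

vertices = [behaviour_of_vertex(v) for v in itertools.product((0, 1), repeat=4)]
M = np.column_stack(vertices)                 # 16 x 16 matrix of vertex behaviours

def is_classical(p):
    """Feasibility LP: is p = M @ lam for some probability vector lam >= 0?"""
    A_eq = np.vstack([M, np.ones((1, M.shape[1]))])
    b_eq = np.concatenate([p, [1.0]])
    res = linprog(c=np.zeros(M.shape[1]), A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * M.shape[1])
    return res.success

# PR-box statistics: p(ab|xy) = 1/2 iff a XOR b = x AND y (maximally non-classical)
pr = np.zeros((4, 4))
for c, (x, y) in enumerate(contexts):
    for a, b in itertools.product((0, 1), repeat=2):
        if (a ^ b) == (x & y):
            pr[c, 2 * a + b] = 0.5

print("uniform noise classical?", is_classical(np.full(16, 0.25)))  # True
print("PR box classical?       ", is_classical(pr.ravel()))         # False
```

Here the sixteen deterministic vertices can be enumerated by brute force; in larger or non-tomographically complete scenarios such enumeration becomes the expensive step, which is, roughly, the connection to vertex enumeration problems drawn in the paper.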
Contextuality is a non-classical behaviour that can be exhibited by quantum systems. It is increasingly studied for its relationship to quantum-over-classical advantages in informatic tasks. To date, it has largely been studied in discrete-variable scenarios, where observables take values in discrete and usually finite sets. Practically, on the other hand, continuous-variable scenarios offer some of the most promising candidates for implementing quantum computations and informatic protocols. Here we set out a framework for treating contextuality in continuous-variable scenarios. It is shown that the Fine--Abramsky--Brandenburger theorem extends to this setting, an important consequence of which is that nonlocality can be viewed as a special case of contextuality, as in the discrete case. The contextual fraction, a quantifiable measure of contextuality that bears a precise relationship to Bell inequality violations and quantum advantages, can also be defined in this setting. It is shown to be a non-increasing monotone with respect to classical operations, which include binning to discretise data. Finally, we consider how the contextual fraction can be formulated as an infinite linear program, and calculated with increasing accuracy using semi-definite programming approximations.
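In the discrete case (for instance after binning), the contextual fraction reduces to a finite linear program. The sketch below computes it with SciPy for a standard (2,2,2) Bell-type empirical model at the Tsirelson bound, following the usual discrete formulation: maximise the weight of a sub-normalised noncontextual model dominated by the empirical data. The continuous-variable infinite program and its semi-definite approximations are beyond a few lines of code, so this is only an illustration of the quantity being generalised.

```python
import itertools
import numpy as np
from scipy.optimize import linprog

# Contextual fraction of a (2,2,2) Bell-type empirical model:
# CF(e) = 1 - max{ sum(b) : M b <= e, b >= 0 }, where the columns of M are
# the behaviours of the deterministic global assignments.
contexts = [(x, y) for x in range(2) for y in range(2)]

def vertex_behaviour(v):
    """Deterministic global assignment v = (a0, a1, b0, b1) -> p(ab|xy)."""
    p = np.zeros((4, 4))
    for c, (x, y) in enumerate(contexts):
        p[c, 2 * v[x] + v[2 + y]] = 1.0
    return p.ravel()

M = np.column_stack([vertex_behaviour(v)
                     for v in itertools.product((0, 1), repeat=4)])

def contextual_fraction(e):
    """Solve the LP; linprog minimises, so maximise sum(b) via c = -1."""
    res = linprog(c=-np.ones(M.shape[1]), A_ub=M, b_ub=e,
                  bounds=[(0, None)] * M.shape[1])
    return 1.0 + res.fun          # res.fun = -(maximal noncontextual weight)

# Quantum statistics saturating the Tsirelson bound: correlators
# E(x,y) = 1/sqrt(2) except E(1,1) = -1/sqrt(2), with uniform marginals.
E = {(x, y): (-1 if x == y == 1 else 1) / np.sqrt(2) for x, y in contexts}
e = np.array([[(1 + (-1) ** (a ^ b) * E[(x, y)]) / 4
               for a, b in itertools.product((0, 1), repeat=2)]
              for x, y in contexts]).ravel()

print("contextual fraction of the Tsirelson box:", round(contextual_fraction(e), 4))
```

The expected value, $\sqrt{2}-1 \approx 0.4142$, coincides with the normalised violation of the CHSH inequality by these statistics, illustrating the precise relationship to Bell inequality violations mentioned above.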
In quantum physics the term `contextual' can be used in more than one way. One usage, here called `Bell contextual' since the idea goes back to Bell, is that if $A$, $B$ and $C$ are three quantum observables, with $A$ compatible (i.e., commuting) with $B$ and also with $C$, whereas $B$ and $C$ are incompatible, a measurement of $A$ might yield a different result (indicating that quantum mechanics is contextual) depending upon whether $A$ is measured along with $B$ (the $\{A,B\}$ context) or with $C$ (the $\{A,C\}$ context). An analysis of what projective quantum measurements measure shows that quantum theory is Bell noncontextual: the outcome of a particular $A$ measurement when $A$ is measured along with $B$ would have been exactly the same if $A$ had, instead, been measured along with $C$. A different definition, here called `globally (non)contextual', refers to whether there is (noncontextual) or is not (contextual) a single joint probability distribution that simultaneously assigns probabilities in a consistent manner to the outcomes of measurements of a certain collection of observables, not all of which are compatible. A simple example shows that such a joint probability distribution can exist even in a situation where the measurement probabilities cannot refer to properties of a quantum system and hence, though mathematically well-defined, lack physical significance. It is noted that the quantum sample space, a projective decomposition of the identity, required for interpreting measurements of incompatible properties in different runs of an experiment using different types of apparatus, has a tensor product structure, a fact sometimes overlooked.
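A minimal numerical illustration of the compatibility pattern used in the first definition (the particular observables below are a hypothetical choice, not taken from the article): $A$ commutes with $B$ and with $C$, while $B$ and $C$ are incompatible, so that $\{A,B\}$ and $\{A,C\}$ are two distinct contexts for measuring $A$.

```python
import numpy as np

# A hypothetical triple of two-qubit observables with the compatibility
# pattern described in the text: A commutes with B and with C, but B and C
# do not commute with each other.
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

A = np.kron(Z, I2)   # acts on the first qubit
B = np.kron(I2, Z)   # acts on the second qubit
C = np.kron(I2, X)   # also acts on the second qubit

def commutes(P, Q):
    return np.allclose(P @ Q, Q @ P)

print("[A,B] = 0:", commutes(A, B))   # True  -> {A, B} is a context
print("[A,C] = 0:", commutes(A, C))   # True  -> {A, C} is a context
print("[B,C] = 0:", commutes(B, C))   # False -> B and C are incompatible
```

Measuring $A$ together with $B$ or together with $C$ corresponds to two different projective decompositions of the identity, which is why the question of whether the $A$ outcome depends on the choice of companion observable is nontrivial.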