
Experimentally bounding deviations from quantum theory in the landscape of generalized probabilistic theories

Published by Matthew F. Pusey
Publication date: 2017
Research field: Physics
Paper language: English





Many experiments in the field of quantum foundations seek to adjudicate between quantum theory and speculative alternatives to it. This requires one to analyze the experimental data in a manner that does not presume the correctness of the quantum formalism. The mathematical framework of generalized probabilistic theories (GPTs) provides a means of doing so. We present a scheme for determining which GPTs are consistent with a given set of experimental data. It proceeds by performing tomography on the preparations and measurements in a self-consistent manner, i.e., without presuming a prior characterization of either. We illustrate the scheme by analyzing experimental data for a large set of preparations and measurements on the polarization degree of freedom of a single photon. We find that the smallest and largest GPT state spaces consistent with our data are a pair of polytopes, each approximating the shape of the Bloch sphere and having a volume ratio of $0.977 \pm 0.001$, which provides a quantitative bound on the scope for deviations from quantum theory. We also demonstrate how our scheme can be used to bound the extent to which nature might be more nonlocal than quantum theory predicts, as well as the extent to which it might be more or less contextual. Specifically, we find that the maximal violation of the CHSH inequality can be at most $(1.3 \pm 0.1)\%$ greater than the quantum prediction, and the maximal violation of a particular inequality for universal noncontextuality cannot differ from the quantum prediction by more than this factor on either side. The most significant loophole in this sort of analysis is that the set of preparations and measurements one implements might fail to be tomographically complete for the system of interest.
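To give a sense of scale for the CHSH bound quoted above, the following toy calculation (a sketch using only the quoted $(1.3 \pm 0.1)\%$ figure and the standard Tsirelson bound; it is not the paper's analysis code) shows how little room the result leaves between the quantum maximum and the experimentally allowed GPT maximum:

```python
import math

# Quantum (Tsirelson) bound on the CHSH expression: 2*sqrt(2).
tsirelson = 2 * math.sqrt(2)

# The paper bounds any super-quantum CHSH violation to at most
# (1.3 +/- 0.1)% above the quantum prediction; central value used here.
max_gpt_chsh = tsirelson * 1.013

# The algebraic (no-signalling / PR-box) maximum is 4, so the allowed
# window above the quantum value is a small fraction of the full gap.
print(f"Quantum CHSH bound:          {tsirelson:.4f}")
print(f"Allowed GPT maximum:         {max_gpt_chsh:.4f}")
print(f"No-signalling maximum:       4.0000")
```

The point of the comparison is that the experiment closes almost all of the gap between the Tsirelson bound ($\approx 2.828$) and the no-signalling maximum of 4.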




Read also

If one seeks to test quantum theory against many alternatives in a landscape of possible physical theories, then it is crucial to be able to analyze experimental data in a theory-agnostic way. This can be achieved using the framework of Generalized Probabilistic Theories (GPTs). Here, we implement GPT tomography on a three-level system corresponding to a single photon shared among three modes. This scheme achieves a GPT characterization of each of the preparations and measurements implemented in the experiment without requiring any prior characterization of either. Assuming that the sets of realized preparations and measurements are tomographically complete, our analysis identifies the most likely dimension of the GPT vector space describing the three-level system to be nine, in agreement with the value predicted by quantum theory. Relative to this dimension, we infer the scope of GPTs that are consistent with our experimental data by identifying polytopes that provide inner and outer bounds for the state and effect spaces of the true GPT. From these, we are able to determine quantitative bounds on possible deviations from quantum theory. In particular, we bound the degree to which the no-restriction hypothesis might be violated for our three-level system.
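The dimension estimate described above rests on a simple linear-algebra fact: the matrix of outcome probabilities $D_{ij} = p(\text{effect}_j \mid \text{preparation}_i)$ has rank equal to the GPT dimension. This can be illustrated with a toy simulation (a sketch assuming ideal, noise-free statistics; a qubit is simulated instead of the paper's three-level system, so the expected rank is 4 rather than 9):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_bloch_vectors(n):
    """n random unit vectors on the Bloch sphere."""
    v = rng.normal(size=(n, 3))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

preps = random_bloch_vectors(20)  # pure qubit preparations
meas = random_bloch_vectors(20)   # projective qubit effects

# Born rule for a pure state s and projector along m: p = (1 + s.m)/2.
# Each entry of D is linear in the 4-dim GPT vectors (1, s) and (1, m)/2,
# so rank(D) recovers the qubit's GPT dimension, 4.
D = 0.5 * (1.0 + preps @ meas.T)

rank = np.linalg.matrix_rank(D)
print(rank)  # 4
```

With noisy finite-count data, as in the experiment, one instead chooses the rank (via model selection) that best trades off fit quality against dimension.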
To make precise the sense in which the operational predictions of quantum theory conflict with a classical worldview, it is necessary to articulate a notion of classicality within an operational framework. A widely applicable notion of classicality of this sort is whether or not the predictions of a given operational theory can be explained by a generalized-noncontextual ontological model. We here explore what notion of classicality this implies for the generalized probabilistic theory (GPT) that arises from a given operational theory, focusing on prepare-measure scenarios. We first show that, when mapping an operational theory to a GPT by quotienting relative to operational equivalences, the constraint of explainability by a generalized-noncontextual ontological model is mapped to the constraint of explainability by an ontological model. We then show that, under the additional assumption that the ontic state space is of finite cardinality, this constraint on the GPT can be expressed as a geometric condition which we term simplex-embeddability. Whereas the traditional notion of classicality for a GPT is that its state space be a simplex and its effect space be the dual of this simplex, simplex-embeddability merely requires that its state space be embeddable in a simplex and its effect space in the dual of that simplex. We argue that simplex-embeddability constitutes an intuitive and freestanding notion of classicality for GPTs. Our result also has applications to witnessing nonclassicality in prepare-measure experiments.
Markus P. Mueller (2020)
These lecture notes provide a basic introduction to the framework of generalized probabilistic theories (GPTs) and a sketch of a reconstruction of quantum theory (QT) from simple operational principles. To build some intuition for how physics could be even more general than quantum, I present two conceivable phenomena beyond QT: superstrong nonlocality and higher-order interference. Then I introduce the framework of GPTs, generalizing both quantum and classical probability theory. Finally, I summarize a reconstruction of QT from the principles of Tomographic Locality, Continuous Reversibility, and the Subspace Axiom. In particular, I show why a quantum bit is described by a Bloch ball, why it is three-dimensional, and how one obtains the complex numbers and operators of the usual representation of QT.
Information transfer in generalized probabilistic theories (GPTs) is an important problem. We have dealt with the problem based on the repeatability postulate, which generalizes Zurek's result to the GPT framework [Phys. Lett. A 379 (2015) 2694]. A natural question arises: can we deduce the information transfer result under weaker assumptions? In this paper, we generalize Zurek's result to the framework of GPTs using the weak repeatability postulate. We show that if distinguishable information can be transferred from a physical system to a series of apparatuses under the weak repeatability postulate in GPTs, then the initial states of the physical system must be completely distinguishable. Moreover, after each step of invertible transformation, the composite states of the composite system composed of the physical systems and the apparatuses must also be completely distinguishable.
In this work, we investigate measurement incompatibility in general probabilistic theories (GPTs). We show several equivalent characterizations of compatible measurements. The first is in terms of the positivity of associated maps. The second relates compatibility to the inclusion of certain generalized spectrahedra. For this, we extend the theory of free spectrahedra to ordered vector spaces. The third characterization connects the compatibility of dichotomic measurements to the ratio of tensor crossnorms of Banach spaces. We use these characterizations to study the amount of incompatibility present in different GPTs, i.e. their compatibility regions. For centrally symmetric GPTs, we show that the compatibility degree is given as the ratio of the injective and the projective norm of the tensor product of associated Banach spaces. This allows us to completely characterize the compatibility regions of several GPTs, and to obtain optimal universal bounds on the compatibility degree in terms of the 1-summing constants of the associated Banach spaces. Moreover, we find new bounds on the maximal incompatibility present in more than three qubit measurements.
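For concreteness, the simplest quantum instance of a compatibility degree can be stated via a standard criterion (an assumed textbook result for two orthogonal unbiased noisy qubit measurements, not something derived from this paper's Banach-space machinery): noisy versions of $\sigma_x$ and $\sigma_z$ with equal sharpness $\eta$ are jointly measurable iff $\eta \le 1/\sqrt{2}$.

```python
import math

def jointly_measurable(eta):
    """Joint measurability of two equally-noisy orthogonal sharp qubit
    measurements (eta*sigma_x, eta*sigma_z): holds iff eta <= 1/sqrt(2)."""
    return eta <= 1 / math.sqrt(2) + 1e-12  # tolerance for the boundary case

print(jointly_measurable(0.70))  # True: below the threshold ~0.7071
print(jointly_measurable(0.75))  # False: too sharp to be compatible
```

The paper's compatibility regions generalize exactly this kind of threshold from the qubit to arbitrary (in particular, centrally symmetric) GPTs.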