
Scott Continuity in Generalized Probabilistic Theories

Added by EPTCS
Publication date: 2020
Language: English
Authors: Robert Furber





Scott continuity is a concept from domain theory that had an unexpected previous life in the theory of von Neumann algebras. Scott-continuous states are known as normal states, and normal states are exactly the states coming from density matrices. Given this, and the usefulness of Scott continuity in domain theory, it is natural to ask whether this carries over to generalized probabilistic theories. We show that the answer is no: there are infinite-dimensional convex sets for which the set of Scott-continuous states on the corresponding set of 2-valued POVMs does not recover the original convex set, but is strictly larger. This shows that topologies, rather than purely order-theoretic notions, are necessary for state-effect duality in the general case.
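Background sketch (standard von Neumann algebra material, not taken from the paper itself): a state $\omega$ on a von Neumann algebra $M$ is normal, i.e. Scott-continuous, when for every bounded increasing net $(a_i)$ of self-adjoint elements
$$\omega\big(\textstyle\sup_i a_i\big) = \sup_i \omega(a_i),$$
and on $B(H)$ the normal states are precisely those of the form
$$\omega(a) = \operatorname{tr}(\rho a)$$
for a density matrix $\rho \geq 0$ with $\operatorname{tr}\rho = 1$. A 2-valued POVM is simply a pair of effects $(e, u - e)$ with $0 \leq e \leq u$, where $u$ is the unit effect.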




Related Research

Information transfer in generalized probabilistic theories (GPT) is an important problem. We have dealt with the problem based on the repeatability postulate, which generalizes Zurek's result to the GPT framework [Phys. Lett. A 379 (2015) 2694]. A natural question arises: can we deduce the information transfer result under weaker assumptions? In this paper, we generalize Zurek's result to the framework of GPT using the weak repeatability postulate. We show that if distinguishable information can be transferred from a physical system to a series of apparatuses under the weak repeatability postulate in GPT, then the initial states of the physical system must be completely distinguishable. Moreover, after each step of invertible transformation, the composite states of the composite system composed of the physical systems and the apparatuses must also be completely distinguishable.
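As a reference point (a standard formulation, not the paper's exact postulate), a measurement is called repeatable when an immediate second application reproduces the first outcome with certainty,
$$\Pr(\text{outcome } j \text{ on the repeat} \mid \text{outcome } i \text{ on the first run}) = \delta_{ij},$$
and the weak repeatability postulate relaxes this requirement; the result above says that even under the weaker assumption, transferring distinguishable information forces the system's initial states to be completely distinguishable.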
In this work, we investigate measurement incompatibility in general probabilistic theories (GPTs). We show several equivalent characterizations of compatible measurements. The first is in terms of the positivity of associated maps. The second relates compatibility to the inclusion of certain generalized spectrahedra. For this, we extend the theory of free spectrahedra to ordered vector spaces. The third characterization connects the compatibility of dichotomic measurements to the ratio of tensor crossnorms of Banach spaces. We use these characterizations to study the amount of incompatibility present in different GPTs, i.e. their compatibility regions. For centrally symmetric GPTs, we show that the compatibility degree is given as the ratio of the injective and the projective norm of the tensor product of associated Banach spaces. This allows us to completely characterize the compatibility regions of several GPTs, and to obtain optimal universal bounds on the compatibility degree in terms of the 1-summing constants of the associated Banach spaces. Moreover, we find new bounds on the maximal incompatibility present in more than three qubit measurements.
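For a concrete anchor (a standard definition, not specific to this paper), two dichotomic measurements with effects $e_1$ and $e_2$ on a GPT with unit effect $u$ are compatible when they admit a joint measurement, i.e. effects $g_{ij} \geq 0$, $i, j \in \{0,1\}$, with
$$\sum_{i,j} g_{ij} = u, \qquad g_{00} + g_{01} = e_1, \qquad g_{00} + g_{10} = e_2.$$
The compatibility region then records how much noise must be mixed into a collection of measurements before such a joint measurement exists.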
To make precise the sense in which the operational predictions of quantum theory conflict with a classical worldview, it is necessary to articulate a notion of classicality within an operational framework. A widely applicable notion of classicality of this sort is whether or not the predictions of a given operational theory can be explained by a generalized-noncontextual ontological model. We here explore what notion of classicality this implies for the generalized probabilistic theory (GPT) that arises from a given operational theory, focusing on prepare-measure scenarios. We first show that, when mapping an operational theory to a GPT by quotienting relative to operational equivalences, the constraint of explainability by a generalized-noncontextual ontological model is mapped to the constraint of explainability by an ontological model. We then show that, under the additional assumption that the ontic state space is of finite cardinality, this constraint on the GPT can be expressed as a geometric condition which we term simplex-embeddability. Whereas the traditional notion of classicality for a GPT is that its state space be a simplex and its effect space be the dual of this simplex, simplex-embeddability merely requires that its state space be embeddable in a simplex and its effect space in the dual of that simplex. We argue that simplex-embeddability constitutes an intuitive and freestanding notion of classicality for GPTs. Our result also has applications to witnessing nonclassicality in prepare-measure experiments.
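In brief (paraphrasing the definition; see the paper for the precise statement), a GPT with state space $\Omega$ in a vector space $V$ and effect space $\mathcal{E} \subseteq V^*$ is simplex-embeddable if there are a simplex $\Delta \subseteq \mathbb{R}^n$ and linear maps $\iota : V \to \mathbb{R}^n$ and $\kappa : V^* \to (\mathbb{R}^n)^*$ such that $\iota(\Omega) \subseteq \Delta$, $\kappa(\mathcal{E})$ lies in the dual effect space of $\Delta$, and
$$\kappa(e)\big(\iota(\omega)\big) = e(\omega) \quad \text{for all } \omega \in \Omega,\ e \in \mathcal{E},$$
so that all operational probabilities are reproduced by the classical (simplicial) model.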
The paper analyzes dynamic epistemic logic from a topological perspective. The main contribution is a framework in which dynamic epistemic logic satisfies the requirements for being a topological dynamical system, thus interfacing discrete dynamic logics with continuous mappings of dynamical systems. The setting is based on a notion of logical convergence, shown to be equivalent to convergence in the Stone topology. A flexible, parametrized family of metrics inducing this topology is presented and used as an analytical aid. We show that maps induced by action model transformations are continuous with respect to the Stone topology, and present results on the recurrent behavior of these maps.
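One standard way to metrize such a topology, sketched here as illustration rather than as the paper's exact construction: fix an enumeration $(\varphi_n)_{n \in \mathbb{N}}$ of the formulas of a countable language and set
$$d(x, y) = \sum_{n} 2^{-n}\,\big|\,[x \models \varphi_n] - [y \models \varphi_n]\,\big|,$$
where $[\cdot]$ denotes the indicator of satisfaction; two models are close when they agree on all formulas appearing early in the enumeration, and convergence in this metric coincides with logical convergence in the sense used above.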
Many experiments in the field of quantum foundations seek to adjudicate between quantum theory and speculative alternatives to it. This requires one to analyze the experimental data in a manner that does not presume the correctness of the quantum formalism. The mathematical framework of generalized probabilistic theories (GPTs) provides a means of doing so. We present a scheme for determining which GPTs are consistent with a given set of experimental data. It proceeds by performing tomography on the preparations and measurements in a self-consistent manner, i.e., without presuming a prior characterization of either. We illustrate the scheme by analyzing experimental data for a large set of preparations and measurements on the polarization degree of freedom of a single photon. We find that the smallest and largest GPT state spaces consistent with our data are a pair of polytopes, each approximating the shape of the Bloch sphere and having a volume ratio of $0.977 \pm 0.001$, which provides a quantitative bound on the scope for deviations from quantum theory. We also demonstrate how our scheme can be used to bound the extent to which nature might be more nonlocal than quantum theory predicts, as well as the extent to which it might be more or less contextual. Specifically, we find that the maximal violation of the CHSH inequality can be at most $1.3\% \pm 0.1\%$ greater than the quantum prediction, and the maximal violation of a particular inequality for universal noncontextuality cannot differ from the quantum prediction by more than this factor on either side. The most significant loophole in this sort of analysis is that the set of preparations and measurements one implements might fail to be tomographically complete for the system of interest.
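To put the CHSH figure in context (standard background plus simple arithmetic on the number quoted above): the CHSH quantity
$$S = \big|\langle A_0 B_0 \rangle + \langle A_0 B_1 \rangle + \langle A_1 B_0 \rangle - \langle A_1 B_1 \rangle\big|$$
satisfies $S \leq 2$ for local hidden-variable models and $S \leq 2\sqrt{2} \approx 2.828$ in quantum theory (Tsirelson's bound), so a violation at most $1.3\%$ beyond the quantum prediction corresponds to $S \lesssim 1.013 \times 2\sqrt{2} \approx 2.87$.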
