
Operational framework for quantum measurement simulability

Added by Leonardo Guerini
Publication date: 2017
Fields: Physics
Language: English





We introduce a framework for simulating quantum measurements based on classical processing of a set of accessible measurements. Well-known concepts such as joint measurability and projective simulability naturally emerge as particular cases of our framework, but our study also leads to novel results and questions. First, a generalisation of joint measurability is derived, which yields a hierarchy for the incompatibility of sets of measurements. A similar hierarchy is defined based on the number of outcomes used to perform the simulation of a given measurement. This general approach also allows us to identify connections between different types of simulability and, in particular, we characterise the qubit measurements that are projective-simulable in terms of joint measurability. Finally, we discuss how our framework can be interpreted in the context of resource theories.
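As a toy illustration of simulation by classical processing (our own minimal example, not a construction from the paper): a noisy qubit measurement with effects M_a = eta * Pi_a + (1 - eta) * I/2 can be simulated by performing the projective measurement {Pi_0, Pi_1} and, with probability 1 - eta, discarding the outcome in favour of a uniformly random bit.

```python
# Toy example (ours, not from the paper): the noisy qubit measurement
# with effects M_a = eta * Pi_a + (1 - eta) * I / 2 is simulable by
# classical processing of the projective measurement {Pi_0, Pi_1}:
# report the projective outcome with probability eta, otherwise a
# uniformly random bit.  All matrices here are real 2x2 nested lists.

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def trace(A):
    return A[0][0] + A[1][1]

I2 = [[1.0, 0.0], [0.0, 1.0]]
PI = ([[1.0, 0.0], [0.0, 0.0]], [[0.0, 0.0], [0.0, 1.0]])  # |0><0|, |1><1|

def noisy_effect(a, eta):
    # M_a = eta * Pi_a + (1 - eta) * I / 2
    return [[eta * PI[a][i][j] + (1 - eta) / 2 * I2[i][j] for j in range(2)]
            for i in range(2)]

def simulated_prob(a, eta, rho):
    # Classical post-processing of the projective statistics:
    # keep the projective outcome with probability eta, else flip a coin.
    return eta * trace(mat_mul(PI[a], rho)) + (1 - eta) * 0.5

rho = [[0.8, 0.2], [0.2, 0.2]]  # a valid qubit state (real, trace 1, PSD)
eta = 0.7
for a in (0, 1):
    direct = trace(mat_mul(noisy_effect(a, eta), rho))
    assert abs(direct - simulated_prob(a, eta, rho)) < 1e-12
```

The simulated statistics reproduce the Born-rule probabilities of the noisy measurement exactly, for every state, which is what "simulability by classical processing" requires.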




Read More

What can one do with a given tunable quantum device? We provide complete symmetry criteria deciding whether some effective target interaction(s) can be simulated by a set of given interactions. Symmetries lead to a better understanding of simulation and permit reasoning beyond the limitations of the usual explicit Lie closure. Conserved quantities induced by symmetries pave the way to a resource theory for simulability. On a general level, one can now decide equality for any pair of compact Lie algebras given only their generators, without determining the algebras explicitly. Several physical examples are illustrated, including entanglement invariants, the relation to unitary gate membership problems, and the central-spin model.
Laurie Letertre, 2021
Operational frameworks are very useful for studying the foundations of quantum mechanics, and are sometimes used to promote antirealist attitudes towards the theory. The aim of this paper is to review three arguments that aim to defend an antirealist reading of quantum physics based on various developments of standard quantum mechanics appealing to notions such as quantum information, non-causal correlations and indefinite causal orders. These arguments are discussed in order to show that they are not convincing. Instead, it is argued that there is conceptually no argument that could favour realist or antirealist attitudes towards quantum mechanics based solely on some features of some formalism. In particular, both realist and antirealist views are well accommodated within operational formulations of the theory. The reason for this is that the realist/antirealist debate is located at a purely epistemic level, which is not engaged by formal aspects of theories. As such, operational formulations of quantum mechanics are epistemologically and ontologically neutral. This discussion aims to clarify the limits of the historical and methodological affinities between scientific antirealism and operational physics while engaging with recent discoveries in quantum foundations. It also aims to present various realist strategies to account for those developments.
Quantum algorithms designed for noisy intermediate-scale quantum devices usually require repeatedly performing a large number of quantum measurements to estimate observable expectation values of a many-qubit quantum state. Exploiting the ideas of importance sampling, observable compatibility, and classical shadows of quantum states, several advanced quantum measurement schemes have been proposed to greatly reduce this measurement cost. Yet the underlying cost-reduction mechanisms seem distinct from one another, and how to systematically find the optimal scheme remains a critical theoretical challenge. Here, we address this challenge by first proposing a unified framework of quantum measurements that incorporates the advanced measurement methods as special cases. Our framework further allows us to introduce a general scheme, overlapped grouping measurement, which simultaneously exploits the advantages of the existing methods. We show that an optimal measurement scheme corresponds to partitioning the observables into overlapped groups, with each group consisting of compatible observables. We provide explicit grouping strategies and numerically verify their performance for different molecular Hamiltonians. Our numerical results show great improvements over the existing measurement schemes. Our work paves the way for efficient quantum measurement with near-term quantum devices.
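One standard instance of observable compatibility is qubit-wise commutation of Pauli strings. The sketch below uses a greedy, non-overlapping partition as a minimal illustration of grouping; it is not the paper's overlapped-grouping algorithm, and the observable list is made up.

```python
# Minimal sketch of measurement grouping: partition Pauli-string
# observables into groups whose members are qubit-wise commuting,
# so each group can be estimated from a single measurement setting.
# (Greedy, non-overlapping grouping; overlapped grouping generalises
# this.  The observables below are illustrative.)

def qubitwise_commuting(p, q):
    # Two Pauli strings commute qubit-wise if, at every position,
    # the letters are equal or at least one of them is the identity.
    return all(a == b or a == "I" or b == "I" for a, b in zip(p, q))

def greedy_grouping(paulis):
    groups = []
    for p in paulis:
        for g in groups:
            if all(qubitwise_commuting(p, q) for q in g):
                g.append(p)
                break
        else:
            groups.append([p])
    return groups

observables = ["ZZI", "ZIZ", "XXI", "IXX", "IIZ", "YYI"]
groups = greedy_grouping(observables)
# Six observables collapse into three measurement settings here:
# [["ZZI", "ZIZ", "IIZ"], ["XXI", "IXX"], ["YYI"]]
```

Fewer groups means fewer distinct measurement settings, which is exactly the cost being minimised; overlapped grouping additionally lets one observable contribute to several groups.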
Observed quantum correlations are known to determine in certain cases the underlying quantum state and measurements. This phenomenon is known as (quantum) self-testing. Self-testing constitutes a significant research area with practical and theoretical ramifications for quantum information theory. But since its conception two decades ago by Mayers and Yao, the common way to rigorously formulate self-testing has been in terms of operator-algebraic identities, and this formulation lacks an operational interpretation. In particular, it is unclear how to formulate self-testing in other physical theories, in formulations of quantum theory not referring to operator-algebra, or in scenarios causally different from the standard one. In this paper, we explain how to understand quantum self-testing operationally, in terms of causally structured dilations of the input-output channel encoding the correlations. These dilations model side-information which leaks to an environment according to a specific schedule, and we show how self-testing concerns the relative strength between such scheduled leaks of information. As such, the title of our paper has a double meaning: we recast conventional quantum self-testing in terms of information-leaks to an environment, and this realises quantum self-testing as a special case within the surroundings of a general operational framework. Our new approach to quantum self-testing not only supplies an operational understanding apt for various generalisations, but also resolves some unexplained aspects of the existing definition, naturally suggests a distance measure suitable for robust self-testing, and points towards self-testing as a modular concept in a larger, cryptographic perspective.
As increasingly impressive quantum information processors are realized in laboratories around the world, robust and reliable characterization of these devices is now more urgent than ever. These diagnostics can take many forms, but one of the most popular categories is tomography, where an underlying parameterized model is proposed for a device and inferred by experiments. Here, we introduce and implement efficient operational tomography, which uses experimental observables as these model parameters. This addresses a problem of ambiguity in representation that arises in current tomographic approaches (the gauge problem). Solving the gauge problem enables us to implement operational tomography efficiently in a Bayesian framework, which gives us a natural way to include prior information and discuss uncertainty in fit parameters. We demonstrate this new tomography in a variety of experimentally relevant scenarios, including standard process tomography, Ramsey interferometry, randomized benchmarking, and gate set tomography.
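The Bayesian workflow can be sketched with a toy grid posterior for a single two-outcome experiment. This is an illustration under our own assumptions, not the authors' implementation, and the counts are invented.

```python
# Toy Bayesian update (illustrative only): infer the excited-state
# probability p of a two-outcome experiment from observed counts,
# on a parameter grid, starting from a uniform prior.

def posterior(grid, prior, ones, zeros):
    # Unnormalised Bernoulli likelihood p^ones * (1 - p)^zeros,
    # multiplied by the prior and renormalised over the grid.
    weights = [pr * (p ** ones) * ((1 - p) ** zeros)
               for p, pr in zip(grid, prior)]
    total = sum(weights)
    return [w / total for w in weights]

grid = [i / 100 for i in range(1, 100)]   # interior points, avoid p = 0, 1
prior = [1 / len(grid)] * len(grid)       # uniform prior
post = posterior(grid, prior, ones=70, zeros=30)

# The posterior mean sits near the empirical frequency 70/100,
# and the posterior spread quantifies uncertainty in the fit parameter.
mean = sum(p * w for p, w in zip(grid, post))
```

The same update rule applies sequentially as data arrive, which is how prior information enters naturally: yesterday's posterior becomes today's prior.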
