
The Multi-round Process Matrix

Publication date: 2020
Field: Physics
Language: English





We develop an extension of the process matrix (PM) framework for correlations between quantum operations with no causal order that allows multiple rounds of information exchange for each party compatibly with the assumption of well-defined causal order of events locally. We characterise the higher-order process describing such correlations, which we name the multi-round process matrix (MPM), and formulate a notion of causal nonseparability for it that extends the one for standard PMs. We show that in the multi-round case there are novel manifestations of causal nonseparability that are not captured by a naive application of the standard PM formalism: we exhibit an instance of an operator that is both a valid PM and a valid MPM, but is causally separable in the first case and can violate causal inequalities in the second case due to the possibility of using a side channel.
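For context, a brief reminder of the single-round rule that the MPM generalises (the notation below follows the standard PM literature rather than this abstract, so the symbols are assumptions): in the bipartite PM framework, joint outcome probabilities are obtained by pairing a process matrix $W$ with the Choi-Jamiolkowski operators of the parties' local operations,

P(a,b \mid x,y) \;=\; \mathrm{Tr}\!\left[ W^{A_I A_O B_I B_O} \left( M^{A_I A_O}_{a|x} \otimes M^{B_I B_O}_{b|y} \right) \right],

and $W$ is called causally separable when it decomposes into fixed-order terms,

W \;=\; q\, W^{A \prec B} + (1-q)\, W^{B \prec A}, \qquad q \in [0,1].

The MPM generalises the object $W$ to parties acting over multiple rounds, and it is this notion of causal (non)separability that the paper extends.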

Related research

A single-party strategy in a multi-round quantum protocol can be implemented by sequential networks of quantum operations connected by internal memories. Here we provide the most efficient realization in terms of computational-space resources.
Post-processing is a significant step in quantum key distribution (QKD); it is used to correct quantum-channel noise errors and to distil identical corrected keys between two distant legitimate parties. An efficient error reconciliation protocol, which can lead to an increase in the secure key generation rate, is one of the main performance indicators of QKD setups. In this paper, we propose a reconciliation scheme based on multiple low-density parity-check (LDPC) codes, which offers remarkable prospects for highly efficient information reconciliation. Testing our approach through data simulation, we show that the proposed scheme, combined with multi-syndrome-based error rate estimation, allows a more accurate estimate of the error rate before error correction than random sampling and single-syndrome estimation techniques, as well as a significant increase in the throughput of the procedure without compromising security or sacrificing reconciliation efficiency.
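As a rough illustration of syndrome-based error-rate estimation (a minimal sketch under simplifying i.i.d. assumptions, not the authors' multi-LDPC scheme; the helper name estimate_qber is hypothetical): for a parity-check row of weight w and bit-flip probability e, Alice's and Bob's corresponding syndrome bits disagree with probability (1 - (1 - 2e)^w)/2, which can be inverted to estimate e from the observed syndrome mismatch rate.

import numpy as np

def estimate_qber(H, key_alice, key_bob):
    """Toy estimate of the channel error rate from syndrome disagreement.

    H          : binary parity-check matrix (rows assumed to have equal weight w)
    key_alice, key_bob : sifted keys as 0/1 integer arrays
    Assumes i.i.d. bit flips; illustrative only, not the paper's estimator.
    """
    w = int(H[0].sum())                    # row weight of the parity-check matrix
    s_a = H.dot(key_alice) % 2             # Alice's syndrome
    s_b = H.dot(key_bob) % 2               # Bob's syndrome
    mismatch = np.mean(s_a != s_b)         # fraction of disagreeing syndrome bits
    mismatch = min(mismatch, 0.5 - 1e-12)  # keep the inversion well defined
    # P(syndrome bits differ) = (1 - (1 - 2e)^w) / 2  =>  solve for e
    return 0.5 * (1.0 - (1.0 - 2.0 * mismatch) ** (1.0 / w))

# toy usage with a random weight-8 parity-check matrix and a 3% error rate
rng = np.random.default_rng(0)
n, m, w = 2000, 1000, 8
H = np.zeros((m, n), dtype=int)
for row in H:
    row[rng.choice(n, size=w, replace=False)] = 1
key_a = rng.integers(0, 2, n)
key_b = (key_a + (rng.random(n) < 0.03)) % 2
print(estimate_qber(H, key_a, key_b))      # should come out close to 0.03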
In a recent breakthrough, Mahadev constructed a classical verification of quantum computation (CVQC) protocol for a classical client to delegate decision problems in BQP to an untrusted quantum prover under computational assumptions. In this work, we explore further the feasibility of CVQC for the more general sampling problems in BQP and with the desirable blindness property. We contribute affirmative solutions to both as follows. (1) Motivated by the sampling nature of many quantum applications (e.g., quantum algorithms for machine learning and quantum supremacy tasks), we initiate the study of CVQC for quantum sampling problems (denoted by SampBQP). More precisely, in a CVQC protocol for a SampBQP problem, the prover and the verifier are given an input $x \in \{0,1\}^n$ and a quantum circuit $C$, and the goal of the classical client is to learn a sample from the output $z \leftarrow C(x)$ up to a small error, from its interaction with an untrusted prover. We demonstrate its feasibility by constructing a four-message CVQC protocol for SampBQP based on the quantum Learning With Errors assumption. (2) The blindness of a CVQC protocol refers to the property that the prover learns nothing, and hence is blind, about the client's input. It is a highly desirable property that has been intensively studied for the delegation of quantum computation. We provide a simple yet powerful generic compiler that transforms any CVQC protocol into a blind one while preserving its completeness and soundness errors as well as the number of rounds. Applying our compiler to (a parallel repetition of) Mahadev's CVQC protocol for BQP and to our CVQC protocol for SampBQP yields the first constant-round blind CVQC protocols for BQP and SampBQP, respectively, with negligible completeness and soundness errors.
We develop a general approach for monitoring and controlling the evolution of open quantum systems. In contrast to master equations, which describe the time evolution of density operators, here we formulate a dynamical equation for the evolution of the process matrix acting on a system. This equation is applicable to non-Markovian and/or strong-coupling regimes. We propose two distinct applications of this dynamical equation. We first demonstrate the identification of quantum Hamiltonians generating the dynamics of closed or open systems via process tomography. In particular, we argue that one can efficiently estimate certain classes of sparse Hamiltonians by performing partial tomography schemes. In addition, we introduce a novel optimal-control-theoretic setting for manipulating the quantum dynamics of Hamiltonian systems, specifically for the task of decoherence suppression.
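To fix ideas, a minimal sketch of the kind of object involved (assumptions: a closed two-level system, natural units with hbar = 1, and the hypothetical helper name choi_of_unitary; this only illustrates how a process/Choi matrix is tied to a generating Hamiltonian, not the paper's dynamical equation):

import numpy as np
from scipy.linalg import expm

def choi_of_unitary(H, t):
    """Choi (process) matrix of the channel rho -> U rho U^dagger with U = exp(-i H t)."""
    d = H.shape[0]
    U = expm(-1j * H * t)
    omega = np.eye(d).reshape(d * d)        # unnormalised |Omega> = sum_j |j>|j>
    vec = np.kron(np.eye(d), U).dot(omega)  # (I ⊗ U)|Omega>
    return np.outer(vec, vec.conj())        # (I ⊗ U)|Omega><Omega|(I ⊗ U)^dagger

# example: a qubit precessing under H = (omega_0 / 2) sigma_z
sigma_z = np.diag([1.0, -1.0])
chi = choi_of_unitary(0.5 * 2 * np.pi * sigma_z, t=0.1)
print(np.allclose(np.trace(chi), 2.0))      # Tr[Choi] = d for a trace-preserving channel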
We start with the task of discriminating finitely many multipartite quantum states using LOCC protocols, with the goal of optimizing the probability of correctly identifying the state. We provide two different methods to show that finitely many measurement outcomes in every step are sufficient for approaching the optimal probability of discrimination. In the first method, each measurement of an optimal LOCC protocol, applied to a $d_{\rm loc}$-dimensional local system, is replaced by one with at most $2d_{\rm loc}^2$ outcomes, without changing the probability of success. In the second method, we decompose any LOCC protocol into a convex combination of a number of slim protocols, in which each measurement applied to a $d_{\rm loc}$-dimensional local system has at most $d_{\rm loc}^2$ outcomes. To maximize any convex function in LOCC (including the probability of state discrimination or the fidelity of state transformation), an optimal protocol can be replaced by the best slim protocol in the convex decomposition without using shared randomness. For either method, the bound on the number of outcomes per measurement is independent of the global dimension, the number of parties, the depth of the protocol, and how deep the measurement is located; it applies to LOCC protocols with infinitely many rounds, and the measurement compression can be done top-down, independently of later operations in the LOCC protocol. The second method can be generalized to implement LOCC instruments with finitely many outcomes: if the instrument has $n$ coarse-grained final measurement outcomes, global input dimension $D_0$, and global output dimension $D_i$ for $i=1,\ldots,n$ conditioned on the $i$-th outcome, then one can obtain the instrument as a convex combination of no more than $R=\sum_{i=1}^n D_0^2 D_i^2 - D_0^2 + 1$ slim protocols; in other words, $\log_2 R$ bits of shared randomness suffice. A small worked instance of this bound is sketched below.
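A quick numerical check of the bound quoted above (the dimensions below are arbitrary choices for illustration, not values from the paper): for an instrument with $n = 2$ outcomes, $D_0 = 2$ and $D_1 = D_2 = 2$, one gets $R = 4\cdot 4 + 4\cdot 4 - 4 + 1 = 29$ slim protocols, i.e. about 5 bits of shared randomness.

from math import ceil, log2

def slim_protocol_bound(D0, D_out):
    # R = sum_i D0^2 * D_i^2 - D0^2 + 1, the bound quoted in the abstract above
    return sum(D0**2 * Di**2 for Di in D_out) - D0**2 + 1

R = slim_protocol_bound(2, [2, 2])   # hypothetical dimensions chosen only for illustration
print(R, ceil(log2(R)))              # -> 29 slim protocols, i.e. 5 bits of shared randomness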