
Optimal measurement strategies for fast entanglement detection

Added by Olivier Giraud
Publication date: 2019
Fields: Physics
Language: English





With the advance of quantum information technology, the question of how to test quantum circuits most efficiently is becoming increasingly relevant. Here we introduce the statistics of the lengths of measurement sequences that certify entanglement across a given bi-partition of a multi-qubit system, taken over random unknown states, and identify the best measurement strategies in the sense of the (on average) shortest sequence of (multi-qubit) Pauli measurements. The approach is based on the algorithm of truncated moment sequences, which deals naturally with incomplete information, i.e. information that does not fully specify the quantum state. We find that the set of measurements corresponding to the diagonal matrix elements of the moment matrix of the state is particularly efficient. For symmetric states their number grows only as the third power of the number $N$ of qubits, and their efficiency grows rapidly with $N$: already for $N=4$, less than a fraction $10^{-6}$ of randomly chosen entangled states remains undetected.
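The truncated-moment-sequence algorithm itself is beyond a short snippet, but the sequential logic of the abstract can be illustrated on two qubits, where compatibility of partial data with a separable state reduces to a PPT-feasibility semidefinite program. The sketch below is a simplified stand-in rather than the authors' method: it assumes exact Pauli expectation values, uses `cvxpy` for the feasibility test, and the helper names (`pauli_2q`, `compatible_with_separable`) are made up for illustration.

```python
# Minimal sketch (not the paper's truncated-moment-sequence algorithm):
# feed two-qubit Pauli expectation values one by one to a feasibility test
# and stop as soon as no separable state is compatible with the data.
# For two qubits, "separable" is equivalent to "PPT", so the test is an SDP.
import itertools
import numpy as np
import cvxpy as cp

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
PAULIS = {"I": I2, "X": X, "Y": Y, "Z": Z}

def pauli_2q(a, b):
    """Two-qubit Pauli observable sigma_a tensor sigma_b (hypothetical helper)."""
    return np.kron(PAULIS[a], PAULIS[b])

def compatible_with_separable(observables, values):
    """True if some separable (here: PPT) two-qubit state reproduces the data."""
    rho = cp.Variable((4, 4), hermitian=True)
    # Partial transpose on the second qubit, built block by block.
    rho_pt = cp.bmat([[rho[0:2, 0:2].T, rho[0:2, 2:4].T],
                      [rho[2:4, 0:2].T, rho[2:4, 2:4].T]])
    cons = [rho >> 0, rho_pt >> 0, cp.trace(rho) == 1]
    cons += [cp.real(cp.trace(O @ rho)) == v for O, v in zip(observables, values)]
    prob = cp.Problem(cp.Minimize(0), cons)
    prob.solve(solver=cp.SCS)
    return prob.status not in ("infeasible", "infeasible_inaccurate")

# Random (almost surely entangled) two-qubit pure state.
rng = np.random.default_rng(0)
psi = rng.normal(size=4) + 1j * rng.normal(size=4)
psi /= np.linalg.norm(psi)
rho_true = np.outer(psi, psi.conj())

# Feed the 15 non-trivial Pauli expectation values in a fixed order and
# record how many are needed before entanglement is certified.
labels = [(a, b) for a, b in itertools.product("IXYZ", repeat=2) if (a, b) != ("I", "I")]
obs, vals = [], []
for a, b in labels:
    O = pauli_2q(a, b)
    obs.append(O)
    vals.append(float(np.real(np.trace(O @ rho_true))))  # idealized: exact value
    if not compatible_with_separable(obs, vals):
        print(f"entanglement certified after {len(obs)} Pauli measurements")
        break
```

The length of the stopped sequence, averaged over many random states and measurement orderings, is the kind of statistic the paper studies; the paper's own approach replaces the PPT test with the truncated-moment-sequence algorithm, which also covers more than two qubits.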




Read More

We investigate measurement-based entanglement purification protocols (EPP) in the presence of local noise and imperfections. We derive a universal, protocol-independent threshold for the required quality of the local resource states, where we show that local noise per particle of up to 24% is tolerable. This corresponds to an increase of the noise threshold by almost an order of magnitude, based on the joint measurement-based implementation of sequential rounds of few-particle EPP. We generalize our results to multipartite EPP, where we encounter similarly high error thresholds.
Quantum repeaters based on atomic ensemble quantum memories are promising candidates for achieving scalable distribution of entanglement over long distances. Recently, important experimental progress has been made towards their implementation. However, the entanglement rates and scalability of current approaches are limited by relatively low retrieval and single-photon detector efficiencies. We propose a scheme, which makes use of fluorescent detection of stored excitations to significantly increase the efficiency of connection and hence the rate. Practical performance and possible experimental realizations of the new protocol are discussed.
We provide several formulas that determine the optimal number of entangled bits (ebits) that a general entanglement-assisted quantum code requires. Our first theorem gives a formula that applies to an arbitrary entanglement-assisted block code. Corollaries of this theorem give formulas that apply to a code imported from two classical binary block codes, to a code imported from a classical quaternary block code, and to a continuous-variable entanglement-assisted quantum block code. Finally, we conjecture two formulas that apply to entanglement-assisted quantum convolutional codes.
We construct a scheme for the preparation, pairwise entanglement via exchange interaction, manipulation, and measurement of individual group-II-like neutral atoms (Yb, Sr, etc.). Group-II-like atoms proffer important advantages over alkali metals, including long-lived optical-transition qubits that enable fast manipulation and measurement. Our scheme provides a promising approach for producing weighted graph states, entangled resources for quantum communication, and possible application to fundamental tests of Bell inequalities that close both detection and locality loopholes.
We consider sequential hypothesis testing between two quantum states using adaptive and non-adaptive strategies. In this setting, samples of an unknown state are requested sequentially and a decision to either continue or to accept one of the two hypotheses is made after each test. Under the constraint that the number of samples is bounded, either in expectation or with high probability, we exhibit adaptive strategies that minimize both types of misidentification errors. Namely, we show that these errors decrease exponentially (in the stopping time) with decay rates given by the measured relative entropies between the two states. Moreover, if we allow joint measurements on multiple samples, the rates are increased to the respective quantum relative entropies. We also fully characterize the achievable error exponents for non-adaptive strategies and provide numerical evidence showing that adaptive measurements are necessary to achieve our bounds under some additional assumptions.
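As a concrete illustration of the non-adaptive setting, the sketch below runs Wald's sequential probability ratio test on the outcomes of a fixed projective measurement applied to copies of one of two qubit states. This is a sketch under simplifying assumptions (fixed computational-basis measurement, i.i.d. outcomes, standard Wald thresholds); the adaptive strategies and error-exponent analysis of the paper are not reproduced, and the example states and function name are invented.

```python
# Sketch: non-adaptive sequential hypothesis testing between two qubit states
# rho0 and rho1, using a fixed computational-basis measurement on each copy
# and Wald's sequential probability ratio test (SPRT) as the stopping rule.
import numpy as np

rho0 = np.array([[0.8, 0.0], [0.0, 0.2]])  # hypothesis H0 (example state)
rho1 = np.array([[0.6, 0.2], [0.2, 0.4]])  # hypothesis H1 (example state)

# Outcome distributions of the computational-basis measurement (diagonal entries).
p0 = np.real(np.diag(rho0))
p1 = np.real(np.diag(rho1))

def sprt(true_state, alpha=1e-3, beta=1e-3, rng=np.random.default_rng(1)):
    """Return (decision, number of copies used) for one run of the SPRT."""
    upper = np.log((1 - beta) / alpha)   # accept H1 once the LLR exceeds this
    lower = np.log(beta / (1 - alpha))   # accept H0 once the LLR drops below this
    p_true = np.real(np.diag(true_state))
    llr, n = 0.0, 0
    while lower < llr < upper:
        outcome = rng.choice(2, p=p_true)            # measure one fresh copy
        llr += np.log(p1[outcome] / p0[outcome])     # update log-likelihood ratio
        n += 1
    return ("H1" if llr >= upper else "H0"), n

runs = [sprt(rho1) for _ in range(200)]
avg_copies = np.mean([n for _, n in runs])
errors = sum(d == "H0" for d, _ in runs)
print(f"average copies used: {avg_copies:.1f}, wrong decisions: {errors}/200")
```

For this fixed measurement the average stopping time is governed by the classical relative entropy between the two outcome distributions, which is the intuition behind the measured-relative-entropy decay rates quoted in the abstract.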
