
Optimal error regions for quantum state estimation

Publication date: 2013
Field: Physics
Language: English





Rather than point estimators, states of a quantum system that represent one's best guess for the given data, we consider optimal regions of estimators. As the natural counterpart of the popular maximum-likelihood point estimator, we introduce the maximum-likelihood region: the region of largest likelihood among all regions of the same size, where the size of a region is its prior probability. A second concept is the smallest credible region: the smallest region with a pre-chosen posterior probability. For both optimization problems, the optimal region has constant likelihood on its boundary. We discuss criteria for assigning prior probabilities to regions, and illustrate the concepts and methods with several examples.
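To make the defining property concrete, here is a minimal numerical sketch (not from the paper): it constructs the maximum-likelihood region for a single coin-bias parameter under a uniform prior, admitting parameter values in order of decreasing likelihood until the region's prior probability reaches a chosen size. The binomial toy model and the function name `mlr_region` are illustrative assumptions.

```python
import numpy as np

def mlr_region(k, n, size, grid_pts=10_001):
    """Maximum-likelihood region for a coin bias p, uniform prior on [0, 1].

    Illustrative sketch: among all regions of prior probability `size`,
    pick the one with the largest likelihood, i.e. the set
    {p : L(p) >= lambda} with lambda chosen so the region has that size.
    """
    p = np.linspace(0.0, 1.0, grid_pts)
    like = p**k * (1.0 - p)**(n - k)          # binomial likelihood (unnormalized)
    weight = np.full(grid_pts, 1.0 / grid_pts)  # uniform prior mass per grid point

    order = np.argsort(like)[::-1]            # admit points by decreasing likelihood
    mass = np.cumsum(weight[order])           # accumulated prior probability
    cutoff = order[: np.searchsorted(mass, size) + 1]

    region = np.zeros(grid_pts, dtype=bool)
    region[cutoff] = True
    # The last admitted point sets the likelihood threshold lambda.
    lam = like[cutoff[-1]]
    return p[region], lam

# Example: 7 heads in 10 tosses, region of prior size 0.5
ps, lam = mlr_region(k=7, n=10, size=0.5)
print(f"region = [{ps.min():.3f}, {ps.max():.3f}], boundary likelihood = {lam:.3g}")
```

By construction, the region's boundary sits at (approximately) constant likelihood, mirroring the optimality condition stated in the abstract.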



Related research

Sisi Zhou, Liang Jiang (2019)
For a generic set of Markovian noise models, the precision of estimating a parameter of the Hamiltonian is limited by the $1/\sqrt{t}$ scaling, where $t$ is the total probing time; in this case the maximal possible quantum improvement in the asymptotic limit of large $t$ is restricted to a constant factor. However, situations arise where the constant-factor improvement could be significant, yet no effective quantum strategies are known. Here we propose an optimal approximate quantum error correction (AQEC) strategy that asymptotically saturates the precision lower bound in the most general adaptive parameter estimation scheme, where arbitrary and frequent quantum controls are allowed. We also provide an efficient numerical algorithm for finding the optimal code. Finally, we consider highly biased noise and show that, using the optimal AQEC strategy, the strong noise components are fully corrected, while in the limiting case the estimation precision depends only on the strength of the weak noise.
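As a schematic reading aid (not taken from the paper), the two time scalings being contrasted can be written as follows; $c$ and $c'$ are unspecified constants, not the paper's bounds.

```latex
\[
  \delta\theta \;\gtrsim\; \frac{c}{\sqrt{t}} \quad \text{(generic Markovian noise)},
  \qquad
  \delta\theta \;\sim\; \frac{c'}{t} \quad \text{(noiseless Heisenberg scaling)} .
\]
```

Quantum strategies such as the AQEC codes above can thus improve the constant $c$, but not the $1/\sqrt{t}$ exponent.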
Jun Suzuki (2020)
In this paper, we study the quantum-state estimation problem in the framework of optimal design of experiments. We first find the optimal designs for arbitrary qubit models under popular optimality criteria such as A-, D-, and E-optimal designs. We also give a one-parameter family of optimality criteria that includes these criteria. We then extend a classical result in the design problem, the Kiefer-Wolfowitz theorem, to a qubit system, showing that the D-optimal design is equivalent to a certain type of A-optimal design. We next compare and analyze several optimal designs based on their efficiency. We explicitly demonstrate that a design that is optimal for one criterion can be highly inefficient for other optimality criteria.
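For reference, the classical textbook versions of these criteria are simple matrix functionals of the Fisher information. The sketch below is an illustration, not the paper's qubit-specific construction, and the matrices `design_1` and `design_2` are made-up examples; it shows how a design can score well on one criterion and poorly on another.

```python
import numpy as np

def design_criteria(fisher):
    """Standard A-, D-, and E-optimality functionals of an information matrix.

    Classical textbook definitions (not the paper's qubit-specific forms):
      A-optimal: minimize tr(M^{-1})    (average parameter variance)
      D-optimal: maximize det(M)        (volume of the confidence ellipsoid)
      E-optimal: maximize lambda_min(M) (worst-case parameter variance)
    """
    m = np.asarray(fisher, dtype=float)
    return {
        "A": np.trace(np.linalg.inv(m)),   # smaller is better
        "D": np.linalg.det(m),             # larger is better
        "E": np.linalg.eigvalsh(m).min(),  # larger is better
    }

# Two hypothetical designs for a 2-parameter model: design_2 wins on D
# but loses badly on A and E -- one criterion's optimum can be another's
# inefficiency, as the abstract notes.
design_1 = np.diag([10.0, 10.0])
design_2 = np.diag([50.0, 2.5])
for name, m in [("design_1", design_1), ("design_2", design_2)]:
    print(name, design_criteria(m))
```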
By using a systematic optimization approach, we determine the quantum states of light with definite photon number that lead to the best possible precision in optical two-mode interferometry. Our treatment takes into account the experimentally relevant situation of photon losses. Our results thus reveal the benchmark for precision in optical interferometry. Although this bound is generally worse than the Heisenberg limit, we show that the obtained precision beats the standard quantum limit, a significant improvement over classical interferometers. We furthermore discuss alternative states and strategies that are easier to generate than the optimized states, at the cost of only slightly lower precision.
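As a lossless illustration only (the paper's key point is the inclusion of photon losses, which this omits), the phase-estimation precision of a pure two-mode probe with fixed photon number can be bounded via the quantum Fisher information, $\Delta\varphi \ge 1/\sqrt{F_Q}$ with $F_Q = 4\,\mathrm{Var}(\hat n_a)$ for a pure state. The helper `phase_qfi` and the example states are assumptions.

```python
import numpy as np
from math import comb

def phase_qfi(amplitudes):
    """QFI for phase estimation with a pure two-mode state sum_k c_k |k, N-k>,
    the phase imprinted as exp(i * phi * n_a) on mode a.

    For a pure probe state, QFI = 4 * Var(n_a). Lossless case only: the
    paper's optimization additionally accounts for photon losses.
    """
    c = np.asarray(amplitudes, dtype=complex)
    probs = np.abs(c)**2 / np.sum(np.abs(c)**2)
    k = np.arange(len(c))                    # photon number in mode a
    var = probs @ k**2 - (probs @ k)**2
    return 4.0 * var

N = 10
noon = np.zeros(N + 1)
noon[0] = noon[N] = 1 / np.sqrt(2)           # (|N,0> + |0,N>) / sqrt(2)
split = np.sqrt([comb(N, k) for k in range(N + 1)])
split = split / np.linalg.norm(split)        # 50:50 beam-splitter-like input

for name, state in [("NOON", noon), ("binomial", split)]:
    qfi = phase_qfi(state)
    print(f"{name:8s} QFI = {qfi:6.1f}   bound: dphi >= {1/np.sqrt(qfi):.3f}")
```

The NOON state reaches the Heisenberg scaling $1/N$ while the binomial, classical-like state sits at the standard quantum limit $1/\sqrt{N}$; with losses the achievable optimum lies between the two, which is the regime the paper quantifies.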
We develop a practical quantum tomography protocol and implement measurements of pure states of ququarts realized with polarization states of photon pairs (biphotons). The method is based on an optimal choice of the measurement-scheme parameters, which provides better reconstruction quality for a fixed set of statistical data. The high accuracy of the state reconstruction (above 0.99) indicates that the developed methodology is adequate.
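The abstract does not spell out its accuracy measure; a common choice in tomography is the Uhlmann fidelity, sketched below for a toy ququart (4-level) example. Treat the metric and the example states as assumptions, not the paper's definitions.

```python
import numpy as np
from scipy.linalg import sqrtm

def fidelity(rho, sigma):
    """Uhlmann fidelity F(rho, sigma) = (tr sqrt(sqrt(rho) sigma sqrt(rho)))^2.

    A standard figure of merit for state reconstruction; assumed here as
    the metric behind the 'above 0.99' figure, which the abstract does
    not specify.
    """
    s = sqrtm(rho)
    return np.real(np.trace(sqrtm(s @ sigma @ s)))**2

# Toy ququart example: target pure state vs. a slightly noisy
# reconstruction (target mixed with a little white noise).
psi = np.array([1, 1j, 0, 1]) / np.sqrt(3)
target = np.outer(psi, psi.conj())
noisy = 0.98 * target + 0.02 * np.eye(4) / 4
print(f"F = {fidelity(target, noisy):.4f}")   # close to, but below, 1
```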
We consider error correction in quantum key distribution. Precautions must be taken to avoid Alice and Bob unwittingly ending up with different keys. Before running the error correction protocol, Alice and Bob normally sacrifice some bits to estimate the error rate. We show that, to reduce the probability that they end up with different keys to an acceptable level, a large number of bits must be sacrificed. If Alice and Bob can instead make a good guess about the error rate before the error correction, they can verify that their keys agree after the error correction protocol. This verification can be done by exploiting properties of the Low-Density Parity-Check (LDPC) codes used in the error correction. We compare the methods and show that verification often makes it possible to sacrifice fewer bits without compromising security. The improvement depends heavily on the error rate and the block length, but for a key produced by the IdQuantique system Clavis^2, the increase in the key rate is approximately 5 percent. We also show that for systems with large fluctuations in the error rate, a combination of the two methods is optimal.
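The abstract does not give its estimate, but a generic Hoeffding bound (an assumption here, likely looser than the paper's own analysis) already shows why sampling-based error-rate estimation is costly:

```python
from math import ceil, log

def bits_to_sacrifice(eps, delta):
    """Hoeffding-bound estimate of how many key bits must be sampled so the
    observed error rate is within +/- eps of the true rate, except with
    probability at most delta.

    P(|qber_hat - qber| >= eps) <= 2 * exp(-2 * n * eps**2)  =>  solve for n.
    A generic textbook bound used for illustration, not the paper's figure.
    """
    return ceil(log(2 / delta) / (2 * eps**2))

# Pinning the error rate down to +/- 0.5% with 1e-10 failure probability
# already costs hundreds of thousands of bits -- the motivation for
# verifying keys after error correction instead.
for eps in (0.01, 0.005):
    print(f"eps = {eps}: n >= {bits_to_sacrifice(eps, 1e-10)}")
```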