
Recursively Adaptive Quantum State Tomography: Theory and Two-qubit Experiment

Added by Zhibo Hou
Publication date: 2015
Fields: Physics
Language: English





Adaptive techniques have important potential for wide applications in enhancing precision of quantum parameter estimation. We present a recursively adaptive quantum state tomography (RAQST) protocol for finite dimensional quantum systems and experimentally implement the adaptive tomography protocol on two-qubit systems. In this RAQST protocol, an adaptive measurement strategy and a recursive linear regression estimation algorithm are performed. Numerical results show that our RAQST protocol can outperform the tomography protocols using mutually unbiased bases (MUB) and the two-stage MUB adaptive strategy even with the simplest product measurements. When nonlocal measurements are available, our RAQST can beat the Gill-Massar bound for a wide range of quantum states with a modest number of copies. We use only the simplest product measurements to implement two-qubit tomography experiments. In the experiments, we use error-compensation techniques to tackle systematic error due to misalignments and imperfection of wave plates, and achieve about 100-fold reduction of the systematic error. The experimental results demonstrate that the improvement of RAQST over nonadaptive tomography is significant for states with a high level of purity. Our results also show that this recursively adaptive tomography method is particularly effective for the reconstruction of maximally entangled states, which are important resources in quantum information.
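The recursive linear regression estimation at the heart of the protocol can be illustrated with a minimal sketch. The following is not the authors' two-qubit implementation; it is a hypothetical single-qubit analogue in which the Bloch vector is estimated by a standard recursive least-squares update from simulated measurement frequencies, with each new measurement axis refining the running estimate without refitting all past data.

```python
import numpy as np

# Hypothetical single-qubit illustration: the state is rho = (I + r.sigma)/2,
# and measuring spin along unit axis n yields outcome +1 with probability
# p = (1 + n.r)/2, so the empirical mean outcome is a linear function of r.
rng = np.random.default_rng(0)
r_true = np.array([0.6, 0.3, 0.5])   # true Bloch vector (assumed, |r| < 1)

theta = np.zeros(3)        # running least-squares estimate of r
P = np.eye(3) * 100.0      # inverse-information matrix, large initial uncertainty

for _ in range(2000):
    # pick a random measurement axis (a real adaptive protocol would choose
    # it from the current estimate; random axes keep the sketch short)
    n = rng.normal(size=3)
    n /= np.linalg.norm(n)
    p_plus = 0.5 * (1 + n @ r_true)              # Born-rule probability
    y = 2 * rng.binomial(50, p_plus) / 50 - 1    # empirical <n.sigma> from 50 shots
    # recursive least-squares update: no refit over past data is needed
    Pn = P @ n
    K = Pn / (1 + n @ Pn)                        # gain vector
    theta = theta + K * (y - n @ theta)          # correct the estimate
    P = P - np.outer(K, Pn)                      # shrink the uncertainty

print(np.round(theta, 2))  # should be close to r_true
```

The recursion keeps the per-measurement cost constant, which is what makes the estimator attractive when the measurement settings are updated on the fly.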




Related research

We report an experimental realization of adaptive Bayesian quantum state tomography for two-qubit states. Our implementation is based on the adaptive experimental design strategy proposed in [F. Huszar and N. M. T. Houlsby, Phys. Rev. A 85, 052120 (2012)] and provides an optimal measurement approach in terms of the information gain. We address the practical questions that one faces in any experimental application: the influence of technical noise, and the behavior of the tomographic algorithm for an easy-to-implement class of factorized measurements. In an experiment with polarization states of entangled photon pairs, we observe a lower instrumental noise floor and superior reconstruction accuracy for nearly pure states with the adaptive protocol compared to a non-adaptive one. At the same time, we show that for mixed states the restriction to factorized measurements results in no advantage for adaptive measurements, so general measurements have to be used.
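The Bayesian adaptive idea above can be sketched with a particle approximation of the posterior. This is not the Huszar-Houlsby implementation: it is a hypothetical single-qubit toy in which the posterior over Bloch vectors is a weighted particle cloud, and the next measurement axis is chosen by a simple variance-of-prediction proxy standing in for the full expected-information-gain criterion used in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
r_true = np.array([0.8, 0.1, 0.4])   # true Bloch vector (assumed)

# particle approximation of the posterior: uniform samples inside the Bloch ball
M = 4000
particles = rng.uniform(-1, 1, size=(M, 3))
particles = particles[np.linalg.norm(particles, axis=1) <= 1]
w = np.ones(len(particles)) / len(particles)

# a small fixed candidate set of measurement axes (an assumption of this sketch)
axes = [np.array(a, float) for a in
        [(1, 0, 0), (0, 1, 0), (0, 0, 1), (1, 1, 0), (1, 0, 1), (0, 1, 1)]]
axes = [a / np.linalg.norm(a) for a in axes]

for _ in range(600):
    # adaptive choice: pick the axis whose predicted outcome probability is
    # most uncertain under the current posterior (a proxy for information gain)
    preds = [0.5 * (1 + particles @ a) for a in axes]
    scores = [np.sum(w * (p - np.sum(w * p)) ** 2) for p in preds]
    k = int(np.argmax(scores))
    a, p_part = axes[k], preds[k]
    # simulate one shot on the true state, then do the Bayesian weight update
    outcome = rng.random() < 0.5 * (1 + a @ r_true)
    like = p_part if outcome else 1 - p_part
    w = w * like
    w /= w.sum()

r_est = w @ particles   # posterior-mean Bloch vector
```

A production particle filter would also resample degenerate weights; the sketch omits this to stay close to the bare Bayesian update.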
The precision limit in quantum state tomography is of great interest not only to practical applications but also to foundational studies. However, little is known about this subject in the multiparameter setting even theoretically due to the subtle information tradeoff among incompatible observables. In the case of a qubit, the theoretic precision limit was determined by Hayashi as well as Gill and Massar, but attaining the precision limit in experiments has remained a challenging task. Here we report the first experiment which achieves this precision limit in adaptive quantum state tomography on optical polarization qubits. The two-step adaptive strategy employed in our experiment is very easy to implement in practice. Yet it is surprisingly powerful in optimizing most figures of merit of practical interest. Our study may have significant implications for multiparameter quantum estimation problems, such as quantum metrology. Meanwhile, it may promote our understanding about the complementarity principle and uncertainty relations from the information theoretic perspective.
We investigate quantum state tomography (QST) for pure states and quantum process tomography (QPT) for unitary channels via adaptive measurements. For a quantum system with a $d$-dimensional Hilbert space, we first propose an adaptive protocol in which only $2d-1$ measurement outcomes are used to accomplish QST for all pure states. This idea is then extended to QPT for unitary channels, where an adaptive unitary process tomography (AUPT) protocol with $d^2+d-1$ measurement outcomes is constructed for any unitary channel. We experimentally implement the AUPT protocol in a two-qubit nuclear magnetic resonance system. We examine the performance of the AUPT protocol when applied to the Hadamard gate, the $T$ gate ($\pi/8$ phase gate), and the controlled-NOT gate, as these gates form a universal gate set for quantum information processing. As a comparison, standard QPT is also implemented for each gate. Our experimental results show that the AUPT protocol, which reconstructs unitary channels via adaptive measurements, significantly reduces the number of experiments required by standard QPT without considerable loss of fidelity.
Full quantum state tomography is used to characterize the state of an ensemble-based qubit implemented through two hyperfine levels in Pr3+ ions doped into a Y2SiO5 crystal. We experimentally verify that single-qubit rotation errors due to inhomogeneities of the ensemble can be suppressed using the Roos-Moelmer dark-state scheme. Fidelities above 90%, presumably limited by excited-state decoherence, were achieved. Although not explicitly accounted for in the Roos-Moelmer scheme, it appears that decoherence due to inhomogeneous broadening of the hyperfine transition is also largely suppressed.
Quantum state tomography is the task of determining an unknown quantum state by making measurements on identical copies of the state. Current algorithms are costly both on the experimental front -- requiring vast numbers of measurements -- as well as in terms of the computational time to analyze those measurements. In this paper, we address the problem of analysis speed and flexibility, introducing Neural Adaptive Quantum State Tomography (NA-QST), a machine-learning-based algorithm for quantum state tomography that adapts measurements and provides orders of magnitude faster processing while retaining state-of-the-art reconstruction accuracy. Our algorithm is inspired by particle swarm optimization and Bayesian particle-filter-based adaptive methods, which we extend and enhance using neural networks. The resampling step, in which a bank of candidate solutions -- particles -- is refined, is in our case learned directly from data, removing the computational bottleneck of standard methods. We successfully replace the Bayesian calculation that requires computational time of $O(\mathrm{poly}(n))$ with a learned heuristic whose time complexity empirically scales as $O(\log(n))$ with the number of copies measured $n$, while retaining the same reconstruction accuracy. This corresponds to a factor of a million speedup for $10^7$ copies measured. We demonstrate that our algorithm learns to work with basis, symmetric informationally complete (SIC), as well as other types of POVMs. We discuss the value of measurement adaptivity for each POVM type, demonstrating that its effect is significant only for basis POVMs. Our algorithm can be retrained within hours on a single laptop for a two-qubit situation, which suggests a feasible time-cost when extended to larger systems. It can also adapt to a subset of possible states, a choice of the type of measurement, and other experimental details.
