Bayesian inference is a powerful paradigm for quantum state tomography, treating uncertainty in meaningful and informative ways. Yet the numerical challenges associated with sampling from complex probability distributions hamper Bayesian tomography in practical settings. In this Article, we introduce an improved, self-contained approach for Bayesian quantum state estimation. Leveraging advances in machine learning and statistics, our formulation relies on highly efficient preconditioned Crank--Nicolson sampling and a pseudo-likelihood. We theoretically analyze the computational cost and provide explicit examples of inference for both actual and simulated datasets, illustrating improved performance with respect to existing approaches.
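A minimal sketch of the preconditioned Crank--Nicolson step is given below, assuming a standard-normal prior on the latent parameters and a user-supplied log-pseudo-likelihood; the function name and interface are illustrative, not taken from the paper. Each latent sample would then be mapped to a density matrix by whatever state parametrization the pseudo-likelihood is defined over.

import numpy as np

def pcn_sample(log_pseudo_likelihood, dim, n_steps=10000, beta=0.1, rng=None):
    # Preconditioned Crank--Nicolson MCMC under an N(0, I) prior: the proposal
    # preserves the prior, so the acceptance ratio involves only the
    # (pseudo-)likelihood, never the prior density.
    rng = np.random.default_rng() if rng is None else rng
    x = rng.standard_normal(dim)                  # initial draw from the prior
    ll = log_pseudo_likelihood(x)
    samples = []
    for _ in range(n_steps):
        prop = np.sqrt(1.0 - beta**2) * x + beta * rng.standard_normal(dim)
        ll_prop = log_pseudo_likelihood(prop)
        if np.log(rng.uniform()) < ll_prop - ll:  # Metropolis--Hastings accept/reject
            x, ll = prop, ll_prop
        samples.append(x.copy())
    return np.array(samples)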
Standard Bayesian credible-region theory for constructing an error region on the unique estimator of an unknown state in general quantum-state tomography, and for calculating its size and credibility, relies on heavy Monte~Carlo sampling of the state space followed by sample rejection. This conventional method typically gives negligible yield for very small error regions originating from large datasets. We propose an operationally reformulated theory that computes both size and credibility from region-average quantities, which in principle convey information about the behavior of these two properties as the credible region changes. We next suggest accelerated hit-and-run Monte~Carlo sampling, customized to the construction of Bayesian error regions, to efficiently compute region-average quantities, and provide its complexity estimates for quantum states. Finally, by understanding size as the region-average distance between two states in the region (measured, for instance, with the Hilbert--Schmidt, trace-class or Bures distance), we derive approximation formulas to analytically estimate both distance-induced size and credibility under the pseudo-Bloch parametrization without resorting to any Monte~Carlo computation.
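As an illustration of the sampling step, a bare-bones (non-accelerated) hit-and-run routine over a convex region specified only through a membership oracle might look as follows; the oracle interface, the bisection search for the chord endpoints, and the bound t_max on the chord length are assumptions of this sketch, not ingredients of the accelerated scheme itself. Uniform sampling along each random chord leaves the uniform distribution over the convex region invariant.

import numpy as np

def hit_and_run(in_region, x0, n_steps=5000, t_max=10.0, rng=None):
    # Assumes x0 lies inside the region and t_max exceeds its diameter.
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float)
    samples = []
    for _ in range(n_steps):
        d = rng.standard_normal(x.size)
        d /= np.linalg.norm(d)                    # random direction on the sphere
        lo = -_boundary(in_region, x, -d, t_max)  # signed distance to boundary backwards
        hi = _boundary(in_region, x, d, t_max)    # distance to boundary forwards
        x = x + rng.uniform(lo, hi) * d           # uniform point on the chord
        samples.append(x.copy())
    return np.array(samples)

def _boundary(in_region, x, d, t_max, iters=40):
    # Bisection for the largest t with x + t*d still inside the convex region.
    lo, hi = 0.0, t_max
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if in_region(x + mid * d) else (lo, mid)
    return lo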
In this work we consider practical implementations of Kitaev's algorithm for quantum phase estimation. We analyze the use of phase shifts that simplify the estimation of successive bits of an unknown phase $\varphi$. By using increasingly accurate shifts we reduce the number of measurements to the point where only a single measurement is needed for each additional bit. This results in an algorithm that can estimate $\varphi$ to an accuracy of $2^{-(m+2)}$ with probability at least $1-\epsilon$ using $N_{\epsilon} + m$ measurements, where $N_{\epsilon}$ is a constant that depends only on $\epsilon$ and the particular sampling algorithm. We present different sampling algorithms and study the exact number of measurements needed through careful numerical evaluation, and provide theoretical bounds and numerical values for $N_{\epsilon}$.
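The bit-by-bit structure with adaptive phase shifts can be illustrated by a noiseless classical simulation; the Born-rule model and single-shot readout below are idealizations, and a practical run would spend the additional $N_{\epsilon}$ measurements to suppress estimation errors, which this sketch omits.

import numpy as np

def iterative_phase_estimation(phi_true, m, rng=None):
    # Estimate m binary digits of phi_true, least significant first, using one
    # simulated measurement per bit and a feedback shift built from earlier bits.
    rng = np.random.default_rng() if rng is None else rng
    est = 0.0                                     # running estimate 0.b_{k+1}...b_m
    for k in range(m, 0, -1):
        # Idealized probability of outcome 0 for controlled-U^(2^(k-1)) with a
        # feedback phase shift of -2*pi*(est/2) applied to the ancilla.
        p0 = np.cos(np.pi * (2**(k - 1) * phi_true - est / 2.0))**2
        bit = int(rng.uniform() >= p0)            # a single measurement per bit
        est = (bit + est) / 2.0                   # prepend the new bit: 0.b_k b_{k+1}...
    return est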
In this paper, we explore an efficient online algorithm for quantum state estimation based on a matrix-exponentiated gradient method previously used in the context of machine learning. The state update is governed by a learning rate that determines how much weight is given to the new measurement results obtained in each step. We show that the running state estimate converges in probability to the true state for both noiseless and noisy measurements. We find that in the latter case the learning rate has to be chosen adaptively and made to decrease in order to guarantee convergence beyond the noise threshold. As a practical alternative, we then propose to use running averages of the measurement statistics together with a constant learning rate to overcome the noise problem. The proposed algorithm is compared numerically with batch maximum-likelihood and least-squares estimators. The results show superior performance of the new algorithm in terms of accuracy and runtime complexity.
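A toy version of a single matrix-exponentiated-gradient update is sketched below, with a squared-error loss on one POVM element standing in for the actual measurement model; the loss, learning rate, and interface are illustrative assumptions rather than the paper's exact scheme.

import numpy as np
from scipy.linalg import expm, logm

def meg_step(rho, povm_element, observed_freq, eta=0.1):
    # One online update: rho <- exp(log rho - eta * grad), renormalized to unit
    # trace. Starting from the maximally mixed state keeps rho full rank so the
    # matrix logarithm stays well defined.
    residual = np.trace(povm_element @ rho).real - observed_freq
    grad = 2.0 * residual * povm_element          # gradient of (tr(E rho) - f)^2
    updated = expm(logm(rho) - eta * grad)
    # Re-Hermitize against round-off and renormalize the trace.
    return (updated + updated.conj().T) / (2.0 * np.trace(updated).real)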
We demonstrate a fast, robust and non-destructive protocol for quantum state estimation based on continuous weak measurement in the presence of a controlled dynamical evolution. Our experiment uses optically probed atomic spins as a testbed and successfully reconstructs a range of trial states with fidelities of ~90%. The procedure holds promise as a practical diagnostic tool for the study of complex quantum dynamics and the testing of quantum hardware, and as a starting point for new types of quantum feedback control.
We describe an efficient implementation of Bayesian quantum phase estimation in the presence of noise and multiple eigenstates. The main contribution of this work is the dynamic switching between different representations of the phase distributions, namely truncated Fourier series and normal distributions. The Fourier-series representation has the advantage of being exact in many cases, but suffers from increasing complexity with each update of the prior. This necessitates truncation of the series, which eventually causes the distribution to become unstable. We derive bounds on the error in representing normal distributions with a truncated Fourier series, and use these to decide when to switch to the normal-distribution representation. This representation is much simpler, and was proposed in conjunction with rejection filtering for approximate Bayesian updates. We show that, in many cases, the update can be done exactly using analytic expressions, thereby greatly reducing the time complexity of the updates. Finally, when dealing with a superposition of several eigenstates, we need to estimate the relative weights. This can be formulated as a convex optimization problem, which we solve using a gradient-projection algorithm. By updating the weights at exponentially scaled iterations we greatly reduce the computational complexity without affecting the overall accuracy.
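For concreteness, the rejection-filtering style of approximate update mentioned above can be sketched as follows: draw from the Gaussian prior, accept each sample with probability given by the likelihood, and refit a Gaussian to the survivors. The cosine likelihood with experiment parameters k and beta is a commonly used phase-estimation model assumed here for illustration; the analytic updates of the paper would replace this sampling step entirely.

import numpy as np

def rejection_filter_update(mu, sigma, outcome, k, beta, n_samples=2000, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    phi = rng.normal(mu, sigma, n_samples)                 # samples from the Gaussian prior
    p = 0.5 * (1.0 + np.cos(k * phi + beta))               # assumed P(outcome = 0 | phi)
    like = p if outcome == 0 else 1.0 - p
    keep = phi[rng.uniform(size=n_samples) < like]         # accept with probability = likelihood
    if keep.size < 2:                                      # guard against rare empty batches
        return mu, sigma
    return keep.mean(), keep.std()                         # refit the normal representation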