Quantum state smoothing is a technique to construct an estimate of the quantum state at a particular time, conditioned on a measurement record from both before and after that time. The technique assumes that an observer, Alice, monitors part of the environment of a quantum system and that the remaining part of the environment, unobserved by Alice, is measured by a secondary observer, Bob, who may have a choice in how he monitors it. The effect of Bob's measurement choice on the effectiveness of Alice's smoothing has been studied in a number of recent papers. Here we expand upon the Letter which introduced linear Gaussian quantum (LGQ) state smoothing [Phys. Rev. Lett. 122, 190402 (2019)]. In the current paper we provide a more detailed derivation of the LGQ smoothing equations and address an open question about Bob's optimal measurement strategy. Specifically, we develop a simple hypothesis that allows one to approximate the optimal measurement choice for Bob given Alice's measurement choice. By optimal choice we mean the choice for Bob that will maximize the purity improvement of Alice's smoothed state compared to her filtered state (a state estimate based only on Alice's past measurement record). The hypothesis, that Bob should choose his measurement so that he observes the back-action on the system from Alice's measurement, seems contrary to one's intuition about quantum state smoothing. Nevertheless, we show that it works even beyond a linear Gaussian setting.
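To make the structure of the LGQ smoothing equations concrete, here is a sketch of the form they take, under the assumption (as in the classical Mayne-Fraser two-filter smoother, which they mirror) that a subscript ${\rm F}$ denotes moments conditioned on Alice's past record, ${\rm R}$ those obtained by retrofiltering her future record, and ${\rm S}$ the smoothed Gaussian state:
\[
V_{\rm S} = \left( V_{\rm F}^{-1} + V_{\rm R}^{-1} \right)^{-1}, \qquad
\langle \hat{x} \rangle_{\rm S} = V_{\rm S} \left( V_{\rm F}^{-1} \langle \hat{x} \rangle_{\rm F} + V_{\rm R}^{-1} \langle \hat{x} \rangle_{\rm R} \right),
\]
where $V$ is a covariance matrix and $\langle \hat{x} \rangle$ a mean vector; the cited Letter and the present paper supply the full derivation.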
Quantum state smoothing is a technique for estimating the quantum state of a partially observed quantum system at time $\tau$, conditioned on an entire observed measurement record (both before and after $\tau$). However, this smoothing technique requires an observer (Alice, say) to know the nature of the measurement records that are unknown to her in order to characterize the possible true states for Bob's (say) system. If Alice makes an incorrect assumption about the set of true states for Bob's system, she will obtain a smoothed state that is suboptimal, and, worse, may be unrealizable (not corresponding to a valid evolution for the true states) or even unphysical (not represented by a state matrix $\rho \geq 0$). In this paper, we review the historical background to quantum state smoothing, and list general criteria a smoothed quantum state should satisfy. Then we derive, for the case of linear Gaussian quantum systems, a necessary and sufficient constraint for realizability on the covariance matrix of the true state. Naturally, a realizable covariance of the true state guarantees a smoothed state which is physical. It might be thought that any putative true covariance which gives a physical smoothed state would be a realizable true covariance, but we show explicitly that this is not so. This underlines the importance of the realizability constraint.
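For orientation (this is the standard textbook condition, not the paper's new constraint), recall the physicality requirement on a Gaussian covariance matrix. In one common convention, with $\hbar = 1$, quadratures $\hat{r} = (\hat{q}_1, \hat{p}_1, \ldots)^{\top}$ obeying $[\hat{r}_j, \hat{r}_k] = i\Omega_{jk}$, and $V_{jk} = \tfrac{1}{2}\langle \{\Delta\hat{r}_j, \Delta\hat{r}_k\} \rangle$, a matrix $V$ is the covariance of a physical state $\rho \geq 0$ if and only if
\[
V + \frac{i}{2}\,\Omega \geq 0 .
\]
The realizability constraint derived in the paper, on the true-state covariance, is strictly stronger than merely yielding a smoothed state satisfying this condition, as the explicit counterexample mentioned above shows.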
Here, we are concerned with comparing estimation schemes for the quantum state under continuous measurement (quantum trajectories), namely quantum state filtering and, as introduced by us [Phys. Rev. Lett. 115, 180407 (2015)], quantum state smoothing. Unfortunately, the cumulative errors in the most typical simulations of quantum trajectories with a total simulation time $T$ can reach order $T\Delta t$. Moreover, these errors may correspond to deviations from valid quantum evolution as described by a completely positive map. Here we introduce a higher-order method that reduces the cumulative errors in the complete positivity of the evolution to order $T\Delta t^2$, whether for linear (unnormalised) or nonlinear (normalised) quantum trajectories. Our method also guarantees that the discrepancy in the average evolution between different detection methods (different `unravellings', such as quantum jumps or quantum diffusion) is similarly small. This equivalence is essential for comparing quantum state filtering to quantum state smoothing, as the latter assumes that all irreversible evolution is unravelled, although the estimator only has direct knowledge of some records. In particular, here we compare, for the first time, the average difference between filtering and smoothing conditioned on an event of which the estimator lacks direct knowledge: a photon detection within a certain time window. We find that the smoothed state is actually {\em less pure}, both before and after the time of the jump. Similarly, the fidelity of the smoothed state with the `true' (maximal knowledge) state is also lower than that of the filtered state before the jump. However, after the jump, the fidelity of the smoothed state is higher.
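As a point of reference, the following is a minimal Python/NumPy sketch (with placeholder names, not the paper's higher-order method) of the standard first-order Euler scheme for a diffusive quantum trajectory, i.e. the baseline whose cumulative error over a run of length $T$ scales as $T\Delta t$:
\begin{verbatim}
import numpy as np

def euler_step(rho, H, c, dt, rng):
    """One first-order step of a diffusive (homodyne-type) quantum
    trajectory for a single monitored channel c. Baseline sketch:
    cumulative error over total time T is of order T*dt."""
    dW = rng.normal(0.0, np.sqrt(dt))        # Wiener increment
    cd = c.conj().T
    # Unconditional Lindblad part: -i[H, rho] + D[c]rho
    drho = -1j * (H @ rho - rho @ H)
    drho += c @ rho @ cd - 0.5 * (cd @ c @ rho + rho @ cd @ c)
    # Measurement back-action: H[c]rho = c rho + rho c^dag - <c + c^dag> rho
    mean = np.trace(c @ rho + rho @ cd).real
    hrho = c @ rho + rho @ cd - mean * rho
    rho = rho + dt * drho + dW * hrho
    return rho / np.trace(rho).real          # renormalize
\end{verbatim}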
Quantum state smoothing is a technique to estimate an unknown true state of an open quantum system based on partial measurement information both prior and posterior to the time of interest. In this paper, we show that the smoothed quantum state is an optimal state estimator; that is, it minimizes a risk (expected cost) function. Specifically, we show that the smoothed quantum state is optimal with respect to two cost functions: the trace-square deviation from and the relative entropy to the unknown true state. However, when we consider a related risk function, the linear infidelity, we find, contrary to what one might expect, that the smoothed state is not optimal. For this case, we derive the optimal state estimator, which we call the lustrated smoothed state. It is a pure state, the eigenstate of the smoothed quantum state with the largest eigenvalue.
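The lustrated smoothed state defined above is straightforward to compute from the smoothed state matrix; a minimal sketch (function name ours):
\begin{verbatim}
import numpy as np

def lustrated_state(rho_smoothed):
    """Pure-state projector onto the eigenvector of the smoothed
    state with the largest eigenvalue, i.e. the optimal estimator
    for the linear infidelity described in the abstract."""
    vals, vecs = np.linalg.eigh(rho_smoothed)  # eigenvalues ascending
    psi = vecs[:, -1]                          # largest-eigenvalue eigenvector
    return np.outer(psi, psi.conj())
\end{verbatim}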
We develop a practical quantum tomography protocol and implement measurements of pure states of ququarts realized with polarization states of photon pairs (biphotons). The method is based on an optimal choice of the measurement scheme's parameters, which provides a better quality of reconstruction for a fixed set of statistical data. The high accuracy of the state reconstruction (above 0.99) indicates that the developed methodology is adequate.
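Assuming the quoted figure of 0.99 refers to the fidelity with the target pure state (a common figure of merit, though not spelled out above), it would be computed as in this sketch:
\begin{verbatim}
import numpy as np

def fidelity_pure_target(rho, psi):
    """Fidelity F = <psi|rho|psi> of a reconstructed ququart state
    rho (a 4x4 density matrix) with a pure target state psi."""
    return np.real(psi.conj() @ rho @ psi)
\end{verbatim}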
Rather than point estimators (states of a quantum system that represent one's best guess for the given data), we consider optimal regions of estimators. As the natural counterpart of the popular maximum-likelihood point estimator, we introduce the maximum-likelihood region: the region of largest likelihood among all regions of the same size. Here, the size of a region is its prior probability. Another concept is the smallest credible region: the smallest region with pre-chosen posterior probability. For both optimization problems, the optimal region has constant likelihood on its boundary. We discuss criteria for assigning prior probabilities to regions, and illustrate the concepts and methods with several examples.
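As an illustration of the second concept (our sketch, not the paper's algorithm), the smallest credible region can be approximated by Monte Carlo, using the property that the optimal region is bounded by a surface of constant likelihood: rank prior samples by likelihood and keep the top-ranked ones until the target posterior probability is reached.
\begin{verbatim}
import numpy as np

def smallest_credible_region(samples, likelihood, credibility=0.95):
    """Monte Carlo sketch of the smallest credible region.
    samples:     points drawn from the prior (array, one row per point)
    likelihood:  function giving the likelihood of a point
    credibility: pre-chosen posterior probability of the region
    Returns the highest-likelihood samples whose total posterior
    weight first reaches the requested credibility; the boundary
    between kept and discarded samples sits at a constant
    likelihood value, consistent with the result stated above."""
    L = np.array([likelihood(s) for s in samples])
    order = np.argsort(L)[::-1]            # descending likelihood
    weights = L[order] / L.sum()           # posterior weights of prior samples
    n = np.searchsorted(np.cumsum(weights), credibility) + 1
    return samples[order[:n]]
\end{verbatim}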