Adaptive filtering is a powerful class of control-theoretic concepts useful for extracting information from noisy data sets or for performing forward prediction in time for a dynamic system. The broad utility of the associated algorithms makes them attractive candidates for similar problems in the quantum domain. To date, however, the construction of adaptive filters for quantum systems has typically been carried out in terms of stochastic differential equations for weak, continuous quantum measurements, as used in linear quantum systems such as optical cavities. Discretized measurement models are not as easily treated in this framework, but are frequently employed in quantum information systems leveraging projective measurements. This paper presents a detailed analysis of several technical innovations that enable classical filtering of discrete projective measurements, useful for adaptively learning system dynamics, noise properties, or hardware performance variations in classically correlated measurement data from quantum devices. In previous work we studied a specific case of this framework, in which noise and calibration errors on qubit arrays could be efficiently characterized in space; here, we present a generalized analysis of filtering in quantum systems and demonstrate that the traditional convergence properties of nonlinear classical filtering hold when using single-shot projective measurements. These results are important early demonstrations indicating that a range of concepts and techniques from classical nonlinear filtering theory may be applied to the characterization of quantum systems involving discretized projective measurements, paving the way for broader adoption of control-theoretic techniques in quantum technology.
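To make the filtering setting concrete, the following minimal sketch runs a generic bootstrap particle filter on simulated single-shot projective (binary) outcomes. The drifting-angle model, the sin^2(theta/2) outcome probability, the noise levels, and all parameter names are illustrative assumptions, not the specific algorithm analyzed in the paper.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy model (not the paper's): a slowly drifting rotation angle
# theta_k in [0, pi], observed only through single-shot projective outcomes
# with P(1 | theta) = sin^2(theta / 2).

n_particles = 2000
particles = rng.uniform(0.0, np.pi, n_particles)   # prior over the angle
weights = np.full(n_particles, 1.0 / n_particles)

true_theta = 1.0
process_sigma = 0.01

for k in range(1000):
    # true system drifts slowly and emits one binary outcome per step
    true_theta = np.clip(true_theta + rng.normal(0.0, process_sigma), 0.0, np.pi)
    outcome = int(rng.random() < np.sin(true_theta / 2) ** 2)

    # predict: diffuse particles with the assumed process noise
    particles = np.clip(particles + rng.normal(0.0, process_sigma, n_particles), 0.0, np.pi)

    # update: reweight by the single-shot likelihood
    p1 = np.sin(particles / 2) ** 2
    weights *= p1 if outcome == 1 else 1.0 - p1
    weights /= weights.sum()

    # resample when the effective sample size collapses
    if 1.0 / np.sum(weights ** 2) < n_particles / 2:
        idx = rng.choice(n_particles, n_particles, p=weights)
        particles = particles[idx]
        weights = np.full(n_particles, 1.0 / n_particles)

print("true angle:", round(true_theta, 3),
      "filtered estimate:", round(float(np.average(particles, weights=weights)), 3))

Even though each shot carries only one bit of information, the posterior concentrates around the drifting parameter, which is the convergence behaviour the abstract refers to.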
Indistinguishable photons are imperative for advanced quantum communication networks. Indistinguishability is difficult to obtain because of environment-induced photon transformations and loss imparted by communication channels, especially in noisy scenarios. Strategies to mitigate these transformations often require hardware or software overhead that is restrictive (e.g. adding noise), infeasible (e.g. on a satellite), or time-consuming for deployed networks. Here we propose and develop resource-efficient Bayesian optimization techniques to rapidly and adaptively calibrate the indistinguishability of individual photons for quantum networks using only information derived from their measurement. To experimentally validate our approach, we demonstrate the optimization of Hong-Ou-Mandel interference between two photons -- a central task in quantum networking -- finding rapid, efficient, and reliable convergence towards maximal photon indistinguishability in the presence of high loss and shot noise. We expect our resource-optimized and experimentally friendly methodology will allow fast and reliable calibration of indistinguishable quanta, a necessary task in distributed quantum computing, communications, and sensing, as well as for fundamental investigations.
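As a rough illustration of such a calibration loop, the following self-contained sketch performs Bayesian optimization with a Gaussian-process surrogate and an expected-improvement acquisition to locate a simulated Hong-Ou-Mandel dip from Poissonian (shot-noise-limited) coincidence counts. The dip model, visibility, count rate, kernel settings, and function names are assumptions made for illustration, not the authors' implementation.

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

# Hypothetical stand-in for the experiment: the coincidence rate versus relative
# delay shows a HOM dip, and each "measurement" is a Poissonian photon count.
def measure_coincidences(delay, dip_center=0.15, visibility=0.95, width=0.3, mean_counts=40):
    rate = mean_counts * (1.0 - visibility * np.exp(-((delay - dip_center) ** 2) / (2 * width ** 2)))
    return rng.poisson(rate)

# Minimal Gaussian-process surrogate with an RBF kernel.
def rbf(x1, x2, length=0.3, var=1.0):
    return var * np.exp(-0.5 * ((x1[:, None] - x2[None, :]) / length) ** 2)

def gp_posterior(x_train, y_train, x_test, noise=0.1):
    K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf(x_train, x_test)
    mu = Ks.T @ np.linalg.solve(K, y_train)
    cov = rbf(x_test, x_test) - Ks.T @ np.linalg.solve(K, Ks)
    return mu, np.sqrt(np.clip(np.diag(cov), 1e-12, None))

# Expected improvement for minimizing the (normalized) coincidence rate.
def expected_improvement(mu, sigma, best):
    z = (best - mu) / sigma
    return (best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

delays = np.linspace(-1.0, 1.0, 201)        # candidate delay settings
x_obs = list(rng.uniform(-1.0, 1.0, 3))     # a few random initial probes
y_obs = [measure_coincidences(d) for d in x_obs]

for step in range(15):
    x_arr = np.array(x_obs)
    y_arr = (np.array(y_obs) - np.mean(y_obs)) / (np.std(y_obs) + 1e-9)
    mu, sigma = gp_posterior(x_arr, y_arr, delays)
    next_delay = delays[np.argmax(expected_improvement(mu, sigma, y_arr.min()))]
    x_obs.append(next_delay)
    y_obs.append(measure_coincidences(next_delay))

print("estimated dip position (maximal indistinguishability):",
      round(float(x_obs[int(np.argmin(y_obs))]), 3))

The key resource saving is that each iteration spends only one noisy acquisition per candidate setting, with the surrogate model deciding where to probe next.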
We show that a quantum state may be represented as the sum of a joint probability and a complex quantum modification term. The joint probability and the modification term can both be observed in successive projective measurements. The complex modification term is a measure of measurement disturbance. A selective phase rotation is needed to obtain the imaginary part. This leads to a complex quasiprobability, the Kirkwood distribution. We show that the Kirkwood distribution contains full information about the state if the two observables are maximal and complementary. The Kirkwood distribution gives a new picture of state reduction. In a nonselective measurement, the modification term vanishes. A selective measurement leads to a quantum state as a nonnegative conditional probability. We demonstrate the special significance of the Schwinger basis.
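For reference, one common form of the Kirkwood (Kirkwood-Dirac) quasiprobability for a state rho and two orthonormal bases {|a>}, {|b>} is sketched below; the notation is an assumption made here and may differ from the paper's.

\begin{align}
  K(a,b) &= \operatorname{Tr}\!\bigl[\,|b\rangle\langle b|a\rangle\langle a|\,\rho\,\bigr]
          = \langle b|a\rangle\,\langle a|\rho|b\rangle, \\
  \sum_b K(a,b) &= \langle a|\rho|a\rangle, \qquad
  \sum_a K(a,b) = \langle b|\rho|b\rangle .
\end{align}

The marginals are the ordinary Born probabilities, and when $\langle b|a\rangle \neq 0$ for every pair $(a,b)$, as for maximal complementary observables, the state can be recovered via $\langle a|\rho|b\rangle = K(a,b)/\langle b|a\rangle$, consistent with the informational completeness stated above.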
The free-free opacity in plasmas is fundamental to our understanding of energy transport in stellar interiors and to inertial confinement fusion research. However, theoretical predictions in the challenging dense-plasma regime are conflicting, and there is a dearth of accurate experimental data to allow for direct model validation. Here we present time-resolved transmission measurements in solid-density Al heated by an XUV free-electron laser. We use a novel functional-optimization approach to extract the temperature-dependent absorption coefficient directly from an oversampled pool of single-shot measurements, and find a pronounced enhancement of the opacity as the plasma is heated to temperatures of the order of the Fermi energy. Plasma heating and opacity enhancement are observed on ultrafast time scales, within the duration of the femtosecond XUV pulse. We attribute further rises in the opacity on picosecond timescales to melting and the formation of warm dense matter.
We use discrete-event simulation on a digital computer to study two different models of experimentally realizable quantum walks. The simulation models comply with Einstein locality, are as realistic as that of the simple random walk in that the particles follow well-defined trajectories, are free of concepts such as wave-particle duality and wave-function collapse, and reproduce the quantum-theoretical results by means of a cause-and-effect, event-by-event process. Our simulation model for the quantum walk experiment presented in [C. Robens et al., Phys. Rev. X 5, 011003 (2015)] reproduces the result of that experiment. Therefore, the claim that the result of the experiment rigorously excludes (i.e., falsifies) any explanation of quantum transport based on classical, well-defined trajectories needs to be revised.
The action of qubit channels on projective measurements of a qubit state is used to establish an equivalence between channels and the properties of generalized measurements characterized by bias and sharpness parameters. This can be interpreted as shifting the description of the measurement dynamics from the Schrödinger to the Heisenberg picture. In particular, unital quantum channels are shown to induce unbiased measurements. Markovian channels are found to be equivalent to measurements for which the sharpness is a monotonically decreasing function of time. These results are illustrated by considering various noise channels. Further, the effect of the bias and sharpness parameters on the energy cost of a measurement, and its interplay with the non-Markovianity of the dynamics, is also discussed.
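For concreteness, a commonly used parameterization of a dichotomic qubit effect in terms of bias and sharpness is sketched below; the symbols ($\gamma$ for bias, $\lambda$ for sharpness, $\hat{n}$ for the measurement direction) are assumptions and may differ from those used in the paper.

\begin{equation}
  E_{\pm} \;=\; \tfrac{1}{2}\bigl[(1 \pm \gamma)\,\mathbb{1} \;\pm\; \lambda\,\hat{n}\cdot\vec{\sigma}\bigr],
  \qquad E_{+} + E_{-} = \mathbb{1},
  \qquad 0 \le \lambda,\quad |\gamma| + \lambda \le 1 .
\end{equation}

In this parameterization, $\gamma = 0$ corresponds to an unbiased measurement, and $\gamma = 0$, $\lambda = 1$ recovers the sharp projective case, matching the unital-channel and sharpness-decay statements above.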