We review the continuous monitoring of a qubit through its spontaneous emission, at an introductory level. Contemporary experiments have been able to collect the fluorescence of an artificial atom coupled to a cavity and transmission line, and then make measurements of that emission to obtain diffusive quantum trajectories in the qubit's state. We give a straightforward theoretical overview of such scenarios, using a framework based on Kraus operators derived from a Bayesian update concept; we apply this flexible framework across common types of measurements, including photodetection, homodyne, and heterodyne monitoring, and illustrate its equivalence to the stochastic master equation formalism throughout. Special emphasis is given to homodyne (phase-sensitive) monitoring of fluorescence. The examples we develop are used not only to illustrate basic methods in quantum trajectories, but also to introduce some more advanced topics of contemporary interest, including the arrow of time in quantum measurement and trajectories following optimal measurement records derived from a variational principle. The derivations we perform lead directly from the development of a simple model to an understanding of recent experimental results.
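The Kraus-operator/Bayesian-update picture summarized above can be sketched in a few lines for the simplest case, photodetection of a decaying qubit. All parameters below (decay rate, time step, random seed) are illustrative choices of ours, not values from the text:

```python
import numpy as np

# Minimal sketch of a quantum jump trajectory via Kraus operators.
# In each small step dt, either a photon is detected (M1) or not (M0);
# the conditional state is updated Bayesian-style and renormalized.
gamma, dt, steps = 1.0, 1e-3, 3000                    # illustrative parameters
sm = np.array([[0, 0], [1, 0]], complex)              # lowering op |g><e| (basis: e=0, g=1)
M1 = np.sqrt(gamma * dt) * sm                         # Kraus op for a "click"
M0 = np.eye(2) - 0.5 * gamma * dt * (sm.conj().T @ sm)  # Kraus op for "no click"

rng = np.random.default_rng(1)
rho = np.diag([1.0, 0.0]).astype(complex)             # start in the excited state |e>
for _ in range(steps):
    p1 = np.trace(M1 @ rho @ M1.conj().T).real        # probability of a click this step
    K = M1 if rng.random() < p1 else M0
    rho = K @ rho @ K.conj().T
    rho /= np.trace(rho).real                         # renormalize (Bayes update)

# Conditioned on the full record, the state remains pure: the qubit either
# has jumped to |g> or is still in |e>.
```

Diffusive (homodyne/heterodyne) trajectories follow the same update logic with a continuum of measurement outcomes per step in place of the binary click/no-click branch.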
We discuss recent developments in measurement protocols that generate quantum entanglement between two remote qubits, focusing on the theory of joint continuous detection of their spontaneous emission. We consider a device geometry similar to that used in well-known Bell-state measurements, which we analyze using a conceptually transparent model of stochastic quantum trajectories; we use this to review photodetection, the most straightforward case, and then generalize to the diffusive trajectories from homodyne and heterodyne detection as well. Such quadrature measurement schemes are a realistic two-qubit extension of existing circuit-QED experiments which obtain quantum trajectories by homodyning or heterodyning a superconducting qubit's spontaneous emission, or an adaptation of existing optical measurement schemes to obtain jump trajectories from emitters. We mention key results, presented from within a single theoretical framework, and draw connections to concepts in the wider literature on entanglement generation by measurement (such as path information erasure and entanglement swapping). The photon which-path information acquisition, and therefore the two-qubit entanglement yield, is tunable under the homodyne detection scheme we discuss, at best generating the same average entanglement dynamics as in the comparable photodetection case. In addition to deriving this known equivalence, we extend past analyses in our characterization of the measurement dynamics: We include derivations of bounds on the fastest possible evolution towards a Bell state under joint homodyne measurement dynamics, and characterize the maximal entanglement yield possible using inefficient (lossy) measurements.
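The which-path-erasure mechanism behind such schemes admits a compact toy calculation (our own illustration, using the standard symmetrized jump operator for emission into a balanced beamsplitter): a single click behind the beamsplitter heralds a maximally entangled state of two initially excited qubits:

```python
import numpy as np

# Toy illustration: two excited qubits emit into a balanced beamsplitter.
# A click in one output port applies the symmetrized jump operator
# L = (s1 + s2)/sqrt(2), which erases which-path information.
sm = np.array([[0, 0], [1, 0]], complex)              # |g><e|, basis: e=0, g=1
I2 = np.eye(2)
L = (np.kron(sm, I2) + np.kron(I2, sm)) / np.sqrt(2)  # joint "click" Kraus operator

psi = np.zeros(4, complex)
psi[0] = 1.0                                          # initial state |ee>
out = L @ psi
out /= np.linalg.norm(out)                            # heralded post-click state

# Concurrence of a pure state a|ee> + b|eg> + c|ge> + d|gg> is 2|ad - bc|.
a, b, c, d = out
concurrence = 2 * abs(a * d - b * c)                  # = 1: (|eg>+|ge>)/sqrt(2), a Bell state
```

A click projects onto the symmetric single-excitation subspace, so the heralded state is maximally entangled; the homodyne schemes the abstract discusses replace the binary click with a tunable continuous record.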
We study a disordered ensemble of quantum emitters collectively coupled to a lossless cavity mode. The cavity mode is found to modify the localization properties of the dark eigenstates, which become localized on multiple, noncontiguous sites. We term such states semi-localized and characterize them by means of standard localization measures. We show that these states can contribute very efficiently to coherent energy transport. Our paper underlines the important role of dark states in systems with strong light-matter coupling.
Normalized correlation functions provide expedient means for determining the photon-number properties of light. These higher-order moments, also called the normalized factorial moments of photon number, can be utilized both for fast state classification and for in-depth state characterization. Further, non-classicality criteria have been derived based on their properties. Conveniently, the measurement of the normalized higher-order moments is often loss-independent, making their observation with lossy optical setups and imperfect detectors experimentally appealing. The normalized higher-order moments can, for example, be extracted from the photon-number distribution measured with a true photon-number-resolving detector, or accessed directly via manifold coincidence counting in the spirit of the Hanbury Brown and Twiss experiment. Alternatively, they can be inferred via homodyne detection. Here, we provide an overview of different kinds of state classification and characterization tasks that make use of normalized higher-order moments, and consider different aspects of measuring them with freely traveling light.
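As a minimal sketch of how these moments follow from a measured photon-number distribution, the normalized k-th factorial moment g^(k) = <n(n-1)...(n-k+1)> / <n>^k can be computed directly; the test distributions and truncation below are our own illustrative choices:

```python
import numpy as np

def g_k(p, k):
    """Normalized k-th factorial moment of a photon-number distribution p,
    where p[n] is the probability of detecting n photons."""
    n = np.arange(len(p), dtype=float)
    falling = np.ones_like(n)
    for j in range(k):                   # n(n-1)...(n-k+1); vanishes for n < k
        falling *= n - j
    return float(np.sum(p * falling) / np.sum(p * n) ** k)

nbar, N = 1.3, 200                       # mean photon number, truncation (illustrative)
n = np.arange(N, dtype=float)
# Poisson (coherent light): p_n = e^{-nbar} nbar^n / n!, built recursively.
poisson = np.exp(-nbar) * np.cumprod(np.concatenate(([1.0], nbar / n[1:])))
# Thermal light: p_n = nbar^n / (1 + nbar)^{n+1}.
thermal = (nbar / (1 + nbar)) ** n / (1 + nbar)

# g2: Poisson -> 1, thermal -> 2; g3: thermal -> 3! = 6
g2_poisson, g2_thermal, g3_thermal = g_k(poisson, 2), g_k(thermal, 2), g_k(thermal, 3)
```

The loss independence mentioned above is visible in this form: binomial loss with efficiency eta scales the k-th factorial moment by eta^k and <n>^k by the same factor, leaving g^(k) unchanged.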
We demonstrate a general method to measure the quantum state of an angular momentum of arbitrary magnitude. The (2F+1) x (2F+1) density matrix is completely determined from a set of Stern-Gerlach measurements with (4F+1) different orientations of the quantization axis. We implement the protocol for laser-cooled cesium atoms in the 6S_{1/2}(F=4) hyperfine ground state and apply it to a variety of test states prepared by optical pumping and Larmor precession. A comparison of input and measured states shows typical reconstruction fidelities of about 0.95.
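A quick parameter count (our own back-of-the-envelope check, not part of the protocol description) makes it plausible that 4F+1 orientations can suffice: the density matrix has (2F+1)^2 - 1 independent real parameters, while each orientation yields 2F+1 populations constrained to sum to one, i.e., 2F independent numbers:

```python
# Illustrative counting check for Stern-Gerlach tomography (our arithmetic).
F = 4                                   # cesium 6S_1/2 (F=4) ground state
unknowns = (2 * F + 1) ** 2 - 1         # independent real parameters of rho
per_axis = 2 * F                        # independent populations per orientation
data = (4 * F + 1) * per_axis           # total independent numbers measured
print(unknowns, data)                   # 80 136 -> overdetermined for F = 4
```

The count shows informational completeness is possible; which orientations actually determine rho uniquely is the nontrivial content of the protocol itself.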
Conventionally, unknown quantum states are characterized using quantum-state tomography based on strong or weak measurements carried out on an ensemble of identically prepared systems. By contrast, the use of protective measurements offers the possibility of determining quantum states from a series of weak, long measurements performed on a single system. Because the fidelity of a protectively measured quantum state is determined by the amount of state disturbance incurred during each protective measurement, it is crucial that the initial quantum state of the system is disturbed as little as possible. Here we show how to systematically minimize the state disturbance in the course of a protective measurement, thus enabling the maximization of the fidelity of the quantum-state measurement. Our approach is based on a careful tuning of the time dependence of the measurement interaction and is shown to be dramatically more effective in reducing the state disturbance than the previously considered strategy of weakening the measurement strength and increasing the measurement time. We describe a method for designing the measurement interaction such that the state disturbance exhibits polynomial decay to arbitrary order in the inverse measurement time $1/T$. We also show how one can achieve even faster, subexponential decay, and we find that it represents the smallest possible state disturbance in a protective measurement. In this way, our results show how to optimally measure the state of a single quantum system using protective measurements.