We show that, by including both the system and the apparatus in the quantum description of the measurement process and using the concept of conditional probabilities, it is possible to deduce the statistical operator of the system after a measurement with a given result, which gives the probability distribution for all possible subsequent measurements on the system. This statistical operator, representing the state of the system after the first measurement, is in general not the same as the one that would be obtained using the collapse postulate.
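A minimal numerical sketch of this comparison (an illustrative model, not the paper's specific construction): a qubit system is unsharply premeasured by a qubit pointer through a controlled rotation, the pointer outcome is conditioned on, and the resulting conditional state of the system is compared with the state given by the collapse postulate. The interaction $U$, the angle $\theta$, and the initial states are assumptions for illustration.

```python
import numpy as np

# Illustrative model: qubit system S, qubit pointer A.
ket0, ket1 = np.eye(2)
P0, P1 = np.outer(ket0, ket0), np.outer(ket1, ket1)

theta = np.pi / 2                              # unsharpness of the measurement
Ry = np.array([[np.cos(theta/2), -np.sin(theta/2)],
               [np.sin(theta/2),  np.cos(theta/2)]])
U = np.kron(P0, np.eye(2)) + np.kron(P1, Ry)   # controlled pointer rotation

psi_S = (ket0 + ket1) / np.sqrt(2)
psi_SA = U @ np.kron(psi_S, ket0)              # unitary premeasurement, pointer in |0>
rho_SA = np.outer(psi_SA, psi_SA.conj())

# Conditional state of S given pointer outcome 0: project A, renormalize,
# then trace out A.
Pi0_A = np.kron(np.eye(2), P0)
rho_c = Pi0_A @ rho_SA @ Pi0_A
rho_c /= np.trace(rho_c).real
rho_S = rho_c.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)   # partial trace over A

# Textbook collapse postulate applied directly to S:
rho_collapse = P0 @ np.outer(psi_S, psi_S.conj()) @ P0
rho_collapse /= np.trace(rho_collapse).real

print("conditional state:\n", np.round(rho_S, 4))
print("collapsed state:\n", np.round(rho_collapse, 4))   # the two differ
```

Because the pointer only partially distinguishes the system states, the conditional state retains off-diagonal coherence and differs from the fully collapsed state, consistent with the claim above.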
Considering a minimal number of assumptions, and in the context of the timeless formalism, we derive conditional probabilities for subsequent measurements in the non-relativistic regime. Only unitary transformations are considered, with detection processes described by generalized measurements (POVMs). One-time conditional probabilities are derived univocally and unambiguously via the Gleason-Busch theorem, even in puzzling cases such as the Wigner's friend scenario, where their form underlines the relative character of measurements. No paradoxical situations emerge, and the roles of Wigner and the friend are completely interchangeable. In particular, from the friend's perspective Wigner can be seen as being in a superposition of states.
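As a sketch of the kind of quantity involved, the conditional probability of a later POVM outcome given an earlier generalized-measurement outcome can be computed from Kraus operators. The unsharp qubit measurement below is an illustrative example, not the paper's construction; the parameter `eps` and the specific POVM elements are assumptions.

```python
import numpy as np

def cond_prob(rho, K_a, M_b):
    """p(b|a) = Tr[M_b K_a rho K_a^dag] / Tr[K_a rho K_a^dag]."""
    rho_a = K_a @ rho @ K_a.conj().T          # unnormalized post-measurement state
    p_a = np.trace(rho_a).real
    return np.trace(M_b @ rho_a).real / p_a, p_a

# Example: unsharp z measurement (Kraus operators K_a, sum_a K_a^dag K_a = I),
# followed by a sharp x measurement (POVM elements M_b, sum_b M_b = I).
eps = 0.1
K = [np.diag([np.sqrt(1 - eps), np.sqrt(eps)]),
     np.diag([np.sqrt(eps), np.sqrt(1 - eps)])]
plus = np.array([1.0, 1.0]) / np.sqrt(2)
M = [np.outer(plus, plus), np.eye(2) - np.outer(plus, plus)]

rho = np.outer(plus, plus)                    # initial state |+><+|
p_b_given_a, p_a = cond_prob(rho, K[0], M[0])
print(f"p(a=0) = {p_a:.4f},  p(b=+|a=0) = {p_b_given_a:.4f}")
```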
We use a novel form of quantum conditional probability to define new measures of quantum information in a dynamical context. We explore relationships between our new quantities and standard measures of quantum information, such as the von Neumann entropy. These quantities allow us to find new proofs of some standard results in quantum information theory, such as the concavity of the von Neumann entropy and Holevo's theorem. The existence of an underlying probability distribution helps to shed some light on the conceptual underpinnings of these results.
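For orientation, the two named results can be checked numerically from the standard definitions; the random ensemble below is purely for illustration and does not involve the paper's new quantities.

```python
import numpy as np

def vn_entropy(rho):
    """von Neumann entropy S(rho) = -Tr[rho log2 rho]."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]          # drop numerical zeros
    return float(-(evals * np.log2(evals)).sum())

def random_qubit_state(rng):
    psi = rng.normal(size=2) + 1j * rng.normal(size=2)
    psi /= np.linalg.norm(psi)
    return np.outer(psi, psi.conj())

# Ensemble {p_i, rho_i}: concavity says S(sum_i p_i rho_i) >= sum_i p_i S(rho_i),
# and the gap is the Holevo quantity chi, which bounds the accessible information.
rng = np.random.default_rng(0)
p = np.array([0.3, 0.7])
rhos = [random_qubit_state(rng) for _ in p]
rho_avg = sum(pi * ri for pi, ri in zip(p, rhos))

chi = vn_entropy(rho_avg) - sum(pi * vn_entropy(ri) for pi, ri in zip(p, rhos))
print(f"Holevo quantity chi = {chi:.4f}  (nonnegative by concavity)")
```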
We investigate the effect of conditional null measurements on a quantum system and find a rich variety of behaviors. Specifically, quantum dynamics with a time-independent Hamiltonian $H$ in a finite-dimensional Hilbert space are considered, with repeated strong null measurements of a specified state. We discuss four generic behaviors that emerge in these monitored systems. The first arises in systems without symmetries, hence without the associated degeneracies in the energy spectrum, and in the absence of dark states as well. In this case, a unique final state can be found, determined by the largest eigenvalue of the survival operator, the non-unitary operator encoding both the unitary evolution between measurements and the measurement itself. For a three-level system, this is similar to the well-known shelving effect. Secondly, for systems with a built-in symmetry and a correspondingly degenerate energy spectrum, the null measurements dynamically select the degenerate energy levels, while the non-degenerate levels are effectively wiped out. Thirdly, in the absence of dark states, and for specific choices of parameters, two or more eigenvalues of the survival operator match in magnitude, and this leads to an oscillatory behavior controlled by the measurement rate and not solely by the energy levels. Finally, when the control parameters are tuned such that the eigenvalues of the survival operator all coalesce to zero, one has exceptional points corresponding to situations that violate the null-measurement condition, making the conditional measurement process impossible.
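A sketch of the first scenario for a hypothetical three-level Hamiltonian without symmetries: build the survival operator (here assumed to be the projector onto "no detection" applied after each step of unitary evolution) and read off the unique final state from its largest-magnitude eigenvalue. The Hamiltonian, the interval `tau`, and the monitored state `d` are illustrative choices.

```python
import numpy as np
from scipy.linalg import expm

# Survival operator: S = (I - |d><d|) exp(-i H tau)
# (assumed ordering: evolve for time tau, then null-measure the state |d>).
H = np.array([[0.0, 0.5, 0.0],
              [0.5, 1.0, 0.5],
              [0.0, 0.5, 2.0]])          # hypothetical Hamiltonian, no symmetry
tau = 1.0
d = np.array([1.0, 0.0, 0.0])            # monitored (null-measured) state

U = expm(-1j * H * tau)
S = (np.eye(3) - np.outer(d, d.conj())) @ U

evals, evecs = np.linalg.eig(S)
order = np.argsort(-np.abs(evals))
print("survival eigenvalue magnitudes:", np.round(np.abs(evals[order]), 4))

# Repeated null measurements drive the system toward the eigenvector with
# the largest-magnitude eigenvalue (cf. the shelving-like unique final state).
final = evecs[:, order[0]]
print("final state (up to phase):", np.round(final / np.linalg.norm(final), 4))
```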
Conditional probabilities are a core concept in machine learning. For example, optimal prediction of a label $Y$ given an input $X$ corresponds to maximizing the conditional probability of $Y$ given $X$. A common approach to inference tasks is to learn a model of conditional probabilities. However, these models often rest on strong assumptions (e.g., log-linear models), and hence their estimates of conditional probabilities are not robust and depend heavily on the validity of those assumptions. Here we propose a framework for reasoning about conditional probabilities without assuming anything about the underlying distributions, except knowledge of their second-order marginals, which can be estimated from data. We show how this setting leads to guaranteed bounds on conditional probabilities, which can be calculated efficiently in a variety of settings, including structured prediction. Finally, we apply these bounds to semi-supervised deep learning, obtaining results competitive with variational autoencoders.
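As a toy illustration of the underlying idea (the framework above works with second-order marginals and structured models; the classical Fréchet inequalities below use only first-order marginals), knowledge of marginals alone already yields guaranteed bounds on a conditional probability:

```python
# Toy illustration (not the paper's method): Frechet inequalities bound a
# joint probability from its marginals, and hence bound the conditional:
#   max(0, P(A) + P(B) - 1) <= P(A and B) <= min(P(A), P(B))
#   => bounds on P(A | B) = P(A and B) / P(B).

def cond_prob_bounds(p_a: float, p_b: float) -> tuple[float, float]:
    """Guaranteed bounds on P(A|B) given only the marginals P(A), P(B)."""
    lo = max(0.0, p_a + p_b - 1.0) / p_b
    hi = min(p_a, p_b) / p_b
    return lo, hi

print(cond_prob_bounds(0.9, 0.8))   # -> (0.875, 1.0)
```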
Conditional probabilities in quantum systems which have both initial and final boundary conditions are commonly evaluated using the Aharonov-Bergmann-Lebowitz (ABL) rule. In this short note we present a seemingly disturbing paradox that appears when applying the rule to systems with slightly broken degeneracies. In these cases we encounter a singular limit: the probability jumps when going from perfect degeneracy to a negligibly broken one. We trace the origin of the paradox and resolve it from both traditional and modern perspectives in order to highlight the physics behind it: the necessity of taking into account the finite resolution of the measuring device. As a practical example, we study the application of the rule to the Zeeman effect. The analysis presented here underscores the general need to consider the governing physical principles before turning to the mathematical formalism, in particular when exploring puzzling quantum phenomena.
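A short sketch of the rule and of the singular limit, with hypothetical pre- and post-selected states: under perfect degeneracy the ABL amplitudes within the degenerate subspace add coherently, while an arbitrarily small splitting makes them add incoherently, producing the jump.

```python
import numpy as np

def abl(pre, post, projectors):
    """Aharonov-Bergmann-Lebowitz probabilities for an intermediate
    measurement with the given projectors, pre- and post-selection:
    P(k) = |<phi|P_k|psi>|^2 / sum_j |<phi|P_j|psi>|^2."""
    amps = np.array([post.conj() @ P @ pre for P in projectors])
    w = np.abs(amps) ** 2
    return w / w.sum()

# Hypothetical 3-level example: levels 1 and 2 degenerate, level 3 separate.
pre  = np.array([1.0, 1.0, 1.0]) / np.sqrt(3)
post = np.array([1.0, -1.0, 1.0]) / np.sqrt(3)
e = np.eye(3)
P1, P2, P3 = (np.outer(e[i], e[i]) for i in range(3))

# Perfect degeneracy: one projector on the 2-dim degenerate subspace,
# amplitudes add coherently.
p_deg = abl(pre, post, [P1 + P2, P3])
# Broken degeneracy (splitting -> 0): the two levels are resolved
# separately, so their amplitudes add incoherently; the probability jumps.
p_broken = abl(pre, post, [P1, P2, P3])
print("degenerate:      P(subspace)         =", round(p_deg[0], 4))
print("slightly broken: P(lvl 1) + P(lvl 2) =", round(p_broken[0] + p_broken[1], 4))
```

With these states the subspace probability is 0 under perfect degeneracy but 2/3 for any nonzero splitting, which is the discontinuity the note analyzes.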