We formulate entropic measurement uncertainty relations (MURs) for a spin-1/2 system. When incompatible observables are approximately jointly measured, we use relative entropy to quantify the information lost in the approximation, and we prove positive lower bounds for this loss: there is an unavoidable information loss. First, allowing only covariant approximate joint measurements, we find state-dependent MURs for two or three orthogonal spin-1/2 components. Second, considering arbitrary approximate joint measurements, we find state-independent MURs for two or three spin-1/2 components. In particular, we study how the MURs depend on the angle between two spin directions. Finally, we extend our approach to infinitely many incompatible observables, namely the spin components in all possible directions. In every scenario, we also characterize the optimal approximate joint measurements.
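For reference, the information-loss measure invoked in the abstract above is the relative entropy (Kullback-Leibler divergence). Its standard definition, stated here independently of the paper, for two probability distributions p and q over measurement outcomes x is:

```latex
S(p \| q) \;=\; \sum_{x} p(x) \log \frac{p(x)}{q(x)} \;\geq\; 0 ,
```

with equality if and only if p = q. In a MUR of this type, p is the outcome distribution of the sharp target observable and q is the corresponding marginal of the approximate joint measurement, so a strictly positive lower bound on the (summed) relative entropy certifies an unavoidable information loss.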
We establish uncertainty relations between information loss in general open quantum systems and the amount of non-ergodicity of the corresponding dynamics. The relations hold for arbitrary quantum systems interacting with an arbitrary quantum environ
How violently do two quantum operators disagree? Different fields of physics feature different measures of incompatibility: (i) In quantum information theory, entropic uncertainty relations constrain measurement outcomes. (ii) In condensed matter and
We derive entropic uncertainty relations for successive generalized measurements by using general descriptions of quantum measurement within two distinctive operational scenarios. In the first scenario, by merging two successive measurements into
Heisenberg's uncertainty principle has recently led to general measurement uncertainty relations for quantum systems: incompatible observables can be measured jointly or in sequence only with some unavoidable approximation, which can be quantified in
We formulate a new error-disturbance relation, which is free from explicit dependence upon variances in observables. This error-disturbance relation shows improvement over the one provided by the Branciard inequality and the Ozawa inequality for some
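For context on the inequalities named in the abstract above, the Ozawa error-disturbance relation (quoted here in its standard textbook form, not from this abstract) bounds the root-mean-square error \(\varepsilon(A)\) of an approximate measurement of A and the disturbance \(\eta(B)\) it causes on B, together with the state preparation uncertainties \(\sigma(A)\), \(\sigma(B)\):

```latex
\varepsilon(A)\,\eta(B) \;+\; \varepsilon(A)\,\sigma(B) \;+\; \sigma(A)\,\eta(B)
\;\geq\; \tfrac{1}{2}\,\bigl|\langle \psi | [A,B] | \psi \rangle\bigr| .
```

Note the explicit dependence on the variances \(\sigma(A)\), \(\sigma(B)\); the abstract's claimed contribution is a relation free of such variance terms.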