One-shot information theory entertains a plethora of entropic quantities, such as the smooth max-divergence, the hypothesis testing divergence and the information spectrum divergence, which characterize various operational tasks and are used to establish the asymptotic behavior of protocols in quantum information theory. Tight inequalities between these quantities are thus of immediate interest. In this note we use a minimax approach (appearing previously, for example, in proofs of the quantum substate theorem) to reduce the quantum problem to a commutative one, which allows us to derive such inequalities. Our derivations are conceptually different from previous arguments and in some cases lead to tighter relations. We hope that the approach discussed here can lead to progress on open problems in quantum Shannon theory, and we exemplify this by applying it to a simple case of the joint smoothing problem.
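For concreteness, two of the quantities most frequently compared in this context admit the following standard definitions (these are textbook conventions rather than material specific to the note; the smoothing is usually taken with respect to the purified distance):
\[
D_H^{\epsilon}(\rho\|\sigma) := -\log \min\bigl\{\operatorname{Tr}[Q\sigma] : 0 \le Q \le \mathbb{1},\ \operatorname{Tr}[Q\rho] \ge 1-\epsilon\bigr\},
\qquad
D_{\max}^{\epsilon}(\rho\|\sigma) := \min_{\tilde\rho \approx_\epsilon \rho}\ \log\min\bigl\{\lambda : \tilde\rho \le \lambda\sigma\bigr\}.
\]
The minimax step referred to above is, schematically, an exchange of optimizations in the spirit of Sion's theorem, $\inf_x \sup_y f(x,y) = \sup_y \inf_x f(x,y)$ for a convex-concave $f$ over suitable convex sets, after which the remaining optimization can be phrased in terms of commuting (classical) objects.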
Instrumental variables allow the estimation of cause-and-effect relations even in the presence of unobserved latent factors, thus providing a powerful tool for any science in which causal inference plays an important role. More recently, the instrumental scenario has also attracted increasing attention in quantum physics, since it is related to Bell's seminal theorem and in fact allows the detection of even stronger quantum effects, thus enhancing our current capabilities to process information and becoming a valuable tool in quantum cryptography. In this work, we further explore this bridge between causality and quantum theory and apply a technique, originally developed in the field of quantum foundations, to express the constraints implied by causal relations in the language of graph theory. This new approach can be applied to any causal model containing a latent variable. Here, by focusing on the instrumental scenario, it allows us to easily reproduce known results as well as obtain new ones and gain new insights into the connections and differences between the instrumental and Bell scenarios.
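As an illustration of the kind of constraint meant here (a classical result due to Pearl rather than a contribution of this work): in the instrumental causal structure, with instrument $Z$, treatment $X$, outcome $Y$ and a latent common cause of $X$ and $Y$, every compatible observed distribution obeys the instrumental inequality
\[
\max_{x} \sum_{y} \max_{z} P(x, y \mid z) \le 1 ,
\]
so an observed violation certifies that the assumed causal model cannot hold.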
Given a task of predicting $Y$ from $X$, a loss function $L$, and a set of probability distributions $\Gamma$ on $(X,Y)$, what is the optimal decision rule minimizing the worst-case expected loss over $\Gamma$? In this paper, we address this question by introducing a generalization of the principle of maximum entropy. Applying this principle to sets of distributions whose marginal on $X$ is constrained to be the empirical marginal from the data, we develop a general minimax approach for supervised learning problems. While for some loss functions, such as squared-error and log loss, the minimax approach rederives well-known regression models, for the 0-1 loss it results in a new linear classifier which we call the maximum entropy machine. The maximum entropy machine minimizes the worst-case 0-1 loss over the structured set of distributions and, in our numerical experiments, can outperform other well-known linear classifiers such as the SVM. We also prove a bound on the worst-case generalization error in the minimax approach.
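In symbols, and only as a schematic restatement of the setup just described (with $\delta$ ranging over decision rules and $\Gamma$ the constrained distribution set), the problem addressed is
\[
\min_{\delta}\ \max_{P \in \Gamma}\ \mathbb{E}_{P}\bigl[L\bigl(Y, \delta(X)\bigr)\bigr],
\]
and the generalized maximum-entropy principle selects, within $\Gamma$, the distribution maximizing a loss-dependent entropy; for the log loss with linear feature constraints this is the familiar maximum-entropy route to logistic regression.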
We derive an inequality for the linear entropy that gives sharp bounds for all finite-dimensional systems. The derivation is based on generalised Bloch decompositions and provides a strict improvement on the possible distribution of purities for all finite-dimensional quantum states. It thus extends the widely used concept of entropy inequalities from the asymptotic to the finite regime, and should also find applications in entanglement detection and efficient experimental characterisations of quantum states.
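For orientation, and using one common convention rather than the paper's specific result: writing a $d$-dimensional state in a generalised Bloch decomposition with traceless Hermitian generators $\lambda_i$ normalised as $\operatorname{Tr}[\lambda_i\lambda_j] = 2\delta_{ij}$,
\[
\rho = \frac{\mathbb{1}}{d} + \frac{1}{2}\sum_i r_i \lambda_i , \qquad \operatorname{Tr}\rho^2 = \frac{1}{d} + \frac{1}{2}\lvert\vec r\rvert^2 , \qquad S_L(\rho) = 1 - \operatorname{Tr}\rho^2 ,
\]
so that constraints on the Bloch vector translate directly into bounds on the purity and hence on the linear entropy.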
We consider state redistribution of a hybrid information source that has both classical and quantum components. The sender transmits classical and quantum information at the same time to the receiver, in the presence of classical and quantum side information both at the sender and at the receiver. The available resources are shared entanglement, and noiseless classical and quantum communication channels. We derive one-shot direct and converse bounds for these three resources, represented in terms of the smooth conditional entropies of the source state. Various coding theorems for two-party source coding problems are systematically obtained by reduction from our results, including ones that have not been addressed in the previous literature.
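The quantities entering such bounds are the smooth conditional min- and max-entropies; with the standard conventions (smoothing over an $\epsilon$-ball of subnormalized states in purified distance, and the duality evaluated on a purification $\rho_{ABC}$ of the source state),
\[
H_{\min}^{\epsilon}(A|B)_{\rho} := \max_{\tilde\rho \approx_\epsilon \rho}\ \max_{\sigma_B}\ \sup\bigl\{\lambda \in \mathbb{R} : \tilde\rho_{AB} \le 2^{-\lambda}\,\mathbb{1}_A \otimes \sigma_B\bigr\},
\qquad
H_{\max}^{\epsilon}(A|B)_{\rho} := -\,H_{\min}^{\epsilon}(A|C)_{\rho} .
\]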
We study Bell scenarios with binary outcomes supplemented by one bit of classical communication. We develop a method to find facet inequalities for such scenarios even when direct facet enumeration is infeasible, or at least difficult. Using this method, we partially solve the scenario in which Alice and Bob each choose between three inputs, finding a total of 668 inequivalent facet inequalities (with respect to relabelings of inputs and outputs). We also show that some of these inequalities are constructed from the facet inequalities found in scenarios without communication, i.e. the well-known Bell inequalities.
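As a point of reference for that last remark, the vertices of the ordinary (no-communication) local polytope in the 3-input, 2-output scenario can be enumerated directly; the sketch below (in Python, with names of our own choosing, and not the method developed in the paper) illustrates only this baseline step, after which facet inequalities would typically be extracted with a convex-hull tool such as cdd or PORTA. The polytope with one bit of communication has additional vertices indexed by Alice's message strategy and requires the more refined method described above.

from itertools import product

N_INPUTS = 3   # settings per party: x, y in {0, 1, 2}
N_OUTPUTS = 2  # outcomes per party: a, b in {0, 1}

def local_deterministic_vertices(n_inputs=N_INPUTS, n_outputs=N_OUTPUTS):
    # A deterministic local strategy assigns a = f(x) and b = g(y);
    # each pair (f, g) yields one vertex p(a, b | x, y) of the local polytope,
    # stored here as a flat tuple indexed by (x, y, a, b).
    response_functions = list(product(range(n_outputs), repeat=n_inputs))
    vertices = []
    for f in response_functions:        # Alice's response function
        for g in response_functions:    # Bob's response function
            behaviour = tuple(
                1 if (a == f[x] and b == g[y]) else 0
                for x in range(n_inputs)
                for y in range(n_inputs)
                for a in range(n_outputs)
                for b in range(n_outputs)
            )
            vertices.append(behaviour)
    return vertices

vertices = local_deterministic_vertices()
print(len(vertices), "deterministic local vertices")   # 2^3 * 2^3 = 64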