
Quantum $f$-divergences in von Neumann algebras II. Maximal $f$-divergences

Posted by Fumio Hiai
Publication date: 2018
Research field: Physics
Paper language: English
Author: Fumio Hiai





As a continuation of the paper [20] on standard $f$-divergences, we make a systematic study of maximal $f$-divergences in general von Neumann algebras. For maximal $f$-divergences, apart from their definition based on Haagerup's $L^1$-space, we present a general integral expression and a variational expression in terms of reverse tests. From this definition and these expressions we prove important properties of maximal $f$-divergences, for instance the monotonicity inequality, the joint convexity, the lower semicontinuity, and the martingale convergence. An inequality between the standard and the maximal $f$-divergences is also given.
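For orientation, a finite-dimensional sketch (not part of the abstract): for density matrices $\rho$, $\sigma$ with $\sigma$ invertible, the maximal $f$-divergence admits the closed form
$$\widehat{D}_f(\rho\|\sigma)=\operatorname{Tr}\bigl[\sigma^{1/2}\,f\bigl(\sigma^{-1/2}\rho\,\sigma^{-1/2}\bigr)\,\sigma^{1/2}\bigr],$$
and the variational expression in terms of reverse tests reads, roughly,
$$\widehat{D}_f(\rho\|\sigma)=\inf\bigl\{D_f(p\|q):\Phi(p)=\rho,\ \Phi(q)=\sigma\bigr\},$$
where the infimum runs over classical probability distributions $p,q$ and quantum channels $\Phi$ preparing $(\rho,\sigma)$ from $(p,q)$, and $D_f(p\|q)=\sum_i q_i f(p_i/q_i)$ is the classical $f$-divergence. The Haagerup $L^1$-space definition in the paper reduces to this picture in the matrix case.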




Read also

163 - Fumio Hiai 2018
We make a systematic study of standard $f$-divergences in general von Neumann algebras. An important ingredient of our study is to extend Kosaki's variational expression of the relative entropy to an arbitrary standard $f$-divergence, from which most of the important properties of standard $f$-divergences follow immediately. In a similar manner we give a comprehensive exposition of the Rényi divergence in von Neumann algebras. Some results on relative hamiltonians formerly studied by Araki and Donald are improved as a by-product.
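As a point of reference (an illustration under standard finite-dimensional conventions, not quoted from the abstract), the standard $f$-divergence is Petz's quasi-entropy built from the relative modular operator:
$$S_f(\rho\|\sigma)=\bigl\langle\sigma^{1/2},\,f(\Delta_{\rho,\sigma})\,\sigma^{1/2}\bigr\rangle_{\mathrm{HS}},\qquad \Delta_{\rho,\sigma}=L_\rho R_\sigma^{-1},$$
where $L_\rho$ and $R_\sigma$ denote left and right multiplication on the Hilbert--Schmidt space. Taking $f(t)=t\log t$ recovers the relative entropy $\operatorname{Tr}\rho(\log\rho-\log\sigma)$, the special case for which Kosaki's variational expression was originally devised.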
140 - F. Hiai, M. Mosonyi, D. Petz 2010
Quantum $f$-divergences are a quantum generalization of the classical notion of $f$-divergences, and are a special case of Petz's quasi-entropies. Many well-known distinguishability measures of quantum states are given by, or derived from, $f$-divergences; special examples include the quantum relative entropy, the Rényi relative entropies, and the Chernoff and Hoeffding measures. Here we show that the quantum $f$-divergences are monotonic under the dual of Schwarz maps whenever the defining function is operator convex. This extends and unifies all previously known monotonicity results. We also analyze the case where the monotonicity inequality holds with equality, and extend Petz's reversibility theorem to a large class of $f$-divergences and other distinguishability measures. We apply our findings to the problem of quantum error correction, and show that if a stochastic map preserves the pairwise distinguishability on a set of states, as measured by a suitable $f$-divergence, then its action can be reversed on that set by another stochastic map that can be constructed from the original one in a canonical way. We also provide an integral representation for operator convex functions on the positive half-line, which is the main ingredient in extending previously known results on the monotonicity inequality and the case of equality. We also consider some special cases where the convexity of $f$ is sufficient for the monotonicity, and obtain the inverse Hölder inequality for operators as an application. The presentation is completely self-contained and requires only standard knowledge of matrix analysis.
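To make two of the statements above concrete (an illustrative special case, with a completely positive trace-preserving map $\Phi$ in place of the more general Schwarz-map setting of the paper): for operator convex $f$, the monotonicity result is the data processing inequality
$$S_f(\Phi(\rho)\|\Phi(\sigma))\le S_f(\rho\|\sigma),$$
and the integral representation referred to expresses an operator convex function on $[0,\infty)$ as
$$f(x)=f(0)+ax+bx^2+\int_{(0,\infty)}\frac{x^2}{x+s}\,d\mu(s),$$
with $a\in\mathbb{R}$, $b\ge 0$, and $\mu$ a nonnegative measure, thereby reducing general $f$ to the elementary operator convex functions $x\mapsto x^2/(x+s)$.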
124 - Yan Pautrat, Simeng Wang 2020
A lemma stated by Ke Li in [arXiv:1208.1400] has been used in e.g. [arXiv:1510.04682, arXiv:1706.04590, arXiv:1612.01464, arXiv:1308.6503, arXiv:1602.08898] for various tasks in quantum hypothesis testing, data compression with quantum side information, and quantum key distribution. This lemma was originally proven in finite dimension, with a direct extension to type I von Neumann algebras. Here we show that the use of modular theory allows one to give a more transparent meaning to the objects constructed by the lemma, and to prove it for general von Neumann algebras. This yields immediate generalizations of, e.g., [arXiv:1510.04682].
94 - Mark M. Wilde 2021
The quantum relative entropy is a measure of the distinguishability of two quantum states, and it is a unifying concept in quantum information theory: many information measures such as entropy, conditional entropy, mutual information, and entanglement measures can be realized from it. As such, there has been broad interest in generalizing the notion to further understand its most basic properties, one of which is the data processing inequality. The quantum $f$-divergence of Petz is one generalization of the quantum relative entropy, and it also leads to other relative entropies, such as the Petz--Rényi relative entropies. In this contribution, I introduce the optimized quantum $f$-divergence as a related generalization of quantum relative entropy. I prove that it satisfies the data processing inequality, and the method of proof relies upon the operator Jensen inequality, similar to Petz's original approach. Interestingly, the sandwiched Rényi relative entropies are particular examples of the optimized $f$-divergence. Thus, one benefit of this approach is that there is now a single, unified approach for establishing the data processing inequality for both the Petz--Rényi and sandwiched Rényi relative entropies, for the full range of parameters for which it is known to hold.
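For concreteness (finite-dimensional formulas under the usual conventions, not taken from the abstract), the two Rényi families mentioned here are
$$D_\alpha(\rho\|\sigma)=\frac{1}{\alpha-1}\log\operatorname{Tr}\bigl[\rho^\alpha\sigma^{1-\alpha}\bigr],\qquad \widetilde{D}_\alpha(\rho\|\sigma)=\frac{1}{\alpha-1}\log\operatorname{Tr}\Bigl[\bigl(\sigma^{\frac{1-\alpha}{2\alpha}}\rho\,\sigma^{\frac{1-\alpha}{2\alpha}}\bigr)^{\alpha}\Bigr],$$
and the point of the optimized $f$-divergence is that a single operator-Jensen argument yields data processing for both families at once.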
We develop a rigorous and general framework for constructing information-theoretic divergences that subsume both $f$-divergences and integral probability metrics (IPMs), such as the $1$-Wasserstein distance. We prove under which assumptions these divergences, hereafter referred to as $(f,\Gamma)$-divergences, provide a notion of `distance' between probability measures, and show that they can be expressed as a two-stage mass-redistribution/mass-transport process. The $(f,\Gamma)$-divergences inherit features from IPMs, such as the ability to compare distributions which are not absolutely continuous, as well as from $f$-divergences, namely the strict concavity of their variational representations and the ability to control heavy-tailed distributions for particular choices of $f$. When combined, these features establish a divergence with improved properties for estimation, statistical learning, and uncertainty quantification applications. Using statistical learning as an example, we demonstrate their advantage in training generative adversarial networks (GANs) for heavy-tailed, not absolutely continuous sample distributions. We also show improved performance and stability over gradient-penalized Wasserstein GAN in image generation.
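Two classical variational formulas may help place this construction (a sketch from standard definitions, not the paper's exact formulation): the Legendre-type representation of an $f$-divergence,
$$D_f(P\|Q)=\sup_{g}\bigl\{\mathbb{E}_P[g]-\mathbb{E}_Q[f^*(g)]\bigr\},$$
with $f^*$ the convex conjugate of $f$, and the integral probability metric
$$W_\Gamma(P,Q)=\sup_{g\in\Gamma}\bigl\{\mathbb{E}_P[g]-\mathbb{E}_Q[g]\bigr\}$$
over a function class $\Gamma$ (e.g. the $1$-Lipschitz functions for the $1$-Wasserstein distance). Restricting the test functions in the first supremum to $g\in\Gamma$ is, roughly, the mechanism by which the $(f,\Gamma)$-divergences interpolate between the two.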
