
Tight Bounds for Symmetric Divergence Measures and a New Inequality Relating $f$-Divergences

Posted by Igal Sason
Publication date: 2015
Research field: Information engineering
Paper language: English
Author: Igal Sason





Tight bounds for several symmetric divergence measures are introduced, given in terms of the total variation distance. Each of these bounds is attained by a pair of probability distributions defined on two or three elements. An application of these bounds to lossless source coding is provided, refining and improving a certain bound by Csiszár. A new inequality relating $f$-divergences is derived, and its use is exemplified. The last section of this conference paper, as well as some new paragraphs throughout the paper that are linked to new references, is not included in the journal paper published in the February 2015 issue of the IEEE Trans. on Information Theory (see arXiv:1403.7164).
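For context, recall the standard definitions underlying these results (stated here for discrete distributions; they are not specific to this paper, and conventions for the total variation distance differ by a factor of 2 across the literature). For a convex function $f$ with $f(1) = 0$,
$$ D_f(P \| Q) \triangleq \sum_x Q(x)\, f\!\left(\frac{P(x)}{Q(x)}\right), \qquad |P - Q| \triangleq \sum_x |P(x) - Q(x)|, $$
and the total variation distance is itself an $f$-divergence up to normalization, obtained with $f(t) = |t - 1|$. Typical symmetric divergence measures of this kind include Jeffreys' divergence $D(P\|Q) + D(Q\|P)$ and the Jensen-Shannon divergence $\tfrac{1}{2} D(P\|M) + \tfrac{1}{2} D(Q\|M)$ with $M = \tfrac{1}{2}(P + Q)$.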




Read also

Igal Sason, Sergio Verdú (2015)
This paper develops systematic approaches to obtain $f$-divergence inequalities, dealing with pairs of probability measures defined on arbitrary alphabets. Functional domination is one such approach, where special emphasis is placed on finding the best possible constant upper bounding a ratio of $f$-divergences. Another approach used for the derivation of bounds among $f$-divergences relies on moment inequalities and the logarithmic-convexity property, which results in tight bounds on the relative entropy and Bhattacharyya distance in terms of $\chi^2$ divergences. A rich variety of bounds are shown to hold under boundedness assumptions on the relative information. Special attention is devoted to the total variation distance and its relation to the relative information and relative entropy, including reverse Pinsker inequalities, as well as to the $E_\gamma$ divergence, which generalizes the total variation distance. Pinsker's inequality is extended to this type of $f$-divergence, a result which leads to an inequality linking the relative entropy and relative information spectrum. Integral expressions of the Rényi divergence in terms of the relative information spectrum are derived, leading to bounds on the Rényi divergence in terms of either the variational distance or relative entropy.
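As a quick numerical illustration of the kind of relations surveyed above, the following minimal Python sketch (not code from the paper; the distributions and function names are chosen here for illustration) computes the total variation distance, relative entropy, and $\chi^2$ divergence for two small distributions, and checks the classical Pinsker inequality $D(P\|Q) \ge 2\,\mathrm{TV}(P,Q)^2$ (in nats, with TV defined as half the $L_1$ distance) together with the well-known bound $D(P\|Q) \le \log(1 + \chi^2(P\|Q))$.

import numpy as np

def total_variation(p, q):
    # TV(P, Q) = (1/2) * sum_x |P(x) - Q(x)|
    return 0.5 * float(np.abs(p - q).sum())

def relative_entropy(p, q):
    # D(P || Q) = sum_x P(x) * log(P(x) / Q(x)), in nats; assumes supp(P) is contained in supp(Q)
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

def chi_squared(p, q):
    # chi^2(P || Q) = sum_x (P(x) - Q(x))^2 / Q(x)
    return float(np.sum((p - q) ** 2 / q))

p = np.array([0.5, 0.3, 0.2])   # illustrative 3-element distributions
q = np.array([0.2, 0.4, 0.4])

tv, kl, chi2 = total_variation(p, q), relative_entropy(p, q), chi_squared(p, q)
print(f"TV = {tv:.4f}, D = {kl:.4f} nats, chi^2 = {chi2:.4f}")
print("Pinsker bound   :", kl >= 2 * tv ** 2)           # D >= 2 * TV^2
print("Chi-square bound:", kl <= np.log(1 + chi2))      # D <= log(1 + chi^2)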
Igal Sason (2019)
This paper is focused on derivations of data-processing and majorization inequalities for $f$-divergences, and their applications in information theory and statistics. For the accessibility of the material, the main results are first introduced without proofs, followed by exemplifications of the theorems with further related analytical results, interpretations, and information-theoretic applications. One application refers to the performance analysis of list decoding with either fixed or variable list sizes; some earlier bounds on the list decoding error probability are reproduced in a unified way, and new bounds are obtained and exemplified numerically. Another application is related to a study of the quality of approximating a probability mass function, induced by the leaves of a Tunstall tree, by an equiprobable distribution. The compression rates of finite-length Tunstall codes are further analyzed for asserting their closeness to the Shannon entropy of a memoryless and stationary discrete source. Almost all the analysis is relegated to the appendices, which form a major part of this manuscript.
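For reference, the data-processing inequality in question has the following standard form (a textbook statement, not a new result of this paper): if $W$ is a stochastic transformation mapping the input distributions $P$ and $Q$ to output distributions $PW$ and $QW$, then for every convex $f$ with $f(1) = 0$,
$$ D_f(PW \| QW) \le D_f(P \| Q), $$
i.e., no post-processing of the observations can increase any $f$-divergence; the majorization inequalities studied in the paper complement this principle.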
Igal Sason (2018)
This paper is focused on $f$-divergences and consists of three main contributions. The first one introduces integral representations of a general $f$-divergence by means of the relative information spectrum. The second part provides a new approach for the derivation of $f$-divergence inequalities, and it exemplifies their utility in the setup of Bayesian binary hypothesis testing. The last part of this paper further studies the local behavior of $f$-divergences.
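One reason such inequalities are useful in Bayesian binary hypothesis testing is the classical identity (standard, not specific to this paper) relating the minimum error probability under equal priors to the total variation distance:
$$ P_e^{\min} = \tfrac{1}{2}\bigl(1 - \mathrm{TV}(P, Q)\bigr), \qquad \mathrm{TV}(P, Q) = \tfrac{1}{2} \sum_x |P(x) - Q(x)|, $$
so any bound relating an $f$-divergence to the total variation distance translates directly into a bound on the optimal testing performance.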
We derive a lower bound on the smallest output entropy that can be achieved via vector quantization of a $d$-dimensional source with given expected $r$th-power distortion. Specialized to the one-dimensional case, and in the limit of vanishing distortion, this lower bound converges to the output entropy achieved by a uniform quantizer, thereby recovering the result by Gish and Pierce that uniform quantizers are asymptotically optimal as the allowed distortion tends to zero. Our lower bound holds for all $d$-dimensional memoryless sources having finite differential entropy and whose integer part has finite entropy. In contrast to Gish and Pierce, we do not require any additional constraints on the continuity or decay of the source probability density function. For one-dimensional sources, the derivation of the lower bound reveals a necessary condition for a sequence of quantizers to be asymptotically optimal as the allowed distortion tends to zero. This condition implies that any sequence of asymptotically-optimal almost-regular quantizers must converge to a uniform quantizer as the allowed distortion tends to zero.
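For orientation, the one-dimensional Gish-Pierce result mentioned above corresponds to the classical high-resolution approximation, stated here for mean-squared error ($r = 2$) as a standard fact rather than a contribution of this paper: a uniform quantizer with step size $\Delta$ incurs distortion approximately $\Delta^2 / 12$ and output entropy approximately $h(X) - \log \Delta$, so the minimum achievable output entropy at distortion $D$ behaves like
$$ H(D) \approx h(X) - \tfrac{1}{2} \log(12 D) \qquad \text{as } D \to 0. $$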
Jeremiah Birrell (2020)
Distributionally robust optimization (DRO) is a widely used framework for optimizing objective functionals in the presence of both randomness and model-form uncertainty. A key step in the practical solution of many DRO problems is a tractable reformulation of the optimization over the chosen model ambiguity set, which is generally infinite dimensional. Previous works have solved this problem in the case where the objective functional is an expected value. In this paper we study objective functionals that are the sum of an expected value and a variance penalty term. We prove that the corresponding variance-penalized DRO problem over an $f$-divergence neighborhood can be reformulated as a finite-dimensional convex optimization problem. This result also provides tight uncertainty quantification bounds on the variance.
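For orientation, the previously solved expected-value case admits a well-known finite-dimensional dual when the ambiguity set is a Kullback-Leibler ball (a standard reformulation stated here for illustration; the paper itself treats general $f$-divergence neighborhoods and adds a variance penalty): under mild regularity conditions,
$$ \sup_{Q:\, D(Q \| P) \le \eta} \mathbb{E}_Q[\phi] \;=\; \inf_{\lambda > 0} \Bigl\{ \lambda \eta + \lambda \log \mathbb{E}_P\bigl[e^{\phi / \lambda}\bigr] \Bigr\}, $$
which follows from the Donsker-Varadhan variational formula for the relative entropy.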