
$f$-divergence Inequalities

Added by Igal Sason
Publication date: 2015
Language: English





This paper develops systematic approaches to obtain $f$-divergence inequalities, dealing with pairs of probability measures defined on arbitrary alphabets. Functional domination is one such approach, where special emphasis is placed on finding the best possible constant upper bounding a ratio of $f$-divergences. Another approach used for the derivation of bounds among $f$-divergences relies on moment inequalities and the logarithmic-convexity property, which results in tight bounds on the relative entropy and Bhattacharyya distance in terms of $\chi^2$ divergences. A rich variety of bounds are shown to hold under boundedness assumptions on the relative information. Special attention is devoted to the total variation distance and its relation to the relative information and relative entropy, including reverse Pinsker inequalities, as well as to the $E_\gamma$ divergence, which generalizes the total variation distance. Pinsker's inequality is extended to this type of $f$-divergence, a result which leads to an inequality linking the relative entropy and relative information spectrum. Integral expressions of the Rényi divergence in terms of the relative information spectrum are derived, leading to bounds on the Rényi divergence in terms of either the variational distance or relative entropy.
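Background (standard definitions not specific to this paper, stated here assuming $P \ll Q$ for simplicity): for a convex function $f:(0,\infty) \to \mathbb{R}$ with $f(1)=0$, the $f$-divergence between probability measures $P$ and $Q$ is $D_f(P\|Q) = \int f\big(\tfrac{dP}{dQ}\big)\, dQ$. The relative entropy corresponds to $f(t) = t \log t$, the $\chi^2$ divergence to $f(t) = (t-1)^2$, and the total variation distance to $f(t) = |t-1|$; the $E_\gamma$ divergence mentioned above corresponds to $f_\gamma(t) = (t-\gamma)^+$ with $\gamma \ge 1$, which reduces to (a normalization of) the total variation distance at $\gamma = 1$.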



Related research


166 - Igal Sason 2018
This paper is focused on $f$-divergences, consisting of three main contributions. The first one introduces integral representations of a general $f$-divergence by means of the relative information spectrum. The second part provides a new approach for the derivation of $f$-divergence inequalities, and it exemplifies their utility in the setup of Bayesian binary hypothesis testing. The last part of this paper further studies the local behavior of $f$-divergences.
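As context for the relative information spectrum (standard notions in this line of work; the notation here may differ slightly from the paper's): with $P \ll Q$, the relative information is $\imath_{P\|Q}(x) = \log \frac{dP}{dQ}(x)$, and the relative information spectrum is its cumulative distribution function under $X \sim P$, namely $\mathbb{F}_{P\|Q}(\beta) = \mathbb{P}\big[\imath_{P\|Q}(X) \le \beta\big]$; the relative entropy is its mean, $D(P\|Q) = \mathbb{E}\big[\imath_{P\|Q}(X)\big]$.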
52 - Igal Sason, Sergio Verdú 2016
This paper considers derivation of $f$-divergence inequalities via the approach of functional domination. Bounds on an $f$-divergence based on one or several other $f$-divergences are introduced, dealing with pairs of probability measures defined on arbitrary alphabets. In addition, a variety of bounds are shown to hold under boundedness assumptions on the relative information. The journal paper, which includes additional approaches for the derivation of $f$-divergence inequalities as well as the proofs, is available on the arXiv at https://arxiv.org/abs/1508.00335 and has been published in the IEEE Trans. on Information Theory, vol. 62, no. 11, pp. 5973-6006, November 2016.
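A minimal sketch of the functional-domination idea behind the entry above (the precise regularity conditions are in the paper): if $f$ and $g$ are convex with $f(1) = g(1) = 0$ and $f(t) \le c\, g(t)$ for all $t > 0$, then evaluating this pointwise inequality at $t = \frac{dP}{dQ}$ and integrating against $Q$ yields $D_f(P\|Q) \le c\, D_g(P\|Q)$ for every pair $(P,Q)$; the emphasis is then on finding the smallest constant $c$ for which such a pointwise domination holds.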
259 - Igal Sason 2019
This paper is focused on derivations of data-processing and majorization inequalities for $f$-divergences, and their applications in information theory and statistics. For the accessibility of the material, the main results are first introduced without proofs, followed by exemplifications of the theorems with further related analytical results, interpretations, and information-theoretic applications. One application refers to the performance analysis of list decoding with either fixed or variable list sizes; some earlier bounds on the list decoding error probability are reproduced in a unified way, and new bounds are obtained and exemplified numerically. Another application is related to a study of the quality of approximating a probability mass function, induced by the leaves of a Tunstall tree, by an equiprobable distribution. The compression rates of finite-length Tunstall codes are further analyzed for asserting their closeness to the Shannon entropy of a memoryless and stationary discrete source. Almost all the analysis is relegated to the appendices, which form a major part of this manuscript.
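For reference, the data-processing inequality underlying the entry above, in its standard form: if $P_Y$ and $Q_Y$ are obtained by passing $P_X$ and $Q_X$ through the same transition kernel (channel) $P_{Y|X}$, then $D_f(P_Y \| Q_Y) \le D_f(P_X \| Q_X)$ for every $f$-divergence.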
141 - Igal Sason 2015
New upper bounds on the relative entropy are derived as a function of the total variation distance. One bound refines an inequality by Verdú for general probability measures. A second bound improves the tightness of an inequality by Csiszár and Talata for arbitrary probability measures that are defined on a common finite set. The latter result is further extended, for probability measures on a finite set, leading to an upper bound on the Rényi divergence of an arbitrary non-negative order (including $\infty$) as a function of the total variation distance. Another lower bound by Verdú on the total variation distance, expressed in terms of the distribution of the relative information, is tightened, and it is shown to be attained under some conditions. The effect of these improvements is exemplified.
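For context (a standard statement, with the total variation distance normalized here as the $L^1$ distance $|P-Q| = 2 \sup_A |P(A) - Q(A)|$): Pinsker's inequality provides the lower bound $D(P\|Q) \ge \tfrac{1}{2} |P-Q|^2$ in nats, whereas the bounds in the entry above go in the reverse direction, upper-bounding the relative entropy in terms of the total variation distance, which is only possible under additional conditions such as restrictions on the measures or on the relative information.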
77 - Eshed Ram , Igal Sason 2016
This paper gives improved Rényi entropy power inequalities (R-EPIs). Consider a sum $S_n = \sum_{k=1}^n X_k$ of $n$ independent continuous random vectors taking values in $\mathbb{R}^d$, and let $\alpha \in [1, \infty]$. An R-EPI provides a lower bound on the order-$\alpha$ Rényi entropy power of $S_n$ that, up to a multiplicative constant (which may depend in general on $n$, $\alpha$ and $d$), is equal to the sum of the order-$\alpha$ Rényi entropy powers of the $n$ random vectors $\{X_k\}_{k=1}^n$. For $\alpha=1$, the R-EPI coincides with the well-known entropy power inequality by Shannon. The first improved R-EPI is obtained by tightening the recent R-EPI by Bobkov and Chistyakov, which relies on the sharpened Young's inequality. A further improvement of the R-EPI also relies on convex optimization and results on rank-one modification of a real-valued diagonal matrix.
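Background for the entry above (standard definitions; the normalization follows a common convention and may differ from the paper's): for a random vector $X$ with density $p$ on $\mathbb{R}^d$, the order-$\alpha$ Rényi entropy is $h_\alpha(X) = \frac{1}{1-\alpha} \log \int p^\alpha$ (with the differential entropy recovered in the limit $\alpha \to 1$), and the Rényi entropy power is $N_\alpha(X) = \exp\big(\tfrac{2}{d}\, h_\alpha(X)\big)$. An R-EPI of the kind described above asserts that $N_\alpha(S_n) \ge c\, \sum_{k=1}^n N_\alpha(X_k)$ for independent $X_1, \ldots, X_n$ and a constant $c = c(n, \alpha, d)$, with $c = 1$ recovering Shannon's entropy power inequality at $\alpha = 1$.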