
On Reverse Pinsker Inequalities

Posted by Igal Sason
Publication date: 2015
Research field: Information engineering
Paper language: English
Author: Igal Sason





New upper bounds on the relative entropy are derived as a function of the total variation distance. One bound refines an inequality by Verdú for general probability measures. A second bound improves the tightness of an inequality by Csiszár and Talata for arbitrary probability measures that are defined on a common finite set. The latter result is further extended, for probability measures on a finite set, leading to an upper bound on the Rényi divergence of an arbitrary non-negative order (including $\infty$) as a function of the total variation distance. Another lower bound by Verdú on the total variation distance, expressed in terms of the distribution of the relative information, is tightened, and it is attained under some conditions. The effect of these improvements is exemplified.
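For orientation, here is a minimal sketch of the two directions for probability measures $P, Q$ on a finite set, with $\|P-Q\|_1 = \sum_x |P(x)-Q(x)|$ and $Q_{\min} = \min_x Q(x) > 0$; these are the classical Pinsker inequality and an elementary reverse bound via the $\chi^2$ divergence, shown as reference points rather than the refined bounds of the paper:
$$ D(P\|Q) \ge \tfrac{1}{2}\,\|P-Q\|_1^2 \ \ \text{(nats)}, \qquad D(P\|Q) \le \log\bigl(1+\chi^2(P\|Q)\bigr) \le \log\Bigl(1+\tfrac{\|P-Q\|_1^2}{Q_{\min}}\Bigr). $$
The upper bound follows from Jensen's inequality, $D(P\|Q) = \mathbb{E}_P\bigl[\log\tfrac{P}{Q}\bigr] \le \log \mathbb{E}_P\bigl[\tfrac{P}{Q}\bigr] = \log\bigl(1+\chi^2(P\|Q)\bigr)$, together with $\chi^2(P\|Q) = \sum_x \frac{(P(x)-Q(x))^2}{Q(x)} \le \frac{\|P-Q\|_1^2}{Q_{\min}}$; the paper's results refine bounds of this kind.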


Read also

Eshed Ram, Igal Sason 2016
This paper gives improved Rényi entropy power inequalities (R-EPIs). Consider a sum $S_n = \sum_{k=1}^n X_k$ of $n$ independent continuous random vectors taking values on $\mathbb{R}^d$, and let $\alpha \in [1, \infty]$. An R-EPI provides a lower bound on the order-$\alpha$ Rényi entropy power of $S_n$ that, up to a multiplicative constant (which may depend in general on $n, \alpha, d$), is equal to the sum of the order-$\alpha$ Rényi entropy powers of the $n$ random vectors $\{X_k\}_{k=1}^n$. For $\alpha=1$, the R-EPI coincides with the well-known entropy power inequality by Shannon. The first improved R-EPI is obtained by tightening the recent R-EPI by Bobkov and Chistyakov, which relies on the sharpened Young's inequality. A further improvement of the R-EPI also relies on convex optimization and results on rank-one modification of a real-valued diagonal matrix.
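In the notation of the abstract, an R-EPI is a bound of the schematic form (with $N_\alpha$ the order-$\alpha$ Rényi entropy power, i.e., an exponential of the Rényi entropy normalized by the dimension $d$, and $c(n,\alpha,d)$ the constant that the improvements sharpen):
$$ N_\alpha(S_n) \;\ge\; c(n,\alpha,d) \sum_{k=1}^n N_\alpha(X_k), $$
which for $\alpha = 1$ and $c = 1$ reduces to Shannon's entropy power inequality, $e^{2h(X+Y)/d} \ge e^{2h(X)/d} + e^{2h(Y)/d}$ for independent random vectors $X$ and $Y$ in $\mathbb{R}^d$.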
In part I of this two-part work, certain minimization problems based on a parametric family of relative entropies (denoted $\mathscr{I}_{\alpha}$) were studied. Such minimizers were called forward $\mathscr{I}_{\alpha}$-projections. Here, a complementary class of minimization problems leading to the so-called reverse $\mathscr{I}_{\alpha}$-projections are studied. Reverse $\mathscr{I}_{\alpha}$-projections, particularly on log-convex or power-law families, are of interest in robust estimation problems ($\alpha > 1$) and in constrained compression settings ($\alpha < 1$). Orthogonality of the power-law family with an associated linear family is first established and is then exploited to turn a reverse $\mathscr{I}_{\alpha}$-projection into a forward $\mathscr{I}_{\alpha}$-projection. The transformed problem is a simpler quasiconvex minimization subject to linear constraints.
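As a point of reference for the forward/reverse terminology, the analogous problems for ordinary relative entropy read
$$ \text{forward projection:}\ \min_{P \in \mathcal{E}} D(P\|Q), \qquad \text{reverse projection:}\ \min_{Q \in \mathcal{F}} D(P\|Q), $$
i.e., the forward problem minimizes over the first argument (e.g., over a linear family $\mathcal{E}$), while the reverse problem minimizes over the second argument (e.g., over a log-convex or power-law family $\mathcal{F}$); the orthogonality relation described above is what allows passing from one type of problem to the other.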
This note contributes to the understanding of generalized entropy power inequalities. Our main goal is to construct a counter-example regarding monotonicity and entropy comparison of weighted sums of independent identically distributed log-concave random variables. We also present a complex analogue of a recent dependent entropy power inequality of Hao and Jog, and give a very simple proof.
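For context, the weighted-sum comparisons in question generalize the classical Lieb form of the entropy power inequality: for independent random variables $X, Y$ with densities and any $\lambda \in [0,1]$,
$$ h\bigl(\sqrt{\lambda}\,X + \sqrt{1-\lambda}\,Y\bigr) \;\ge\; \lambda\, h(X) + (1-\lambda)\, h(Y), $$
with equality when $X$ and $Y$ are Gaussian with identical covariance; the monotonicity and comparison questions above ask whether stronger versions of such statements hold for weighted sums of i.i.d. log-concave random variables.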
Igal Sason 2018
This paper is focused on $f$-divergences and consists of three main contributions. The first one introduces integral representations of a general $f$-divergence by means of the relative information spectrum. The second part provides a new approach for the derivation of $f$-divergence inequalities, and it exemplifies their utility in the setup of Bayesian binary hypothesis testing. The last part of this paper further studies the local behavior of $f$-divergences.
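For reference, the object under study admits the standard finite-alphabet form (for a convex $f$ with $f(1)=0$):
$$ D_f(P\|Q) \;=\; \sum_x Q(x)\, f\!\left(\frac{P(x)}{Q(x)}\right), $$
so that $f(t) = t\log t$ gives the relative entropy, $f(t) = \tfrac{1}{2}|t-1|$ the total variation distance, and $f(t) = (t-1)^2$ the $\chi^2$ divergence.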
Igal Sason, Sergio Verdú 2015
This paper develops systematic approaches to obtain $f$-divergence inequalities, dealing with pairs of probability measures defined on arbitrary alphabets. Functional domination is one such approach, where special emphasis is placed on finding the best possible constant upper bounding a ratio of $f$-divergences. Another approach used for the derivation of bounds among $f$-divergences relies on moment inequalities and the logarithmic-convexity property, which results in tight bounds on the relative entropy and Bhattacharyya distance in terms of $\chi^2$ divergences. A rich variety of bounds are shown to hold under boundedness assumptions on the relative information. Special attention is devoted to the total variation distance and its relation to the relative information and relative entropy, including reverse Pinsker inequalities, as well as to the $E_\gamma$ divergence, which generalizes the total variation distance. Pinsker's inequality is extended for this type of $f$-divergence, a result which leads to an inequality linking the relative entropy and relative information spectrum. Integral expressions of the Rényi divergence in terms of the relative information spectrum are derived, leading to bounds on the Rényi divergence in terms of either the variational distance or relative entropy.
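One concrete instance of the quantities mentioned above is the $E_\gamma$ divergence, which (under a common normalization, stated here as an assumption) can be written for $\gamma \ge 1$ as
$$ E_\gamma(P\|Q) \;=\; \max_{A} \bigl(P(A) - \gamma\, Q(A)\bigr) \;=\; \sum_x \bigl(P(x) - \gamma\, Q(x)\bigr)^+, $$
so that $E_1(P\|Q)$ recovers the total variation distance in its $\sup_A |P(A)-Q(A)|$ normalization, which is the sense in which $E_\gamma$ generalizes it.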