This note contributes to the understanding of generalized entropy power inequalities. Our main goal is to construct a counterexample concerning monotonicity and entropy comparison of weighted sums of independent identically distributed log-concave random variables. We also present a complex analogue of a recent dependent entropy power inequality of Hao and Jog, and give a very simple proof.
This paper gives improved Rényi entropy power inequalities (R-EPIs). Consider a sum $S_n = \sum_{k=1}^n X_k$ of $n$ independent continuous random vectors taking values in $\mathbb{R}^d$, and let $\alpha \in [1, \infty]$. An R-EPI provides a lower bound on the order-$\alpha$ Rényi entropy power of $S_n$ that, up to a multiplicative constant (which may depend in general on $n, \alpha, d$), is equal to the sum of the order-$\alpha$ Rényi entropy powers of the $n$ random vectors $\{X_k\}_{k=1}^n$. For $\alpha = 1$, the R-EPI coincides with the well-known entropy power inequality by Shannon. The first improved R-EPI is obtained by tightening the recent R-EPI by Bobkov and Chistyakov, which relies on the sharpened Young's inequality. A further improvement of the R-EPI also relies on convex optimization and results on rank-one modification of a real-valued diagonal matrix.
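As a quick numerical illustration (not taken from the paper), recall that for $\alpha = 1$ the entropy power is $N(X) = e^{2h(X)/d}/(2\pi e)$, and Shannon's EPI $N(X+Y) \geq N(X) + N(Y)$ holds with equality for independent Gaussians, since the entropy power of a one-dimensional Gaussian equals its variance. The variances and function names below are hypothetical choices for the sketch:

```python
import math

def gaussian_entropy(var):
    # differential entropy (in nats) of a one-dimensional N(0, var)
    return 0.5 * math.log(2 * math.pi * math.e * var)

def entropy_power(h, d=1):
    # Shannon entropy power N(X) = exp(2 h / d) / (2 pi e)
    return math.exp(2 * h / d) / (2 * math.pi * math.e)

v1, v2 = 1.5, 2.5
# X + Y is Gaussian with variance v1 + v2, so the EPI is tight here
n_sum = entropy_power(gaussian_entropy(v1 + v2))
n_parts = entropy_power(gaussian_entropy(v1)) + entropy_power(gaussian_entropy(v2))
print(abs(n_sum - n_parts) < 1e-12)
```

For non-Gaussian summands the inequality is strict, which is what makes the Gaussian the extremal case.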
Using a sharp version of the reverse Young inequality, and a Rényi entropy comparison result due to Fradelizi, Madiman, and Wang, the authors are able to derive Rényi entropy power inequalities for log-concave random vectors when Rényi parameters belong to $(0,1)$. Furthermore, the estimates are shown to be sharp up to absolute constants.
An extension of the entropy power inequality to the form $N_r^\alpha(X+Y) \geq N_r^\alpha(X) + N_r^\alpha(Y)$ with arbitrary independent summands $X$ and $Y$ in $\mathbb{R}^n$ is obtained for the Rényi entropy and powers $\alpha \geq (r+1)/2$.
New upper bounds on the relative entropy are derived as a function of the total variation distance. One bound refines an inequality by Verdú for general probability measures. A second bound improves the tightness of an inequality by Csiszár and Talata for arbitrary probability measures that are defined on a common finite set. The latter result is further extended, for probability measures on a finite set, leading to an upper bound on the Rényi divergence of an arbitrary non-negative order (including $\infty$) as a function of the total variation distance. Another lower bound by Verdú on the total variation distance, expressed in terms of the distribution of the relative information, is tightened, and it is shown to be attained under some conditions. The effect of these improvements is exemplified.
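As a minimal companion sketch (not one of the paper's new bounds), the two quantities being related can be computed directly for pmfs on a common finite set; the classical Pinsker inequality $D(P\|Q) \geq 2\,\delta(P,Q)^2$ (in nats, with $\delta$ the total variation distance) provides a simple lower-bound counterpart to the upper bounds discussed above. The pmfs below are hypothetical:

```python
import math

def kl_divergence(p, q):
    # relative entropy D(P||Q) in nats, for pmfs on a common finite set
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def total_variation(p, q):
    # total variation distance: half the L1 distance between the pmfs
    return 0.5 * sum(abs(pi - qi) for pi, qi in zip(p, q))

p = [0.5, 0.3, 0.2]
q = [0.4, 0.4, 0.2]
d = kl_divergence(p, q)
tv = total_variation(p, q)
print(d >= 2 * tv ** 2)  # Pinsker's lower bound holds
```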
This paper focuses on $f$-divergences and makes three main contributions. The first introduces integral representations of a general $f$-divergence by means of the relative information spectrum. The second provides a new approach to the derivation of $f$-divergence inequalities, and exemplifies its utility in the setup of Bayesian binary hypothesis testing. The last part of the paper further studies the local behavior of $f$-divergences.
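To make the common object concrete (a generic illustration, not the paper's method): an $f$-divergence on a finite set is $D_f(P\|Q) = \sum_x q(x)\, f(p(x)/q(x))$ for a convex $f$ with $f(1)=0$, and familiar divergences arise from particular choices of $f$. The pmfs below are hypothetical:

```python
import math

def f_divergence(p, q, f):
    # D_f(P||Q) = sum_x q(x) f(p(x)/q(x)), assuming q(x) > 0 everywhere
    return sum(qi * f(pi / qi) for pi, qi in zip(p, q))

p = [0.5, 0.3, 0.2]
q = [0.4, 0.4, 0.2]
kl   = f_divergence(p, q, lambda t: t * math.log(t))   # f(t) = t log t  -> relative entropy
tv   = f_divergence(p, q, lambda t: 0.5 * abs(t - 1))  # f(t) = |t-1|/2  -> total variation
chi2 = f_divergence(p, q, lambda t: (t - 1) ** 2)      # f(t) = (t-1)^2  -> chi-squared
print(kl, tv, chi2)
```

All three are non-negative and vanish iff $P = Q$, which is exactly the structure the integral representations and inequalities above exploit.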