
Reversals of Renyi Entropy Inequalities under Log-Concavity

Added by James Melbourne
Publication date: 2020
Language: English





We establish a discrete analog of the Renyi entropy comparison due to Bobkov and Madiman. For log-concave variables on the integers, the min entropy is within log e of the usual Shannon entropy. Additionally, we investigate the entropic Rogers-Shephard inequality studied by Madiman and Kontoyiannis, and establish a sharp Renyi version for certain parameters in both the continuous and discrete cases.
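As a quick numerical illustration of the min entropy versus Shannon entropy comparison (a sanity check written for this listing, not code from the paper), one can take a geometric distribution, which is log-concave on the integers, and verify that the gap H(X) - H_inf(X) stays below log e = 1 nat:

    import numpy as np

    def entropies(p, n_terms=500):
        # Shannon and min entropy (in nats) of a truncated Geometric(p) pmf
        k = np.arange(n_terms)
        pmf = (1 - p) ** k * p            # geometric pmf on {0, 1, 2, ...}
        pmf = pmf[pmf > 0] / pmf.sum()    # guard against underflow, renormalize
        shannon = -np.sum(pmf * np.log(pmf))
        min_ent = -np.log(pmf.max())      # min entropy: -log of the largest atom
        return shannon, min_ent

    for p in (0.1, 0.5, 0.9):
        h, h_inf = entropies(p)
        print(f"p={p}: gap = {h - h_inf:.4f} nats (bound: 1)")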



Related research

We investigate the Renyi entropy of independent sums of integer-valued random variables through Fourier-theoretic means, and give sharp comparisons between the variance and the Renyi entropy for Poisson-Bernoulli variables. As applications, we prove that a discrete "min-entropy power" is superadditive on independent variables up to a universal constant, and give new bounds on an entropic generalization of the Littlewood-Offord problem that are sharp in the "Poisson regime".
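For readers who want to experiment, the quantities involved are easy to compute directly (a standard-definitions sketch, not the paper's proofs): the pmf of a Poisson-Bernoulli variable is a convolution of Bernoulli pmfs, and the order-$\alpha$ Renyi entropy is $H_\alpha(X) = \frac{1}{1-\alpha} \log \sum_k p(k)^\alpha$:

    import numpy as np

    def renyi_entropy(pmf, alpha):
        # H_alpha(X) = (1/(1-alpha)) * log(sum_k p(k)^alpha), in nats
        pmf = pmf[pmf > 0]
        if alpha == 1.0:              # Shannon entropy as the alpha -> 1 limit
            return -np.sum(pmf * np.log(pmf))
        if np.isinf(alpha):           # min entropy as the alpha -> infinity limit
            return -np.log(pmf.max())
        return np.log(np.sum(pmf ** alpha)) / (1.0 - alpha)

    def bernoulli_sum_pmf(ps):
        # pmf of X_1 + ... + X_n with independent X_i ~ Bernoulli(p_i)
        pmf = np.array([1.0])
        for p in ps:
            pmf = np.convolve(pmf, [1.0 - p, p])
        return pmf

    pmf = bernoulli_sum_pmf([0.2, 0.5, 0.7, 0.3])
    for a in (0.5, 1.0, 2.0, np.inf):
        print(f"H_{a}(X) = {renyi_entropy(pmf, a):.4f} nats")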
Eshed Ram, Igal Sason (2016)
This paper gives improved Renyi entropy power inequalities (R-EPIs). Consider a sum $S_n = \sum_{k=1}^n X_k$ of $n$ independent continuous random vectors taking values on $\mathbb{R}^d$, and let $\alpha \in [1, \infty]$. An R-EPI provides a lower bound on the order-$\alpha$ Renyi entropy power of $S_n$ that, up to a multiplicative constant (which may depend in general on $n$, $\alpha$, $d$), is equal to the sum of the order-$\alpha$ Renyi entropy powers of the $n$ random vectors $\{X_k\}_{k=1}^n$. For $\alpha = 1$, the R-EPI coincides with the well-known entropy power inequality by Shannon. The first improved R-EPI is obtained by tightening the recent R-EPI by Bobkov and Chistyakov, which relies on the sharpened Young's inequality. A further improvement of the R-EPI also relies on convex optimization and results on rank-one modification of a real-valued diagonal matrix.
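For reference, the standard definitions behind this statement: for a random vector $X$ with density $f$ on $\mathbb{R}^d$, the order-$\alpha$ Renyi entropy is $h_\alpha(X) = \frac{1}{1-\alpha} \log \int_{\mathbb{R}^d} f(x)^\alpha \, dx$ (with the limits $\alpha \to 1$ and $\alpha \to \infty$ recovering the Shannon and min entropies), and the order-$\alpha$ Renyi entropy power is $N_\alpha(X) = e^{2 h_\alpha(X)/d}$. An R-EPI is thus an inequality of the form $N_\alpha(S_n) \geq c(n, \alpha, d) \sum_{k=1}^n N_\alpha(X_k)$, and Shannon's classical EPI is the case $\alpha = 1$ with $c = 1$.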
Feature selection, in the context of machine learning, is the process of separating the highly predictive features from those that might be irrelevant or redundant. Information theory has been recognized as a useful concept for this task, as the prediction power stems from the correlation, i.e., the mutual information, between features and labels. Many algorithms for feature selection in the literature have adopted Shannon-entropy-based mutual information. In this paper, we explore the possibility of using Renyi min-entropy instead. In particular, we propose an algorithm based on a notion of conditional Renyi min-entropy that has recently been adopted in the field of security and privacy, and which is closely related to the Bayes error. We prove that in general the two approaches are incomparable, in the sense that we can construct datasets on which the Renyi-based algorithm performs better than the corresponding Shannon-based one, and datasets on which the situation is reversed. In practice, however, when considering datasets of real data, the Renyi-based algorithm tends to outperform the other one. We have carried out several experiments on the BASEHOCK, SEMEION, and GISETTE datasets, and in all of them we have indeed observed that the Renyi-based algorithm gives better results.
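To make the comparison concrete, here is a self-contained sketch (the toy data and helper names are mine, not the paper's algorithm): it scores one discrete feature against a label both by Shannon mutual information and by the min-entropy leakage $H_\infty(Y) - H_\infty(Y|X)$, where the conditional Renyi min-entropy $H_\infty(Y|X) = -\log \sum_x \max_y p(x,y)$ is the security-and-privacy notion mentioned above, tied to the Bayes error through the probability of correctly guessing Y from X:

    import numpy as np

    def joint_pmf(x, y):
        # empirical joint distribution p(x, y) of two discrete arrays
        xs, ys = np.unique(x), np.unique(y)
        p = np.zeros((len(xs), len(ys)))
        for i, xv in enumerate(xs):
            for j, yv in enumerate(ys):
                p[i, j] = np.mean((x == xv) & (y == yv))
        return p

    def shannon_mi(p):
        # I(X;Y) = sum p(x,y) * log( p(x,y) / (p(x) p(y)) ), in nats
        px = p.sum(axis=1, keepdims=True)
        py = p.sum(axis=0, keepdims=True)
        mask = p > 0
        return np.sum(p[mask] * np.log(p[mask] / (px @ py)[mask]))

    def min_entropy_leakage(p):
        # H_inf(Y) - H_inf(Y|X), with H_inf(Y|X) = -log sum_x max_y p(x,y)
        h_prior = -np.log(p.sum(axis=0).max())   # -log max_y p(y)
        h_post = -np.log(p.max(axis=1).sum())    # -log (Bayes vulnerability)
        return h_prior - h_post

    rng = np.random.default_rng(0)
    y = rng.integers(0, 2, size=2000)
    x = np.where(rng.random(2000) < 0.8, y, 1 - y)   # noisy copy of the label
    p = joint_pmf(x, y)
    print(shannon_mi(p), min_entropy_leakage(p))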
Recently, a new quantum generalization of the Renyi divergence and the corresponding conditional Renyi entropies was proposed. Here we report on a surprising relation between conditional Renyi entropies based on this new generalization and conditional Renyi entropies based on the quantum relative Renyi entropy used in previous literature. Our result generalizes the well-known duality relation $H(A|B) + H(A|C) = 0$ of the conditional von Neumann entropy for tripartite pure states to Renyi entropies of two different kinds. As a direct application, we prove a collection of inequalities that relate different conditional Renyi entropies and derive a new entropic uncertainty relation.
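For context, the von Neumann identity quoted here follows in one line from purity (a standard fact, spelled out for the reader): if $\psi_{ABC}$ is pure, the reduced states satisfy $H(AB) = H(C)$ and $H(AC) = H(B)$, hence $H(A|B) + H(A|C) = [H(AB) - H(B)] + [H(AC) - H(C)] = [H(C) - H(AC)] + [H(AC) - H(C)] = 0$. The result described in this abstract replaces the two terms by conditional Renyi entropies of the two different kinds, with suitably linked Renyi orders.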
We show that the sequence of moments of order less than 1 of averages of i.i.d. positive random variables is log-concave. For moments of order at least 1, we conjecture that the sequence is log-convex and show that this holds eventually for integer moments (after neglecting the first $p^2$ terms of the sequence).
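One special case can be checked exactly (a numerical sanity check, not a proof; the Exp(1) choice and the script are mine): if the $X_i$ are i.i.d. Exp(1), then $S_n \sim \mathrm{Gamma}(n, 1)$ and $m_n = E[(S_n/n)^p] = \Gamma(n+p)/(\Gamma(n)\, n^p)$, so log-concavity of the sequence $(m_n)_{n \geq 1}$ for an order $p < 1$ can be tested without sampling:

    import numpy as np
    from scipy.special import gammaln

    def log_moment(n, p):
        # log E[(S_n/n)^p] for S_n ~ Gamma(n, 1):
        # log Gamma(n+p) - log Gamma(n) - p*log(n)
        return gammaln(n + p) - gammaln(n) - p * np.log(n)

    p = 0.5  # an order less than 1, where the paper proves log-concavity
    lm = np.array([log_moment(n, p) for n in range(1, 50)])
    # log-concavity: log m_{n-1} + log m_{n+1} <= 2*log m_n for all n
    print(np.all(lm[:-2] + lm[2:] <= 2 * lm[1:-1]))  # expect True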