
Bernoulli sums and Rényi entropy inequalities

Published by: James Melbourne
Publication date: 2021
Research field: Informatics Engineering
Paper language: English





We investigate the Rényi entropy of independent sums of integer-valued random variables through Fourier-theoretic means, and give sharp comparisons between the variance and the Rényi entropy for Poisson-Bernoulli variables. As applications we prove that a discrete "min-entropy power" is superadditive on independent variables up to a universal constant, and give new bounds on an entropic generalization of the Littlewood-Offord problem that are sharp in the "Poisson regime".
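
For reference, a minimal recap of the quantities involved (standard definitions, not quoted from the paper): for an integer-valued random variable $X$ with probability mass function $p$, the Rényi entropy of order $\alpha$ is

    H_\alpha(X) = \frac{1}{1-\alpha} \log \sum_{k \in \mathbb{Z}} p(k)^{\alpha}, \qquad \alpha \in (0,1) \cup (1,\infty),

with the limiting cases $H_1(X) = -\sum_k p(k) \log p(k)$ (Shannon entropy) and $H_\infty(X) = -\log \max_k p(k)$ (min-entropy); the superadditivity statement above concerns an entropy power built from $H_\infty$.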




Read also

We establish a discrete analog of the Rényi entropy comparison due to Bobkov and Madiman. For log-concave variables on the integers, the min-entropy is within $\log e$ of the usual Shannon entropy. Additionally we investigate the entropic Rogers-Shephard inequality studied by Madiman and Kontoyiannis, and establish a sharp Rényi version for certain parameters in both the continuous and discrete cases.
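
In symbols, the comparison above says that for a log-concave random variable $X$ on the integers, with $H$ the Shannon entropy and $H_\infty$ the min-entropy (so $H_\infty(X) \le H(X)$ always),

    0 \le H(X) - H_\infty(X) \le \log e.
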
Eshed Ram, Igal Sason (2016)
This paper gives improved Rényi entropy power inequalities (R-EPIs). Consider a sum $S_n = \sum_{k=1}^n X_k$ of $n$ independent continuous random vectors taking values on $\mathbb{R}^d$, and let $\alpha \in [1, \infty]$. An R-EPI provides a lower bound on the order-$\alpha$ Rényi entropy power of $S_n$ that, up to a multiplicative constant (which may depend in general on $n, \alpha, d$), is equal to the sum of the order-$\alpha$ Rényi entropy powers of the $n$ random vectors $\{X_k\}_{k=1}^n$. For $\alpha=1$, the R-EPI coincides with the well-known entropy power inequality by Shannon. The first improved R-EPI is obtained by tightening the recent R-EPI by Bobkov and Chistyakov, which relies on the sharpened Young's inequality. A further improvement of the R-EPI also relies on convex optimization and results on rank-one modification of a real-valued diagonal matrix.
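
Schematically, in our notation and up to normalization conventions (following the abstract, not the paper's exact statement): writing $h_\alpha$ for the order-$\alpha$ Rényi differential entropy and $N_\alpha(X) = e^{2 h_\alpha(X)/d}$ for the corresponding entropy power of a random vector on $\mathbb{R}^d$, an R-EPI takes the form

    N_\alpha(S_n) \ge c(n, \alpha, d) \sum_{k=1}^{n} N_\alpha(X_k),

with Shannon's classical EPI recovered at $\alpha = 1$ with constant $c = 1$.
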
We investigate the role of convexity in Rényi entropy power inequalities. After proving that a general Rényi entropy power inequality in the style of Bobkov-Chistyakov (2015) fails when the Rényi parameter $r \in (0,1)$, we show that random vectors with $s$-concave densities do satisfy such a Rényi entropy power inequality. Along the way, we establish the convergence in the Central Limit Theorem for Rényi entropies of order $r \in (0,1)$ for log-concave densities and for compactly supported, spherically symmetric and unimodal densities, complementing a celebrated result of Barron (1986). Additionally, we give an entropic characterization of the class of $s$-concave densities, which extends a classical result of Cover and Zhang (1994).
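
For orientation, one common convention for $s$-concavity (stated here as an assumption; the authors may use a variant): a density $f$ on $\mathbb{R}^d$ is $s$-concave if $f^s$ is concave on its support for $s > 0$, if $\log f$ is concave for $s = 0$ (the log-concave case), and if $f^s$ is convex for $s < 0$; the classes grow as $s$ decreases, so every log-concave density is $s$-concave for every $s < 0$.
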
We prove that for Bernoulli percolation on $\mathbb{Z}^d$, $d \geq 2$, the percolation density is an analytic function of the parameter in the supercritical interval. For this we introduce some techniques that have further implications. In particular, we prove that the susceptibility is analytic in the subcritical interval for all transitive short- or long-range models, and that $p_c^{\mathrm{bond}} < 1/2$ for certain families of triangulations for which Benjamini & Schramm conjectured that $p_c^{\mathrm{site}} \leq 1/2$.
Feature selection, in the context of machine learning, is the process of separating the highly predictive features from those that might be irrelevant or redundant. Information theory has been recognized as a useful concept for this task, as the prediction power stems from the correlation, i.e., the mutual information, between features and labels. Many algorithms for feature selection in the literature have adopted Shannon-entropy-based mutual information. In this paper, we explore the possibility of using Rényi min-entropy instead. In particular, we propose an algorithm based on a notion of conditional Rényi min-entropy that has recently been adopted in the field of security and privacy, and which is strictly related to the Bayes error. We prove that in general the two approaches are incomparable, in the sense that we can construct datasets on which the Rényi-based algorithm performs better than the corresponding Shannon-based one, and datasets on which the situation is reversed. In practice, however, when considering datasets of real data, it seems that the Rényi-based algorithm tends to outperform the other one. We have performed several experiments on the BASEHOCK, SEMEION, and GISETTE datasets, and in all of them we have indeed observed that the Rényi-based algorithm gives better results.
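
As a concrete illustration of the two scoring criteria (a minimal sketch, not the authors' code: the conditional Rényi min-entropy below follows the Arimoto-style definition used in quantitative information flow, which the abstract alludes to, and all function names are ours):

    import numpy as np

    def shannon_mi(joint):
        # Shannon mutual information I(X;Y) in bits from a joint pmf p[x, y].
        p = joint / joint.sum()
        px = p.sum(axis=1, keepdims=True)   # marginal of the feature X
        py = p.sum(axis=0, keepdims=True)   # marginal of the label Y
        mask = p > 0
        return float((p[mask] * np.log2(p[mask] / (px @ py)[mask])).sum())

    def renyi_min_score(joint):
        # Rényi min-entropy score H_inf(Y) - H_inf(Y|X), with the Arimoto-style
        # conditional min-entropy H_inf(Y|X) = -log2 sum_x max_y p(x, y).
        # Note that 2**(-H_inf(Y|X)) is the probability of guessing Y correctly
        # from X, i.e. one minus the Bayes error.
        p = joint / joint.sum()
        h_y = -np.log2(p.sum(axis=0).max())          # H_inf(Y)
        h_y_given_x = -np.log2(p.max(axis=1).sum())  # H_inf(Y|X)
        return h_y - h_y_given_x

    # Toy joint pmf p[feature value, label]: the feature nearly determines the label.
    p = np.array([[0.45, 0.05],
                  [0.05, 0.45]])
    print(shannon_mi(p))       # -> 0.531 bits (approx.)
    print(renyi_min_score(p))  # -> 0.848 bits (approx.)

A feature-selection loop would compute one of these scores per candidate feature against the labels and keep the top-scoring features.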