
Improved bounds on Fourier entropy and Min-entropy

Published by: Nitin Saurabh
Publication date: 2018
Research field: Informatics Engineering
Paper language: English





Given a Boolean function $f:\{-1,1\}^n \to \{-1,1\}$, the Fourier distribution assigns probability $\widehat{f}(S)^2$ to each $S \subseteq [n]$. The Fourier Entropy-Influence (FEI) conjecture of Friedgut and Kalai asks if there exists a universal constant $C>0$ such that $H(\hat{f}^2) \leq C \cdot Inf(f)$, where $H(\hat{f}^2)$ is the Shannon entropy of the Fourier distribution of $f$ and $Inf(f)$ is the total influence of $f$.

1) We consider the weaker Fourier Min-entropy-Influence (FMEI) conjecture, which asks if $H_{\infty}(\hat{f}^2) \leq C \cdot Inf(f)$, where $H_{\infty}(\hat{f}^2)$ is the min-entropy of the Fourier distribution. We show $H_{\infty}(\hat{f}^2) \leq 2 C_{\min}^{\oplus}(f)$, where $C_{\min}^{\oplus}(f)$ is the minimum parity certificate complexity of $f$. We also show that for every $\epsilon \geq 0$, we have $H_{\infty}(\hat{f}^2) \leq 2\log(\|\hat{f}\|_{1,\epsilon}/(1-\epsilon))$, where $\|\hat{f}\|_{1,\epsilon}$ is the approximate spectral norm of $f$. As a corollary, we verify the FMEI conjecture for the class of read-$k$ DNFs (for constant $k$).

2) We show that $H(\hat{f}^2) \leq 2\, aUC^{\oplus}(f)$, where $aUC^{\oplus}(f)$ is the average unambiguous parity certificate complexity of $f$. This improves upon a result of Chakraborty et al. An important consequence of the FEI conjecture is the long-standing Mansour's conjecture. We show that a weaker version of FEI already implies Mansour's conjecture: is $H(\hat{f}^2) \leq C \min\{C^0(f), C^1(f)\}$?, where $C^0(f)$ and $C^1(f)$ are the 0- and 1-certificate complexities of $f$, respectively.

3) We study what FEI implies about the structure of polynomials that $1/3$-approximate a Boolean function. We pose a conjecture (which is implied by FEI): no flat degree-$d$ polynomial of sparsity $2^{\omega(d)}$ can $1/3$-approximate a Boolean function. We prove this conjecture unconditionally for a particular class of polynomials.
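To make the quantities above concrete, here is a small illustrative sketch (my own, not taken from the paper): it brute-forces the Fourier coefficients of a Boolean function on a few variables and reports the Shannon entropy $H(\hat{f}^2)$, the min-entropy $H_{\infty}(\hat{f}^2)$, and the total influence $Inf(f)$ of the Fourier distribution. The function names and the choice of 3-bit majority as a test case are assumptions of the example.

```python
# Illustrative sketch (not from the paper): brute-force Fourier analysis of a
# small Boolean function f:{-1,1}^n -> {-1,1}.
from itertools import product
from math import log2

def fourier_coefficients(f, n):
    """Return {S: f_hat(S)} with S as a 0/1 indicator tuple and
    f_hat(S) = E_x[f(x) * chi_S(x)] over uniform x in {-1,1}^n."""
    points = list(product((-1, 1), repeat=n))
    coeffs = {}
    for S in product((0, 1), repeat=n):
        total = 0
        for x in points:
            chi = 1
            for i in range(n):
                if S[i]:
                    chi *= x[i]
            total += f(x) * chi
        coeffs[S] = total / len(points)
    return coeffs

def fourier_entropy_influence(f, n):
    """Shannon entropy, min-entropy and total influence of the Fourier distribution."""
    dist = {S: c * c for S, c in fourier_coefficients(f, n).items()}  # Parseval: sums to 1
    H = -sum(p * log2(p) for p in dist.values() if p > 0)    # H(f_hat^2)
    H_min = -log2(max(dist.values()))                         # H_inf(f_hat^2)
    influence = sum(sum(S) * p for S, p in dist.items())      # Inf(f) = sum_S |S| * f_hat(S)^2
    return H, H_min, influence

# Example: 3-bit majority in the +/-1 convention.
maj3 = lambda x: 1 if sum(x) > 0 else -1
H, H_min, inf_f = fourier_entropy_influence(maj3, 3)
print(f"H = {H:.3f}, H_min = {H_min:.3f}, Inf = {inf_f:.3f}")
```

For 3-bit majority the Fourier weight sits on the three singletons and the full set, each with weight $1/4$, so the sketch reports $H(\hat{f}^2) = H_{\infty}(\hat{f}^2) = 2$ and $Inf(f) = 1.5$, consistent with the FEI and FMEI inequalities for a small constant $C$.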




Read also

The Fourier Entropy-Influence (FEI) conjecture states that for any Boolean function $f:\{+1,-1\}^n \to \{+1,-1\}$, the Fourier entropy of $f$ is at most its influence up to a universal constant factor. While the FEI conjecture has been proved for many classes of Boolean functions, it is still not known whether it holds for the class of linear threshold functions. A natural question is: does the FEI conjecture hold for a "random" linear threshold function? In this paper, we answer this question in the affirmative. We consider two natural distributions on the weights defining a linear threshold function, namely the uniform distribution on $[-1,1]$ and the normal distribution.
In 1992 Mansour proved that every size-$s$ DNF formula is Fourier-concentrated on $s^{O(\log\log s)}$ coefficients. We improve this to $s^{O(\log\log k)}$ where $k$ is the read number of the DNF. Since $k$ is always at most $s$, our bound matches Mansour's for all DNFs and strengthens it for small-read ones. The previous best bound for read-$k$ DNFs was $s^{O(k^{3/2})}$. For $k$ up to $\tilde{\Theta}(\log\log s)$, we further improve our bound to the optimal $\mathrm{poly}(s)$; previously no such bound was known for any $k = \omega_s(1)$. Our techniques involve new connections between the term structure of a DNF, viewed as a set system, and its Fourier spectrum.
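As a toy illustration of Fourier concentration (my own sketch, not the paper's technique), the snippet below computes the spectrum of a small read-1 DNF and reports how much of the total Fourier weight the $t$ largest coefficients capture. The encoding of the DNF over bits in $\{0,1\}^4$ is an assumption of the example.

```python
# Illustrative sketch (not from the paper): Fourier weight captured by the
# top-t coefficients of the read-1 DNF (x1 AND x2) OR (x3 AND x4).
from itertools import product

def fourier_weights(f, n):
    """Squared Fourier coefficients of f:{0,1}^n -> {-1,+1}, sorted in
    decreasing order, using chi_S(x) = (-1)^(sum of x_i over i in S)."""
    points = list(product((0, 1), repeat=n))
    weights = []
    for S in product((0, 1), repeat=n):
        c = sum(f(x) * (-1) ** sum(x[i] for i in range(n) if S[i]) for x in points)
        weights.append((c / len(points)) ** 2)
    return sorted(weights, reverse=True)

dnf = lambda x: 1 if (x[0] and x[1]) or (x[2] and x[3]) else -1
w = fourier_weights(dnf, 4)
for t in (1, 2, 4, 8):
    print(f"top {t:2d} coefficients carry {sum(w[:t]):.3f} of the total Fourier weight")
```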
Feature selection, in the context of machine learning, is the process of separating the highly predictive features from those that might be irrelevant or redundant. Information theory has been recognized as a useful concept for this task, as the prediction power stems from the correlation, i.e., the mutual information, between features and labels. Many algorithms for feature selection in the literature have adopted Shannon-entropy-based mutual information. In this paper, we explore the possibility of using Renyi min-entropy instead. In particular, we propose an algorithm based on a notion of conditional Renyi min-entropy that has recently been adopted in the field of security and privacy, and which is strictly related to the Bayes error. We prove that in general the two approaches are incomparable: we can construct datasets on which the Renyi-based algorithm performs better than the corresponding Shannon-based one, and datasets on which the situation is reversed. In practice, however, when considering datasets of real data, it seems that the Renyi-based algorithm tends to outperform the other one. We have carried out several experiments on the BASEHOCK, SEMEION, and GISETTE datasets, and in all of them we have indeed observed that the Renyi-based algorithm gives better results.
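The following toy sketch (my own construction, not the authors' algorithm or datasets) contrasts the two scores on a single discrete feature: Shannon mutual information versus a Renyi-min-entropy-based quantity $I_{\infty}(A;B) = H_{\infty}(A) - H_{\infty}(A|B)$, where $H_{\infty}(A|B) = -\log_2 \sum_b \max_a p(a,b)$ is the conditional min-entropy commonly used in quantitative information flow; I assume this is the security-and-privacy notion the abstract refers to. The joint distribution is made up for illustration.

```python
# Illustrative sketch (assumptions mine): scoring one feature B against a
# label A with Shannon MI and a Renyi-min-entropy-based measure.
from math import log2

def shannon_mi(joint):
    """Shannon mutual information I(A;B) for a joint pmf {(a, b): p(a, b)}."""
    pa, pb = {}, {}
    for (a, b), p in joint.items():
        pa[a] = pa.get(a, 0.0) + p
        pb[b] = pb.get(b, 0.0) + p
    return sum(p * log2(p / (pa[a] * pb[b])) for (a, b), p in joint.items() if p > 0)

def renyi_min_mi(joint):
    """I_inf(A;B) = H_inf(A) - H_inf(A|B), with
    H_inf(A|B) = -log2 sum_b max_a p(a, b)."""
    pa, best = {}, {}
    for (a, b), p in joint.items():
        pa[a] = pa.get(a, 0.0) + p
        best[b] = max(best.get(b, 0.0), p)   # max_a p(a, b) for each b
    return -log2(max(pa.values())) + log2(sum(best.values()))

# Made-up joint distribution of a binary label A and a ternary feature B.
joint = {(0, 0): 0.30, (0, 1): 0.15, (0, 2): 0.05,
         (1, 0): 0.05, (1, 1): 0.15, (1, 2): 0.30}
print(f"Shannon MI:   {shannon_mi(joint):.3f} bits")
print(f"Renyi min-MI: {renyi_min_mi(joint):.3f} bits")
```

A feature-selection loop would compute such a score for every candidate feature (or conditionally on the already selected ones) and greedily keep the highest-scoring features.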
We present generalized methods for calculating lower bounds on the ground-state entropy per site, $S_0$, or equivalently, the ground-state degeneracy per site, $W=e^{S_0/k_B}$, of the antiferromagnetic Potts model. We use these methods to derive improved lower bounds on $W$ for several lattices.
We show that the conditional min-entropy Hmin(A|B) of a bipartite state rho_AB is directly related to the maximum achievable overlap with a maximally entangled state if only local actions on the B-part of rho_AB are allowed. In the special case where A is classical, this overlap corresponds to the probability of guessing A given B. In a similar vein, we connect the conditional max-entropy Hmax(A|B) to the maximum fidelity of rho_AB with a product state that is completely mixed on A. In the case where A is classical, this corresponds to the security of A when used as a secret key in the presence of an adversary holding B. Because min- and max-entropies are known to characterize information-processing tasks such as randomness extraction and state merging, our results establish a direct connection between these tasks and basic operational problems. For example, they imply that the (logarithm of the) probability of guessing A given B is a lower bound on the number of uniform secret bits that can be extracted from A relative to an adversary holding B.
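For the special case where both A and B are classical, the guessing-probability interpretation can be checked directly. The minimal sketch below (my own, with a made-up joint distribution) computes p_guess(A|B) = sum_b max_a p(a,b) and the corresponding Hmin(A|B) = -log2 p_guess(A|B).

```python
# Illustrative sketch (classical special case only, assumptions mine):
# conditional min-entropy as the optimal guessing probability.
from math import log2

def p_guess(joint):
    """Optimal probability of guessing A from B for a classical joint pmf
    {(a, b): p(a, b)}: for each observed b, guess the most likely a."""
    best = {}
    for (a, b), p in joint.items():
        best[b] = max(best.get(b, 0.0), p)
    return sum(best.values())

# Made-up classical joint distribution of A and B.
joint = {(0, 0): 0.4, (1, 0): 0.1,   # on seeing b = 0, guess a = 0
         (0, 1): 0.2, (1, 1): 0.3}   # on seeing b = 1, guess a = 1
pg = p_guess(joint)
print(f"p_guess = {pg:.2f}, Hmin(A|B) = {-log2(pg):.3f} bits")
```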