
Tighter Relations Between Sensitivity and Other Complexity Measures

Posted by Mohammad Bavarian
Published: 2014
Research field: Informatics Engineering
Paper language: English





The sensitivity conjecture is a longstanding and fundamental open problem in the area of complexity measures of Boolean functions and decision tree complexity. The conjecture postulates that the maximum sensitivity of a Boolean function is polynomially related to the other major complexity measures. Despite much attention to the problem and major advances in the analysis of Boolean functions in the past decade, the problem remains wide open, with no positive result toward the conjecture since the work of Kenyon and Kutin from 2004. In this work, we present new upper bounds on various complexity measures in terms of sensitivity, improving the bounds of Kenyon and Kutin. Specifically, we show that $\deg(f)^{1-o(1)} = O(2^{s(f)})$ and $C(f) < 2^{s(f)-1} s(f)$; via known results, these in turn imply various corollaries on the relation between sensitivity and other complexity measures, such as block sensitivity. The gap between sensitivity and the other complexity measures remains exponential, but these results are the first improvement on this difficult problem in a decade.
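To make the two measures in the stated bound concrete, here is a minimal brute-force sketch (our own illustration, not from the paper) that computes sensitivity $s(f)$ and certificate complexity $C(f)$ by exhaustive search, assuming their standard definitions; the helper names and the toy function `maj3` are ours.

```python
from itertools import combinations, product

def sensitivity(f, n):
    """s(f): max over inputs x of the number of single-bit flips that change f(x)."""
    return max(
        sum(f(x[:i] + (1 - x[i],) + x[i + 1:]) != f(x) for i in range(n))
        for x in product((0, 1), repeat=n)
    )

def certificate_complexity(f, n):
    """C(f): max over x of the smallest set of coordinates that, once fixed
    to x's values, forces f to equal f(x) on every consistent input."""
    inputs = list(product((0, 1), repeat=n))
    best = 0
    for x in inputs:
        for k in range(n + 1):
            if any(
                all(f(y) == f(x) for y in inputs
                    if all(y[i] == x[i] for i in S))
                for S in combinations(range(n), k)
            ):
                best = max(best, k)
                break
    return best

# Toy check of C(f) < 2^{s(f)-1} s(f) on 3-bit majority.
maj3 = lambda x: int(sum(x) >= 2)
s, C = sensitivity(maj3, 3), certificate_complexity(maj3, 3)
print(s, C, C < 2 ** (s - 1) * s)   # 2 2 True
```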




Read also

A noticeable fraction of Algorithms papers in the last few decades improve the running time of well-known algorithms for fundamental problems by logarithmic factors. For example, the $O(n^2)$ dynamic programming solution to the Longest Common Subsequence problem (LCS) was improved to $O(n^2/\log^2 n)$ in several ways and using a variety of ingenious tricks. This line of research, also known as the art of shaving log factors, lacks a tool for proving negative results. Specifically, how can we show that it is unlikely that LCS can be solved in time $O(n^2/\log^3 n)$? Perhaps the only approach for such results was suggested in a recent paper of Abboud, Hansen, Vassilevska W. and Williams (STOC'16). The authors blame the hardness of shaving logs on the hardness of solving satisfiability on Boolean formulas (Formula-SAT) faster than exhaustive search. They show that an $O(n^2/\log^{1000} n)$ algorithm for LCS would imply a major advance in circuit lower bounds. Whether this approach can lead to tighter barriers was unclear. In this paper, we push this approach to its limit and, in particular, prove that a well-known barrier from complexity theory stands in the way of shaving five additional log factors for fundamental combinatorial problems. For LCS, regular expression pattern matching, as well as the Frechet distance problem from Computational Geometry, we show that an $O(n^2/\log^{7+\varepsilon} n)$ runtime would imply new Formula-SAT algorithms. Our main result is a reduction from SAT on formulas of size $s$ over $n$ variables to LCS on sequences of length $N = 2^{n/2} \cdot s^{1+o(1)}$. Our reduction is essentially as efficient as possible, and it greatly improves the previously known reduction for LCS with $N = 2^{n/2} \cdot s^c$, for some $c \geq 100$.
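For reference, this is the textbook $O(n^2)$ dynamic program mentioned above, as a minimal sketch (our own illustration; the bit-parallel and Four-Russians log-shaving tricks the paper discusses are not shown):

```python
def lcs_length(a: str, b: str) -> int:
    """Classic O(n^2) dynamic program for Longest Common Subsequence.
    prev[j] holds the LCS length of the processed prefix of a and b[:j],
    rolled row by row to use O(n) space."""
    prev = [0] * (len(b) + 1)
    for ca in a:
        cur = [0]
        for j, cb in enumerate(b, start=1):
            cur.append(prev[j - 1] + 1 if ca == cb else max(prev[j], cur[j - 1]))
        prev = cur
    return prev[-1]

print(lcs_length("ABCBDAB", "BDCABA"))  # 4, e.g. "BCBA"
```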
We extend the definitions of complexity measures of functions to domains such as the symmetric group. The complexity measures we consider include degree, approximate degree, decision tree complexity, sensitivity, block sensitivity, and a few others. We show that these complexity measures are polynomially related for the symmetric group and for many other domains. To show that all measures but sensitivity are polynomially related, we generalize classical arguments of Nisan and others. To add sensitivity to the mix, we reduce to Huang's sensitivity theorem using pseudo-characters, which witness the degree of a function. Using similar ideas, we extend the characterization of Boolean degree-1 functions on the symmetric group due to Ellis, Friedgut and Pilpel to the perfect matching scheme. As another application of our ideas, we simplify the characterization of maximum-size $t$-intersecting families in the symmetric group and the perfect matching scheme.
Sensitivity \cite{CD82,CDR86} and block sensitivity \cite{Nisan91} are two important complexity measures of Boolean functions. A longstanding open problem in decision tree complexity, the Sensitivity versus Block Sensitivity question, proposed by Nisan and Szegedy \cite{Nisan94} in 1992, is whether these two complexity measures are polynomially related, i.e., whether $bs(f) = s(f)^{O(1)}$. We prove a new upper bound on block sensitivity in terms of sensitivity: $bs(f) \leq 2^{s(f)-1} s(f)$. Previously, the best upper bound on block sensitivity was $bs(f) \leq \frac{e}{\sqrt{2\pi}} e^{s(f)} \sqrt{s(f)}$ by Kenyon and Kutin \cite{KK}. We also prove that if $\min\{s_0(f), s_1(f)\}$ is a constant, then sensitivity and block sensitivity are linearly related, i.e., $bs(f) = O(s(f))$.
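Block sensitivity can also be computed by brute force on tiny examples; the sketch below (our own illustration, exponential time, small $n$ only) checks the stated bound $bs(f) \leq 2^{s(f)-1} s(f)$ on 3-bit majority.

```python
from itertools import product

def flip(x, block):
    """Flip the coordinates of x indexed by the given block."""
    return tuple(1 - b if i in block else b for i, b in enumerate(x))

def sensitivity(f, n):
    return max(
        sum(f(flip(x, {i})) != f(x) for i in range(n))
        for x in product((0, 1), repeat=n)
    )

def block_sensitivity(f, n):
    """bs(f): max over x of the largest number of pairwise disjoint
    blocks B_1, ..., B_k with f(x^{B_i}) != f(x) for every i."""
    def best(x, free):
        out = 0
        coords = sorted(free)
        for mask in range(1, 1 << len(coords)):      # nonempty subsets of free
            block = {c for k, c in enumerate(coords) if mask >> k & 1}
            if f(flip(x, block)) != f(x):
                out = max(out, 1 + best(x, free - block))
        return out
    return max(best(x, frozenset(range(n))) for x in product((0, 1), repeat=n))

maj3 = lambda x: int(sum(x) >= 2)
s, bs = sensitivity(maj3, 3), block_sensitivity(maj3, 3)
print(s, bs, bs <= 2 ** (s - 1) * s)   # 2 2 True
```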
Let $f: \{0,1\}^n \rightarrow \{0,1\}$ be a Boolean function. The certificate complexity $C(f)$ is a complexity measure that is quadratically tight for the zero-error randomized query complexity $R_0(f)$: $C(f) \leq R_0(f) \leq C(f)^2$. In this paper we study a new complexity measure that we call expectational certificate complexity $EC(f)$, which is also a quadratically tight bound on $R_0(f)$: $EC(f) \leq R_0(f) = O(EC(f)^2)$. We prove that $EC(f) \leq C(f) \leq EC(f)^2$ and show that there is a quadratic separation between the two; thus $EC(f)$ gives a tighter upper bound for $R_0(f)$. The measure is also related to the fractional certificate complexity $FC(f)$ as follows: $FC(f) \leq EC(f) = O(FC(f)^{3/2})$. This also connects to an open question of Aaronson whether $FC(f)$ is a quadratically tight bound for $R_0(f)$, as $EC(f)$ is in fact a relaxation of $FC(f)$. In the second part of the work, we upper bound the distributed query complexity $D^{\mu}_{\epsilon}(f)$ for product distributions $\mu$ by the square of the query corruption bound $\mathrm{corr}_{\epsilon}(f)$, which improves upon a result of Harsha, Jain and Radhakrishnan [2015]. A similar statement for communication complexity is open.
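The fractional measure $FC(f)$ is a linear program per input; below is a minimal sketch of that LP using SciPy, assuming the standard definition (minimize $\sum_i v_i$ subject to $\sum_{i: x_i \neq y_i} v_i \geq 1$ for every $y$ with $f(y) \neq f(x)$, $v_i \geq 0$). The function names are ours, and the normalization is our assumption, not the paper's code.

```python
from itertools import product
from scipy.optimize import linprog

def fractional_certificate(f, n):
    """FC(f): for each input x, solve the LP
    min sum_i v_i  s.t.  sum_{i: x_i != y_i} v_i >= 1  for all y with f(y) != f(x),
    with v_i >= 0; return the maximum LP value over all x."""
    inputs = list(product((0, 1), repeat=n))
    best = 0.0
    for x in inputs:
        # One <= row per opposite-valued input y (linprog wants A_ub v <= b_ub).
        rows = [[-1.0 if xi != yi else 0.0 for xi, yi in zip(x, y)]
                for y in inputs if f(y) != f(x)]
        if not rows:              # constant function: no constraints, FC = 0
            continue
        res = linprog(c=[1.0] * n, A_ub=rows, b_ub=[-1.0] * len(rows),
                      bounds=[(0, None)] * n)
        best = max(best, res.fun)
    return best

maj3 = lambda x: int(sum(x) >= 2)
print(fractional_certificate(maj3, 3))   # 2.0
```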
Block sensitivity ($bs(f)$), certificate complexity ($C(f)$) and fractional certificate complexity ($C^{\ast}(f)$) are three fundamental combinatorial measures of complexity of a Boolean function $f$. It has long been known that $bs(f) \leq C^{\ast}(f) \leq C(f) = O(bs(f)^2)$. We provide an infinite family of examples for which $C(f)$ grows quadratically in $C^{\ast}(f)$ (and also in $bs(f)$), giving optimal separations between these measures. Previously the biggest separation known was $C(f) = C^{\ast}(f)^{\log_{4.5} 5}$. We also give a family of examples for which $C^{\ast}(f) = \Omega(bs(f)^{3/2})$. These examples are obtained by composing Boolean functions in various ways. Here the composition $f \circ g$ of $f$ with $g$ is obtained by substituting for each variable of $f$ a copy of $g$ on disjoint sets of variables. To construct and analyse these examples we systematically investigate the behaviour under function composition of these measures and also the sensitivity measure $s(f)$. The measures $s(f)$, $C(f)$ and $C^{\ast}(f)$ behave nicely under composition: they are submultiplicative (where a measure $m$ is submultiplicative if $m(f \circ g) \leq m(f) m(g)$), with equality holding under some fairly general conditions. The measure $bs(f)$ is qualitatively different: it is not submultiplicative. This qualitative difference was not noticed in the previous literature, and we correct some errors that appeared in previous papers. We define the composition limit of a measure $m$ at a function $f$, $m^{\lim}(f)$, to be the limit as $k$ grows of $m(f^{(k)})^{1/k}$, where $f^{(k)}$ is the iterated composition of $f$ with itself $k$ times. For any function $f$ we show that $bs^{\lim}(f) = (C^{\ast})^{\lim}(f)$ and characterize $s^{\lim}(f)$, $(C^{\ast})^{\lim}(f)$, and $C^{\lim}(f)$ in terms of the largest eigenvalue of a certain set of $2 \times 2$ matrices associated with $f$.
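A small sketch of the composition operation described above (our own illustration; all names are ours): substituting a fresh copy of $g$ for each variable of $f$, then checking submultiplicativity of $s$ on two tiny examples, one where $s(f \circ g) = s(f)\,s(g)$ holds with equality and one where it is strict.

```python
from itertools import product

def compose(f, nf, g, ng):
    """f ∘ g: substitute an independent copy of g (on ng fresh variables)
    for each of the nf variables of f."""
    def h(x):
        return f(tuple(g(x[i * ng:(i + 1) * ng]) for i in range(nf)))
    return h, nf * ng

def sensitivity(f, n):
    return max(
        sum(f(x[:i] + (1 - x[i],) + x[i + 1:]) != f(x) for i in range(n))
        for x in product((0, 1), repeat=n)
    )

AND2 = lambda x: x[0] & x[1]
OR2  = lambda x: x[0] | x[1]
for f, g, name in [(AND2, AND2, "AND∘AND"), (AND2, OR2, "AND∘OR")]:
    h, nh = compose(f, 2, g, 2)
    print(name, sensitivity(h, nh), "<=", sensitivity(f, 2) * sensitivity(g, 2))
# AND∘AND 4 <= 4   (equality: AND∘AND is just AND on 4 bits)
# AND∘OR  2 <= 4   (strictly submultiplicative)
```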