
The Value of Help Bits in Randomized and Average-Case Complexity

Posted by Omid Etesami
Publication date: 2014
Research field: Informatics Engineering
Paper language: English





Help bits are limited trusted information about one or more instances of a computational problem that may reduce the computational complexity of solving those instances. In this paper, we study the value of help bits in the settings of randomized and average-case complexity. Amir, Beigel, and Gasarch (1990) show that for constant $k$, if $k$ instances of a decision problem can be efficiently solved using fewer than $k$ bits of help, then the problem is in P/poly. We extend this result to the setting of randomized computation: we show that the decision problem is in P/poly if, using $\ell$ help bits, $k$ instances of the problem can be efficiently solved with probability greater than $2^{\ell-k}$. The same result holds if, using fewer than $k(1 - h(\alpha))$ help bits (where $h(\cdot)$ is the binary entropy function), we can efficiently solve a $(1-\alpha)$ fraction of the instances correctly with non-vanishing probability. We also extend these two results to non-constant but logarithmic $k$. In this case, however, instead of showing that the problem is in P/poly, we show that it satisfies $k$-membership comparability, a notion known to be related to solving $k$ instances using fewer than $k$ bits of help. Next we consider the setting of average-case complexity: assume that we can solve $k$ instances of a decision problem using help bits whose entropy is less than $k$ when the $k$ instances are drawn independently from a particular distribution. Then we can efficiently solve an instance drawn from that distribution with probability better than $1/2$. Finally, we show that in the case where $k$ is super-logarithmic, assuming $k$-membership comparability of a decision problem, one cannot prove that the problem is in P/poly by a black-box proof.
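
To make the thresholds in the abstract concrete, here is a minimal sketch (Python; the parameter values are illustrative, not from the paper) that evaluates the binary entropy function $h(\alpha)$ and the two quantities that appear above: the success probability $2^{\ell-k}$ that must be beaten in the randomized setting, and the help-bit budget $k(1 - h(\alpha))$ in the approximate-solving setting.

```python
import math

def binary_entropy(alpha: float) -> float:
    """Binary entropy h(alpha) = -alpha*log2(alpha) - (1-alpha)*log2(1-alpha)."""
    if alpha in (0.0, 1.0):
        return 0.0
    return -alpha * math.log2(alpha) - (1 - alpha) * math.log2(1 - alpha)

# Example parameters (chosen for illustration only).
k, ell = 10, 6   # k instances, ell help bits
alpha = 0.1      # fraction of instances we are allowed to get wrong

# Randomized setting: success probability must exceed 2^(ell - k).
print(f"success threshold 2^(ell-k) = {2.0 ** (ell - k):.4f}")        # 0.0625

# Entropy setting: number of help bits must stay below k * (1 - h(alpha)).
print(f"help-bit budget k*(1-h(alpha)) = {k * (1 - binary_entropy(alpha)):.4f}")
```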




Read also

How many bits of information are revealed by a learning algorithm for a concept class of VC-dimension $d$? Previous works have shown that even for $d=1$ the amount of information may be unbounded (tend to $\infty$ with the universe size). Can it be that all concepts in the class require leaking a large amount of information? We show that typically concepts do not require leakage. There exists a proper learning algorithm that reveals $O(d)$ bits of information for most concepts in the class. This result is a special case of a more general phenomenon we explore. If there is a low-information learner when the algorithm {\em knows} the underlying distribution on inputs, then there is a learner that reveals little information on an average concept {\em without knowing} the distribution on inputs.
Information-theoretic methods have proven to be a very powerful tool in communication complexity, in particular giving an elegant proof of the linear lower bound for the two-party disjointness function, and tight lower bounds on disjointness in the multi-party number-in-the-hand (NIH) model. In this paper, we study the applicability of information theoretic methods to the multi-party number-on-the-forehead model (NOF), where determining the complexity of disjointness remains an important open problem. There are two basic parts to the NIH disjointness lower bound: a direct sum theorem and a lower bound on the one-bit AND function using a beautiful connection between Hellinger distance and protocols revealed by Bar-Yossef, Jayram, Kumar and Sivakumar [BYJKS04]. Inspired by this connection, we introduce the notion of Hellinger volume. We show that it lower bounds the information cost of multi-party NOF protocols and provide a small toolbox that allows one to manipulate several Hellinger volume terms and lower bound a Hellinger volume when the distributions involved satisfy certain conditions. In doing so, we prove a new upper bound on the difference between the arithmetic mean and the geometric mean in terms of relative entropy. We then apply these new tools to obtain a lower bound on the informational complexity of the AND_k function in the NOF setting. Finally, we discuss the difficulties of proving a direct sum theorem for information cost in the NOF model.
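As background for the Hellinger-volume toolbox mentioned above, the sketch below (Python; the example distributions are made up, not from the paper) computes the standard Hellinger distance between two finite distributions, i.e., the two-distribution quantity that Hellinger volume generalizes to several distributions.

```python
import math

def hellinger(p: list[float], q: list[float]) -> float:
    """Hellinger distance: (1/sqrt(2)) * l2-norm of (sqrt(p) - sqrt(q))."""
    assert len(p) == len(q)
    s = sum((math.sqrt(pi) - math.sqrt(qi)) ** 2 for pi, qi in zip(p, q))
    return math.sqrt(s) / math.sqrt(2)

# Two distributions over a 3-element outcome space (illustrative values).
p = [0.5, 0.3, 0.2]
q = [0.4, 0.4, 0.2]
print(f"Hellinger distance: {hellinger(p, q):.4f}")
```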
Three decades of research in communication complexity have led to the invention of a number of techniques to lower bound randomized communication complexity. The majority of these techniques involve properties of large submatrices (rectangles) of the truth-table matrix defining a communication problem. The only technique that does not quite fit is information complexity, which has been investigated over the last decade. Here, we connect information complexity to one of the most powerful rectangular techniques: the recently-introduced smooth corruption (or smooth rectangle) bound. We show that the former subsumes the latter under rectangular input distributions. We conjecture that this subsumption holds more generally, under arbitrary distributions, which would resolve the long-standing direct sum question for randomized communication. As an application, we obtain an optimal $\Omega(n)$ lower bound on the information complexity---under the {\em uniform distribution}---of the so-called orthogonality problem (ORT), which is in turn closely related to the much-studied Gap-Hamming-Distance (GHD). The proof of this bound is along the lines of recent communication lower bounds for GHD, but we encounter a surprising amount of additional technical detail.
Adi Shraibman, 2017
We define nondeterministic communication complexity in the model of communication complexity with help, introduced by Babai, Hayes and Kimmel. We use it to prove logarithmic lower bounds on the NOF communication complexity of explicit graph functions, which are complementary to the bounds proved by Beame, David, Pitassi and Woelfel.
We study the composition question for bounded-error randomized query complexity: is $R(f \circ g) = \Omega(R(f)\,R(g))$ for all Boolean functions $f$ and $g$? We show that inserting a simple Boolean function $h$, whose query complexity is only $\Theta(\log R(g))$, in between $f$ and $g$ allows us to prove $R(f \circ h \circ g) = \Omega(R(f)\,R(h)\,R(g))$. We prove this using a new lower bound measure for randomized query complexity that we call randomized sabotage complexity, $RS(f)$. Randomized sabotage complexity has several desirable properties, such as a perfect composition theorem, $RS(f \circ g) \geq RS(f)\,RS(g)$, and a composition theorem with randomized query complexity, $R(f \circ g) = \Omega(R(f)\,RS(g))$. It is also a quadratically tight lower bound for total functions and can be quadratically superior to the partition bound, the best known general lower bound for randomized query complexity. Using this technique we also show implications for lifting theorems in communication complexity. We show that a general lifting theorem for zero-error randomized protocols implies a general lifting theorem for bounded-error protocols.
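To illustrate the object $f \circ g$ that the composition question is about, here is a minimal sketch (Python; the choices of $f$ and $g$ are illustrative, not from the paper): the composed function applies the inner function $g$ to each block of input bits and feeds the resulting bits to the outer function $f$.

```python
from typing import Callable, Sequence

def compose(f: Callable[[Sequence[int]], int],
            g: Callable[[Sequence[int]], int],
            block: int) -> Callable[[Sequence[int]], int]:
    """Build f o g: split the input into blocks of the given size,
    apply g to each block, then apply f to the vector of g-outputs."""
    def composed(x: Sequence[int]) -> int:
        assert len(x) % block == 0
        inner = [g(x[i:i + block]) for i in range(0, len(x), block)]
        return f(inner)
    return composed

# Illustrative choices: f = OR on the outer bits, g = parity on 3-bit blocks.
f_or = lambda bits: int(any(bits))
g_parity = lambda bits: sum(bits) % 2

h = compose(f_or, g_parity, block=3)
print(h([0, 0, 0, 1, 0, 0]))  # block parities [0, 1] -> OR = 1
```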