
Optimal lower bounds for universal relation, samplers, and finding duplicates

Posted by Jelani Nelson
Publication date: 2017
Research field: Informatics Engineering
Language: English





In the communication problem $\mathbf{UR}$ (universal relation) [KRW95], Alice and Bob respectively receive $x$ and $y$ in $\{0,1\}^n$ with the promise that $x \neq y$. The last player to receive a message must output an index $i$ such that $x_i \neq y_i$. We prove that the randomized one-way communication complexity of this problem in the public coin model is exactly $\Theta(\min\{n, \log(1/\delta)\log^2(\frac{n}{\log(1/\delta)})\})$ bits for failure probability $\delta$. Our lower bound holds even if promised $\mathop{support}(y) \subset \mathop{support}(x)$. As a corollary, we obtain optimal lower bounds for $\ell_p$-sampling in strict turnstile streams for $0 \le p < 2$, as well as for the problem of finding duplicates in a stream. Our lower bounds do not need to use large weights, and hold even if it is promised that $x \in \{0,1\}^n$ at all points in the stream. Our lower bound demonstrates that any algorithm $\mathcal{A}$ solving sampling problems in turnstile streams in low memory can be used to encode subsets of $[n]$ of certain sizes into a number of bits below the information-theoretic minimum. Our encoder makes adaptive queries to $\mathcal{A}$ throughout its execution, but does so carefully so as to not violate correctness. This is accomplished by injecting random noise into the encoder's interactions with $\mathcal{A}$, which is loosely motivated by techniques in differential privacy. Our correctness analysis involves understanding the ability of $\mathcal{A}$ to correctly answer adaptive queries which have positive but bounded mutual information with $\mathcal{A}$'s internal randomness, and may be of independent interest in the newly emerging area of adaptive data analysis with a theoretical computer science lens.
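To make the stated bound concrete, the following is a minimal Python sketch (not the paper's protocol) of the $\mathbf{UR}$ setting and of the trivial one-way protocol in which Alice simply sends all of $x$; it uses $n$ bits of communication, which is the first term of the $\Theta(\min\{n, \log(1/\delta)\log^2(\frac{n}{\log(1/\delta)})\})$ bound. The function name and toy instance are illustrative assumptions.

def trivial_ur_protocol(x, y):
    # Trivial one-way protocol for UR: Alice's single message is x itself
    # (n bits). Bob, who holds y, outputs any index where the strings differ;
    # the promise x != y guarantees such an index exists.
    message = x
    differing = [i for i in range(len(y)) if message[i] != y[i]]
    return differing[0]

# Toy instance satisfying the harder promise support(y) ⊂ support(x).
n = 16
x = [1] * 8 + [0] * 8
y = [1] * 4 + [0] * 12
i = trivial_ur_protocol(x, y)
assert x[i] != y[i]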




Read also

In the communication problem $\mathbf{UR}$ (universal relation) [KRW95], Alice and Bob respectively receive $x, y \in \{0,1\}^n$ with the promise that $x \neq y$. The last player to receive a message must output an index $i$ such that $x_i \neq y_i$. We prove that the randomized one-way communication complexity of this problem in the public coin model is exactly $\Theta(\min\{n, \log(1/\delta)\log^2(\frac{n}{\log(1/\delta)})\})$ for failure probability $\delta$. Our lower bound holds even if promised $\mathop{support}(y) \subset \mathop{support}(x)$. As a corollary, we obtain optimal lower bounds for $\ell_p$-sampling in strict turnstile streams for $0 \le p < 2$, as well as for the problem of finding duplicates in a stream. Our lower bounds do not need to use large weights, and hold even if promised $x \in \{0,1\}^n$ at all points in the stream. We give two different proofs of our main result. The first proof demonstrates that any algorithm $\mathcal{A}$ solving sampling problems in turnstile streams in low memory can be used to encode subsets of $[n]$ of certain sizes into a number of bits below the information-theoretic minimum. Our encoder makes adaptive queries to $\mathcal{A}$ throughout its execution, but does so carefully so as to not violate correctness. This is accomplished by injecting random noise into the encoder's interactions with $\mathcal{A}$, which is loosely motivated by techniques in differential privacy. Our second proof is via a novel randomized reduction from Augmented Indexing [MNSW98] which needs to interact with $\mathcal{A}$ adaptively. To handle the adaptivity we identify certain likely interaction patterns and union bound over them to guarantee correct interaction on all of them. To guarantee correctness, it is important that the interaction hides some of its randomness from $\mathcal{A}$ in the reduction.
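For reference, a small sketch of the Augmented Indexing problem [MNSW98] that the second proof reduces from; this only spells out the problem's inputs (with a hypothetical helper name), not the paper's randomized, adaptive reduction.

def augmented_indexing_instance(x, i):
    # Augmented Indexing [MNSW98]: Alice holds x in {0,1}^n; Bob holds an
    # index i together with the suffix x[i+1:], and after a single message
    # from Alice he must output x[i].
    alice_input = x
    bob_input = (i, x[i + 1:])
    return alice_input, bob_input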
Several works have shown unconditional hardness (via integrality gaps) of computing equilibria using strong hierarchies of convex relaxations. Such results however only apply to the problem of computing equilibria that optimize a certain objective function and not to the (arguably more fundamental) task of finding \emph{any} equilibrium. We present an algorithmic model based on the sum-of-squares (SoS) hierarchy that allows escaping this inherent limitation of integrality gaps. In this model, algorithms access the input game only through a relaxed solution to the natural SoS relaxation for computing equilibria. They can then adaptively construct a list of candidate solutions and invoke a verification oracle to check if any candidate on the list is a solution. This model captures most well-studied approximation algorithms such as those for Max-Cut, Sparsest Cut, and Unique-Games. The state-of-the-art algorithms for computing exact and approximate equilibria in two-player, n-strategy games are captured in this model and require that at least one of i) the size (roughly, the running time) of the SoS relaxation or ii) the size of the list of candidates be at least $2^{\Omega(n)}$ and $n^{\Omega(\log n)}$, respectively. Our main result shows a lower bound that matches these upper bounds up to constant factors in the exponent. This can be interpreted as an unconditional confirmation, in our restricted algorithmic framework, of Rubinstein's recent conditional hardness \cite{Rub} for computing approximate equilibria. Our proof strategy involves constructing a family of games that all share a common sum-of-squares solution but every (approximate) equilibrium of one game is far from every (approximate) equilibrium of any other game in the family.
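A schematic, purely illustrative skeleton of the algorithmic model described above, with hypothetical function names: the algorithm sees the input game only through a relaxed SoS solution, adaptively builds a list of candidate solutions, and checks each one against a verification oracle.

def list_and_verify(sos_solution, build_candidates, verify):
    # The algorithm never sees the game directly, only `sos_solution`.
    candidates = build_candidates(sos_solution)   # adaptively constructed list
    for candidate in candidates:
        if verify(candidate):                     # verification oracle call
            return candidate
    return None                                   # no candidate passed verification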
181 - Joel Friedman 2017
We develop a notion of {\em inner rank} as a tool for obtaining lower bounds on the rank of matrix multiplication tensors. We use it to give a short proof that the border rank (and therefore rank) of the tensor associated with $n \times n$ matrix multiplication over an arbitrary field is at least $2n^2-n+1$. While inner rank does not provide improvements to currently known lower bounds, we argue that this notion merits further study.
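For context, the tensor in question can be written in the standard way ($e_{ij}$ denoting the $n \times n$ matrix units), and the displayed inequality restates the abstract's bound, with $\mathrm{R}$ the rank and $\underline{\mathrm{R}}$ the border rank:
\[
\langle n,n,n\rangle \;=\; \sum_{i,j,k=1}^{n} e_{ij}\otimes e_{jk}\otimes e_{ki},
\qquad
\mathrm{R}\big(\langle n,n,n\rangle\big) \;\ge\; \underline{\mathrm{R}}\big(\langle n,n,n\rangle\big) \;\ge\; 2n^2 - n + 1 .
\]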
We consider the approximability of constraint satisfaction problems in the streaming setting. For every constraint satisfaction problem (CSP) on $n$ variables taking values in $\{0,\ldots,q-1\}$, we prove that improving over the trivial approximability by a factor of $q$ requires $\Omega(n)$ space even on instances with $O(n)$ constraints. We also identify a broad subclass of problems for which any improvement over the trivial approximability requires $\Omega(n)$ space. The key technical core is an optimal, $q^{-(k-1)}$-inapproximability for the case where every constraint is given by a system of $k-1$ linear equations $\bmod\ q$ over $k$ variables. Prior to our work, no such hardness was known for an approximation factor less than $1/2$ for any CSP. Our work builds on and extends the work of Kapralov and Krachun (Proc. STOC 2019) who showed a linear lower bound on any non-trivial approximation of the max cut in graphs. This corresponds roughly to the case of Max $k$-LIN $\bmod\ q$ with $k=q=2$. Each one of the extensions provides non-trivial technical challenges that we overcome in this work.
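As a point of reference for the factor-$q$ statement, the baseline it is presumably measured against is the uniformly random assignment, which satisfies a fixed non-degenerate linear equation $\bmod\ q$ with probability exactly $1/q$. The toy Python check below (illustrative only, not the paper's streaming algorithm) estimates this empirically; the function name and parameters are assumptions.

import random

def random_assignment_baseline(q, k, trials=200_000):
    # Fix the equation x_1 + ... + x_k = 0 (mod q); since its first coefficient
    # is invertible, a uniformly random assignment satisfies it with
    # probability exactly 1/q.
    hits = sum(
        sum(random.randrange(q) for _ in range(k)) % q == 0
        for _ in range(trials)
    )
    return hits / trials

print(random_assignment_baseline(q=3, k=2))   # ≈ 1/3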
224 - Raghu Meka, Avi Wigderson 2013
Finding cliques in random graphs and the closely related planted clique variant, where a clique of size t is planted in a random G(n,1/2) graph, have been the focus of substantial study in algorithm design. Despite much effort, the best known polynomial-time algorithms only solve the problem for t = Theta(sqrt(n)). Here we show that beating sqrt(n) would require substantially new algorithmic ideas, by proving a lower bound for the problem in the sum-of-squares (or Lasserre) hierarchy, the most powerful class of semi-definite programming algorithms we know of: r rounds of the sum-of-squares hierarchy can only solve the planted clique for t > sqrt(n)/(C log n)^(r^2). Previously, no nontrivial lower bounds were known. Our proof is formulated as a degree lower bound in the Positivstellensatz algebraic proof system, which is equivalent to the sum-of-squares hierarchy. The heart of our (average-case) lower bound is a proof that a certain random matrix derived from the input graph is (with high probability) positive semidefinite. Two ingredients play an important role in this proof. The first is the classical theory of association schemes, applied to the average and variance of that random matrix. The second is a new large deviation inequality for matrix-valued polynomials. Our new tail estimate seems to be of independent interest and may find other applications, as it generalizes both the estimates on real-valued polynomials and on sums of independent random matrices.
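The instance distribution the abstract refers to can be generated directly; the sketch below (illustrative only, with assumed names) samples G(n, 1/2) and plants a clique on t random vertices. Recovering the planted set when t is well below sqrt(n) is the regime the lower bound concerns.

import random

def planted_clique_instance(n, t, seed=0):
    # Sample G(n, 1/2) as a symmetric adjacency matrix, then plant a clique
    # on a uniformly random set of t vertices.
    rng = random.Random(seed)
    adj = [[False] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            adj[i][j] = adj[j][i] = rng.random() < 0.5
    clique = rng.sample(range(n), t)
    for i in clique:
        for j in clique:
            if i != j:
                adj[i][j] = adj[j][i] = True
    return adj, set(clique)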