
Linear Space Streaming Lower Bounds for Approximating CSPs

Added by Alexander Golovnev
Publication date: 2021
Language: English





We consider the approximability of constraint satisfaction problems in the streaming setting. For every constraint satisfaction problem (CSP) on $n$ variables taking values in $\{0,\ldots,q-1\}$, we prove that improving over the trivial approximability by a factor of $q$ requires $\Omega(n)$ space even on instances with $O(n)$ constraints. We also identify a broad subclass of problems for which any improvement over the trivial approximability requires $\Omega(n)$ space. The key technical core is an optimal, $q^{-(k-1)}$-inapproximability for the case where every constraint is given by a system of $k-1$ linear equations $\bmod\; q$ over $k$ variables. Prior to our work, no such hardness was known for an approximation factor less than $1/2$ for any CSP. Our work builds on and extends the work of Kapralov and Krachun (Proc. STOC 2019), who showed a linear lower bound on any non-trivial approximation of the max cut in graphs. This corresponds roughly to the case of Max $k$-LIN-$\bmod\; q$ with $k=q=2$. Each of these extensions presents non-trivial technical challenges that we overcome in this work.
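To make the trivial threshold concrete: a system of $k-1$ linearly independent equations $\bmod\; q$ over $k$ variables has exactly $q$ solutions in $\{0,\ldots,q-1\}^k$, so a uniformly random assignment satisfies such a constraint with probability $q^{-(k-1)}$; this random-assignment baseline is exactly what the inapproximability result matches. The following minimal Python sketch (ours, with a hypothetical choice of equations, not code from the paper) checks this by enumeration.

```python
# Minimal sketch (ours, not from the paper): verify that a uniformly random
# assignment in {0,...,q-1}^k satisfies a constraint given by k-1 independent
# linear equations mod q with probability exactly q^{-(k-1)}. The system
# below, x_{i+1} - x_i = b_i (mod q), is a hypothetical example whose
# solution set is a line: q points out of q^k assignments.
import itertools
from fractions import Fraction

def satisfied(x, b, q):
    """Check x[i+1] - x[i] == b[i] (mod q) for every equation i."""
    return all((x[i + 1] - x[i]) % q == b[i] % q for i in range(len(b)))

q, k = 3, 4
b = [1, 0, 2]                      # arbitrary right-hand sides, k-1 of them
count = sum(satisfied(x, b, q) for x in itertools.product(range(q), repeat=k))
assert Fraction(count, q**k) == Fraction(1, q**(k - 1))  # exactly q solutions
print(count, "of", q**k)           # 3 of 81
```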



Related research

A Boolean constraint satisfaction problem (CSP), Max-CSP$(f)$, is a maximization problem specified by a constraint $f:\{-1,1\}^k\to\{0,1\}$. An instance of the problem consists of $m$ constraint applications on $n$ Boolean variables, where each constraint application applies the constraint to $k$ literals chosen from the $n$ variables and their negations. The goal is to compute the maximum number of constraints that can be satisfied by a Boolean assignment to the $n$ variables. In the $(\gamma,\beta)$-approximation version of the problem, for parameters $\gamma \geq \beta \in [0,1]$, the goal is to distinguish instances where at least a $\gamma$ fraction of the constraints can be satisfied from instances where at most a $\beta$ fraction of the constraints can be satisfied. In this work we consider the approximability of Max-CSP$(f)$ in the (dynamic) streaming setting, where constraints are inserted (and may also be deleted, in the dynamic setting) one at a time. We completely characterize the approximability of all Boolean CSPs in the dynamic streaming setting. Specifically, given $f$, $\gamma$, and $\beta$, we show that either (1) the $(\gamma,\beta)$-approximation version of Max-CSP$(f)$ has a probabilistic dynamic streaming algorithm using $O(\log n)$ space, or (2) for every $\varepsilon > 0$ the $(\gamma-\varepsilon,\beta+\varepsilon)$-approximation version of Max-CSP$(f)$ requires $\Omega(\sqrt{n})$ space for probabilistic dynamic streaming algorithms. We also extend previously known results in the insertion-only setting to a wide variety of cases, in particular the case of $k=2$, where we get a dichotomy, and the case when the satisfying assignments of $f$ support a distribution on $\{-1,1\}^k$ with uniform marginals.
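As an illustration of the objects being defined, here is a minimal brute-force sketch of Max-CSP$(f)$ (our example code, not the streaming algorithm from the paper); constraint applications are $k$-tuples of signed variable indices, with a negative sign denoting negation.

```python
# Minimal sketch (ours): brute-force value of a tiny Max-CSP(f) instance.
# Variables take values in {-1,1}; a literal is a signed index j, where a
# negative sign means the variable is negated (multiplied by -1).
import itertools

def max_csp_value(f, n, constraints):
    """constraints: list of k-tuples of nonzero ints in [-n, n]."""
    best = 0
    for x in itertools.product((-1, 1), repeat=n):
        sat = sum(f(tuple((1 if j > 0 else -1) * x[abs(j) - 1] for j in c))
                  for c in constraints)
        best = max(best, sat)
    return best

# Example with f = 2AND (satisfied iff both literals evaluate to 1):
f = lambda lits: int(all(v == 1 for v in lits))
constraints = [(1, 2), (-1, 3), (2, -3)]
print(max_csp_value(f, 3, constraints))  # 2: no assignment satisfies all 3
```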
We demonstrate a lower bound technique for linear decision lists, which are decision lists where the queries are arbitrary linear threshold functions. We use this technique to prove an explicit lower bound by showing that any linear decision list computing the function $\mathsf{MAJ} \circ \mathsf{XOR}$ requires size $2^{0.18 n}$. This completely answers an open question of Turán and Vatan [FoCM97]. We also show that the spectral classes $PL_1$, $PL_\infty$, and the polynomial threshold function classes $\widehat{PT}_1$, $PT_1$, are incomparable to linear decision lists.
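For concreteness, a linear decision list can be evaluated as below; this is a minimal sketch of the model (ours), and the toy list and the $\{-1,1\}$ encoding are illustrative assumptions, not the construction from the paper.

```python
# Minimal sketch (ours): a linear decision list is a sequence of linear
# threshold queries (w, t, b); on input x it outputs the bit b of the first
# query with <w, x> >= t, or a default bit if no query fires.
def eval_ldl(queries, default, x):
    for w, t, b in queries:
        if sum(wi * xi for wi, xi in zip(w, x)) >= t:
            return b
    return default

def maj_xor(x):
    """MAJ o XOR on 2m inputs in {-1,1}: a pair XORs to 'true' iff its
    product is -1; MAJ outputs 1 iff at least half the pairs are 'true'."""
    pairs = [x[2*i] * x[2*i + 1] for i in range(len(x) // 2)]
    return int(2 * sum(p == -1 for p in pairs) >= len(pairs))

# Toy list of size 1 computing OR of two {-1,1} inputs (1 encodes 'true'):
print(eval_ldl([((1, 1), 0, 1)], 0, (-1, -1)))  # 0: no query fires
print(eval_ldl([((1, 1), 0, 1)], 0, (1, -1)))   # 1: first query fires
```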
Joel Friedman, 2017
We develop a notion of {\em inner rank} as a tool for obtaining lower bounds on the rank of matrix multiplication tensors. We use it to give a short proof that the border rank (and therefore rank) of the tensor associated with $n \times n$ matrix multiplication over an arbitrary field is at least $2n^2 - n + 1$. While inner rank does not provide improvements to currently known lower bounds, we argue that this notion merits further study.
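For reference, the tensor in question is the standard trilinear form of $n \times n$ matrix multiplication, $\langle n,n,n \rangle = \sum_{i,j,l=1}^{n} x_{ij}\, y_{jl}\, z_{li}$, so the result reads $\underline{R}(\langle n,n,n \rangle) \ge 2n^2 - n + 1$ for the border rank $\underline{R}$, and hence also for the rank (this restatement is ours, using the standard definition rather than notation from the paper).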
In the communication problem $\mathbf{UR}$ (universal relation) [KRW95], Alice and Bob respectively receive $x, y \in \{0,1\}^n$ with the promise that $x \neq y$. The last player to receive a message must output an index $i$ such that $x_i \neq y_i$. We prove that the randomized one-way communication complexity of this problem in the public coin model is exactly $\Theta(\min\{n, \log(1/\delta)\log^2(\frac{n}{\log(1/\delta)})\})$ for failure probability $\delta$. Our lower bound holds even if it is promised that $\mathop{support}(y) \subset \mathop{support}(x)$. As a corollary, we obtain optimal lower bounds for $\ell_p$-sampling in strict turnstile streams for $0 \le p < 2$, as well as for the problem of finding duplicates in a stream. Our lower bounds do not need to use large weights, and hold even if it is promised that $x \in \{0,1\}^n$ at all points in the stream. We give two different proofs of our main result. The first proof demonstrates that any algorithm $\mathcal{A}$ solving sampling problems in turnstile streams in low memory can be used to encode subsets of $[n]$ of certain sizes into a number of bits below the information-theoretic minimum. Our encoder makes adaptive queries to $\mathcal{A}$ throughout its execution, but does so carefully so as not to violate correctness. This is accomplished by injecting random noise into the encoder's interactions with $\mathcal{A}$, which is loosely motivated by techniques in differential privacy. Our second proof is via a novel randomized reduction from Augmented Indexing [MNSW98] which needs to interact with $\mathcal{A}$ adaptively. To handle the adaptivity, we identify certain likely interaction patterns and union bound over them to guarantee correct interaction on all of them. To guarantee correctness, it is important that the interaction hides some of its randomness from $\mathcal{A}$ in the reduction.
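To fix the setup, here is a minimal sketch (ours) of a $\mathbf{UR}$ instance together with the trivial one-way protocol in which Alice sends her entire input; the theorem above says that public randomness reduces this to polylogarithmic communication for constant $\delta$, and that this is tight.

```python
# Minimal sketch (ours): the UR problem and the trivial one-way protocol.
# Alice holds x, Bob holds y, with the promise x != y; after Alice's single
# message, Bob must output an index i with x_i != y_i. Sending all of x
# costs n bits; the theorem says randomization achieves (and requires)
# Theta(min{n, log(1/d) log^2(n / log(1/d))}) bits for failure probability d.
def trivial_protocol(x, y):
    message = x                        # Alice's message: her entire input
    return next(i for i in range(len(y)) if message[i] != y[i])

x = [0, 1, 1, 0, 1]
y = [0, 1, 0, 0, 1]
print(trivial_protocol(x, y))          # 2, since x_2 != y_2
```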
