
On the Hardness of Average-case k-SUM

Publication date: 2020
Language: English





In this work, we show the first worst-case to average-case reduction for the classical $k$-SUM problem. A $k$-SUM instance is a collection of $m$ integers, and the goal of the $k$-SUM problem is to find a subset of $k$ elements that sums to $0$. In the average-case version, the $m$ elements are chosen uniformly at random from some interval $[-u,u]$. We consider the total setting where $m$ is sufficiently large (with respect to $u$ and $k$), so that we are guaranteed (with high probability) that solutions must exist. Much of the appeal of $k$-SUM, in particular its connections to problems in computational geometry, extends to the total setting. The best known algorithm in the average-case total setting is due to Wagner (following the approach of Blum-Kalai-Wasserman), and achieves a run-time of $u^{O(1/\log k)}$. This beats the known (conditional) lower bounds for worst-case $k$-SUM, raising the natural question of whether it can be improved even further. However, in this work, we show a matching average-case lower bound via a reduction from worst-case lattice problems, thus introducing a new family of techniques into the field of fine-grained complexity. In particular, we show that any algorithm solving average-case $k$-SUM on $m$ elements in time $u^{o(1/\log k)}$ would give a super-polynomial improvement in the complexity of algorithms for lattice problems.
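To make the problem statement concrete, here is a minimal Python sketch that samples an average-case instance and searches for a zero-sum $k$-subset by brute force. The parameters are illustrative, and the brute-force search is not the Blum-Kalai-Wasserman/Wagner-style algorithm discussed above; it only pins down what an instance and a solution look like.

```python
import itertools
import random

def average_case_ksum_instance(m, u, seed=0):
    """Sample m integers uniformly from [-u, u] (the average-case distribution)."""
    rng = random.Random(seed)
    return [rng.randint(-u, u) for _ in range(m)]

def brute_force_ksum(elements, k):
    """Return some k-subset summing to 0, or None. Runs in time O(m^k); purely illustrative."""
    for combo in itertools.combinations(elements, k):
        if sum(combo) == 0:
            return combo
    return None

if __name__ == "__main__":
    k, u = 3, 10**4
    m = 200  # large enough that a zero-sum triple exists with high probability
    instance = average_case_ksum_instance(m, u)
    print(brute_force_ksum(instance, k))
```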




Related research

We study the problem of approximating the value of a Unique Game instance in the streaming model. A simple count of the number of constraints divided by $p$, the alphabet size of the Unique Game, gives a trivial $p$-approximation that can be computed in $O(\log n)$ space. Meanwhile, with high probability, a sample of $\tilde{O}(n)$ constraints suffices to estimate the optimal value to $(1+\epsilon)$ accuracy. We prove that any single-pass streaming algorithm that achieves a $(p-\epsilon)$-approximation requires $\Omega_\epsilon(\sqrt{n})$ space. Our proof is via a reduction from lower bounds for a communication problem that is a $p$-ary variant of the Boolean Hidden Matching problem studied in the literature. Given the utility of Unique Games as a starting point for reduction to other optimization problems, our strong hardness for approximating Unique Games could lead to downstream hardness results for streaming approximability for other CSP-like problems.
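A minimal sketch of the trivial $p$-approximation mentioned in this abstract: count the constraints in one pass and output $m/p$. Since a uniformly random assignment satisfies each Unique Games constraint with probability $1/p$, the optimum lies between $m/p$ and $m$. The `constraint_stream` iterable below is a placeholder for however the stream is delivered.

```python
def trivial_streaming_ug_estimate(constraint_stream, p):
    """One-pass p-approximation to the Unique Games value using a single counter.

    Each constraint is discarded after being counted, so the space used is just
    the O(log n) bits of the counter. The output m/p satisfies m/p <= OPT <= m.
    """
    m = 0
    for _constraint in constraint_stream:
        m += 1
    return m / p
```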
A function $f:[n_1]\times\dots\times[n_d]\to\mathbb{F}_2$ is a direct sum if it is of the form $f(a_1,\dots,a_d) = f_1(a_1)\oplus\dots\oplus f_d(a_d)$, for some $d$ functions $f_i:[n_i]\to\mathbb{F}_2$ for all $i=1,\dots,d$, and where $n_1,\dots,n_d\in\mathbb{N}$. We present a $4$-query test which distinguishes between direct sums and functions that are far from them. The test relies on the BLR linearity test (Blum, Luby, Rubinfeld, 1993) and on an agreement test which slightly generalizes the direct product test (Dinur, Steurer, 2014). In multiplicative $\pm 1$ notation, our result reads as follows. A $d$-dimensional tensor with $\pm 1$ entries is called a tensor product if it is a tensor product of $d$ vectors with $\pm 1$ entries, or equivalently, if it is of rank $1$. The presented tests can be read as tests for distinguishing between tensor products and tensors that are far from being tensor products. We also present a different test, which queries the function at most $(d+2)$ times, but is easier to analyze.
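The following sketch only restates the direct-sum definition in code (it is not the 4-query test itself): a direct sum is built coordinate-wise by XOR, and in multiplicative notation $(-1)^f$ is a rank-1 tensor because $(-1)^{b_1\oplus\dots\oplus b_d} = (-1)^{b_1}\cdots(-1)^{b_d}$.

```python
import itertools

def direct_sum(fs):
    """Table of f(a_1,...,a_d) = f_1(a_1) XOR ... XOR f_d(a_d),
    where fs[i] is the truth table (list of bits) of f_i on [n_i]."""
    domains = [range(len(f)) for f in fs]
    return {a: sum(f[x] for f, x in zip(fs, a)) % 2
            for a in itertools.product(*domains)}

# Example: f_1 on a domain of size 2, f_2 on a domain of size 3.
f = direct_sum([[0, 1], [1, 0, 1]])
print(f[(1, 2)])  # f_1(1) XOR f_2(2) = 1 XOR 1 = 0
```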
We study the problem of efficiently refuting the k-colorability of a graph, or equivalently certifying a lower bound on its chromatic number. We give formal evidence of average-case computational hardness for this problem in sparse random regular graphs, showing optimality of a simple spectral certificate. This evidence takes the form of a computationally-quiet planting: we construct a distribution of d-regular graphs that has significantly smaller chromatic number than a typical regular graph drawn uniformly at random, while providing evidence that these two distributions are indistinguishable by a large class of algorithms. We generalize our results to the more general problem of certifying an upper bound on the maximum k-cut. This quiet planting is achieved by minimizing the effect of the planted structure (e.g. colorings or cuts) on the graph spectrum. Specifically, the planted structure corresponds exactly to eigenvectors of the adjacency matrix. This avoids the pushout effect of random matrix theory, and delays the point at which the planting becomes visible in the spectrum or local statistics. To illustrate this further, we give similar results for a Gaussian analogue of this problem: a quiet version of the spiked model, where we plant an eigenspace rather than adding a generic low-rank perturbation. Our evidence for computational hardness of distinguishing two distributions is based on three different heuristics: stability of belief propagation, the local statistics hierarchy, and the low-degree likelihood ratio. Of independent interest, our results include general-purpose bounds on the low-degree likelihood ratio for multi-spiked matrix models, and an improved low-degree analysis of the stochastic block model.
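The abstract above refers to "a simple spectral certificate" without spelling it out. One standard spectral certificate of a chromatic-number lower bound is the Hoffman-type bound $\chi(G) \ge 1 + \lambda_{\max}/|\lambda_{\min}|$; the sketch below computes that bound as an illustration, under the assumption that this is the kind of certificate meant.

```python
import numpy as np

def hoffman_chromatic_lower_bound(adjacency):
    """Spectral certificate chi(G) >= 1 + lambda_max / |lambda_min|.

    `adjacency` is a symmetric 0/1 numpy array with at least one edge.
    For a d-regular graph lambda_max = d, so the bound is driven by the
    most negative eigenvalue of the adjacency matrix.
    """
    eigenvalues = np.linalg.eigvalsh(adjacency)  # ascending order
    lam_min, lam_max = eigenvalues[0], eigenvalues[-1]
    return 1 + lam_max / abs(lam_min)
```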
We study the computational complexity of approximating the 2->q norm of linear operators (defined as ||A||_{2->q} = sup_v ||Av||_q/||v||_2), as well as connections between this question and issues arising in quantum information theory and the study of Khot's Unique Games Conjecture (UGC). We show the following: 1. For any constant even integer q>=4, a graph $G$ is a small-set expander if and only if the projector into the span of the top eigenvectors of G's adjacency matrix has bounded 2->q norm. As a corollary, a good approximation to the 2->q norm will refute the Small-Set Expansion Conjecture--a close variant of the UGC. We also show that such a good approximation can be obtained in exp(n^(2/q)) time, thus obtaining a different proof of the known subexponential algorithm for Small Set Expansion. 2. Constant rounds of the Sum of Squares semidefinite programming hierarchy certify an upper bound on the 2->4 norm of the projector to low-degree polynomials over the Boolean cube, as well as certify the unsatisfiability of the noisy cube and short code based instances of Unique Games considered by prior works. This improves on the previous upper bound of exp(poly log n) rounds (for the short code), as well as separates the Sum of Squares/Lasserre hierarchy from weaker hierarchies that were known to require omega(1) rounds. 3. We show reductions between computing the 2->4 norm and computing the injective tensor norm of a tensor, a problem with connections to quantum information theory. Three corollaries are: (i) the 2->4 norm is NP-hard to approximate to precision inverse-polynomial in the dimension, (ii) the 2->4 norm does not have a good approximation (in the sense above) unless 3-SAT can be solved in time exp(sqrt(n) polylog(n)), and (iii) known algorithms for the quantum separability problem imply a non-trivial additive approximation for the 2->4 norm.
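As a small illustration of the quantity being studied (not of the algorithms or hardness results above), the sketch below produces a heuristic lower bound on ||A||_{2->q} by evaluating the defining ratio at random unit vectors; the paper's point is precisely that certifying good upper bounds is the hard direction.

```python
import numpy as np

def two_to_q_lower_bound(A, q, trials=1000, seed=0):
    """Heuristic lower bound on ||A||_{2->q} = sup_v ||Av||_q / ||v||_2,
    obtained by sampling random unit vectors v (illustration only)."""
    rng = np.random.default_rng(seed)
    n = A.shape[1]
    best = 0.0
    for _ in range(trials):
        v = rng.standard_normal(n)
        v /= np.linalg.norm(v)          # ||v||_2 = 1
        best = max(best, np.linalg.norm(A @ v, ord=q))
    return best

# Example usage with q = 4 on a random 20x20 matrix.
print(two_to_q_lower_bound(np.random.default_rng(1).standard_normal((20, 20)), q=4))
```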
We study the complexity of Boolean constraint satisfaction problems (CSPs) when the assignment must have Hamming weight in some congruence class modulo M, for various choices of the modulus M. Due to the known classification of tractable Boolean CSPs, this mainly reduces to the study of three cases: 2-SAT, HORN-SAT, and LIN-2 (linear equations mod 2). We classify the moduli M for which these respective problems are polynomial-time solvable, and when they are not (assuming the ETH). Our study reveals that this modular constraint lends a surprising richness to these classic, well-studied problems, with interesting broader connections to complexity theory and coding theory. The HORN-SAT case is connected to the covering complexity of polynomials representing the NAND function mod M. The LIN-2 case is tied to the sparsity of polynomials representing the OR function mod M, which in turn has connections to modular weight distribution properties of linear codes and locally decodable codes. In both cases, the analysis of our algorithm as well as the hardness reduction rely on these polynomial representations, highlighting an interesting algebraic common ground between hard cases for our algorithms and the gadgets which show hardness. These new complexity measures of polynomial representations merit further study. The inspiration for our study comes from a recent work by Nägele, Sudakov, and Zenklusen on submodular minimization with a global congruence constraint. Our algorithm for HORN-SAT has strong similarities to theirs, and in particular the same kind of set systems arises in both cases. Our connection to polynomial representations leads to a simpler analysis of such set systems, and also sheds light on (but does not resolve) the complexity of submodular minimization with a congruency requirement modulo a composite M.
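To pin down the problem statement for the LIN-2 case, here is a brute-force sketch of the toy question: find an assignment satisfying a system of parity constraints whose Hamming weight lies in a prescribed residue class mod M. It is exponential in the number of variables and says nothing about the classification or the polynomial-representation machinery described above.

```python
import itertools

def lin2_with_weight_mod(equations, n, modulus, residue):
    """Find x in {0,1}^n with weight(x) = residue (mod modulus) satisfying every
    parity constraint sum_{i in S} x_i = b (mod 2); return None if none exists.
    Brute force over all 2^n assignments; purely illustrative."""
    for x in itertools.product((0, 1), repeat=n):
        if sum(x) % modulus != residue:
            continue
        if all(sum(x[i] for i in support) % 2 == b for support, b in equations):
            return x
    return None

# Example: x0 + x1 = 1 (mod 2), x1 + x2 = 0 (mod 2), Hamming weight = 1 (mod 3).
print(lin2_with_weight_mod([((0, 1), 1), ((1, 2), 0)], n=3, modulus=3, residue=1))
```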