
From Weak to Strong LP Gaps for all CSPs

Published by Madhur Tulsiani
Publication date: 2016
Research field: Informatics Engineering
Paper language: English





We study the approximability of constraint satisfaction problems (CSPs) by linear programming (LP) relaxations. We show that for every CSP, the approximation obtained by a basic LP relaxation is no weaker than the approximation obtained using relaxations given by $\Omega\left(\frac{\log n}{\log\log n}\right)$ levels of the Sherali-Adams hierarchy on instances of size $n$. It was proved by Chan et al. [FOCS 2013] that any polynomial-size LP extended formulation is no stronger than relaxations obtained by a super-constant number of levels of the Sherali-Adams hierarchy. Combined with our result, this also implies that any polynomial-size LP extended formulation is no stronger than the basic LP. Using our techniques, we also simplify and strengthen the result by Khot et al. [STOC 2014] on (strong) approximation resistance for LPs. They provided a necessary and sufficient condition under which $\Omega(\log\log n)$ levels of the Sherali-Adams hierarchy cannot achieve an approximation better than a random assignment. We simplify their proof and strengthen the bound to $\Omega\left(\frac{\log n}{\log\log n}\right)$ levels.
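For reference, a minimal sketch of the basic LP relaxation in question, in a standard formulation (notation is ours, not taken from the paper): for an instance with weighted constraints $C$ applying payoff functions $f_C$ to variable sets $S_C$, the LP optimizes over a local distribution $\mu_C$ on assignments to $S_C$ for each constraint and a marginal distribution $\mu_i$ for each variable, enforcing only consistency of single-variable marginals:

$$
\begin{aligned}
\text{maximize}\quad & \sum_{C} w_C \sum_{\alpha} \mu_C(\alpha)\, f_C(\alpha)\\
\text{subject to}\quad & \sum_{\alpha\,:\,\alpha(i)=a} \mu_C(\alpha) = \mu_i(a) \qquad \text{for every constraint } C,\ i \in S_C,\ \text{value } a,\\
& \mu_C,\ \mu_i \ \text{nonnegative and summing to } 1.
\end{aligned}
$$

Level-$t$ Sherali-Adams strengthens this by additionally requiring consistent local distributions over every set of at most $t$ variables; the result above says that, for worst-case approximation of CSPs, these extra constraints do not help until $t = \Omega\left(\frac{\log n}{\log\log n}\right)$.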


Read also

We give an efficient algorithm to strongly refute semi-random instances of all Boolean constraint satisfaction problems. The number of constraints required by our algorithm matches (up to polylogarithmic factors) the best-known bounds for efficient refutation of fully random instances. Our main technical contribution is an algorithm to strongly refute semi-random instances of the Boolean $k$-XOR problem on $n$ variables that have $\widetilde{O}(n^{k/2})$ constraints. (In a semi-random $k$-XOR instance, the equations can be arbitrary and only the right-hand sides are random.) One of our key insights is to identify a simple combinatorial property of random XOR instances that makes spectral refutation work. Our approach involves taking an instance that does not satisfy this property (i.e., is not pseudorandom) and reducing it to a partitioned collection of $2$-XOR instances. We analyze these subinstances using a carefully chosen quadratic form as a proxy, which in turn is bounded via a combination of spectral methods and semidefinite programming. The analysis of our spectral bounds relies only on an off-the-shelf matrix Bernstein inequality. Even for the purely random case, this leads to a shorter proof compared to the ones in the literature that rely on problem-specific trace-moment computations.
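As background for the spectral step (a standard calculation in our own notation, not specific to this paper): given a $2$-XOR instance with $m$ equations $x_i x_j = b_{ij}$ over $x \in \{-1,1\}^n$, let $B$ be the symmetric $n \times n$ matrix in which $B_{ij} = B_{ji}$ sums the right-hand sides of the equations on the pair $\{i,j\}$. For any assignment $x$,

$$
\#\{\text{satisfied equations}\} \;=\; \frac{m}{2} + \frac{1}{2}\sum_{\text{equations}} b_{ij}\, x_i x_j \;=\; \frac{m}{2} + \frac{1}{4}\, x^{\top} B x \;\le\; \frac{m}{2} + \frac{n}{4}\,\lVert B\rVert,
$$

so an upper bound on the spectral norm $\lVert B\rVert$ certifies that no assignment satisfies much more than half of the equations. The semi-random setting above is harder because the pairs $\{i,j\}$ are adversarial and only the right-hand sides $b_{ij}$ are random.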
A Boolean constraint satisfaction problem (CSP), Max-CSP$(f)$, is a maximization problem specified by a constraint $f:\{-1,1\}^k\to\{0,1\}$. An instance of the problem consists of $m$ constraint applications on $n$ Boolean variables, where each constraint application applies the constraint to $k$ literals chosen from the $n$ variables and their negations. The goal is to compute the maximum number of constraints that can be satisfied by a Boolean assignment to the $n$ variables. In the $(\gamma,\beta)$-approximation version of the problem for parameters $\gamma \geq \beta \in [0,1]$, the goal is to distinguish instances where at least a $\gamma$ fraction of the constraints can be satisfied from instances where at most a $\beta$ fraction of the constraints can be satisfied. In this work we consider the approximability of Max-CSP$(f)$ in the (dynamic) streaming setting, where constraints are inserted (and may also be deleted in the dynamic setting) one at a time. We completely characterize the approximability of all Boolean CSPs in the dynamic streaming setting. Specifically, given $f$, $\gamma$ and $\beta$, we show that either (1) the $(\gamma,\beta)$-approximation version of Max-CSP$(f)$ has a probabilistic dynamic streaming algorithm using $O(\log n)$ space, or (2) for every $\varepsilon > 0$ the $(\gamma-\varepsilon,\beta+\varepsilon)$-approximation version of Max-CSP$(f)$ requires $\Omega(\sqrt{n})$ space for probabilistic dynamic streaming algorithms. We also extend previously known results in the insertion-only setting to a wide variety of cases, in particular the case of $k=2$, where we get a dichotomy, and the case when the satisfying assignments of $f$ support a distribution on $\{-1,1\}^k$ with uniform marginals.
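To make the definition concrete, here is a tiny brute-force evaluation of the Max-CSP$(f)$ value (illustrative only; it has nothing to do with the streaming algorithms of the paper, and all names are ours):

from itertools import product

def max_csp_value(f, n, constraints):
    """Brute-force the Max-CSP(f) value of a tiny instance.

    f maps a tuple in {-1,1}^k to 0 or 1.  Each constraint is a k-tuple of
    signed, 1-indexed literals: +i stands for the variable x_i and -i for
    its negation.  Returns the maximum fraction of satisfied constraints
    over all Boolean assignments in {-1,1}^n.
    """
    best = 0.0
    for x in product((-1, 1), repeat=n):
        sat = sum(
            f(tuple(x[abs(l) - 1] * (1 if l > 0 else -1) for l in lits))
            for lits in constraints
        )
        best = max(best, sat / len(constraints))
    return best

# Example: Max-CUT is Max-CSP(f) with k = 2 and f(a, b) = 1 iff a != b.
cut = lambda z: int(z[0] != z[1])
# On a triangle over x_1, x_2, x_3 at most 2 of the 3 edges can be cut, so the
# value is 2/3; a (1, 2/3)-approximation algorithm must distinguish fully
# cuttable instances from instances like this one.
print(max_csp_value(cut, n=3, constraints=[(1, 2), (2, 3), (1, 3)]))  # 0.666...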
We consider the approximability of constraint satisfaction problems in the streaming setting. For every constraint satisfaction problem (CSP) on $n$ variables taking values in $\{0,\ldots,q-1\}$, we prove that improving over the trivial approximability by a factor of $q$ requires $\Omega(n)$ space even on instances with $O(n)$ constraints. We also identify a broad subclass of problems for which any improvement over the trivial approximability requires $\Omega(n)$ space. The key technical core is an optimal, $q^{-(k-1)}$-inapproximability for the case where every constraint is given by a system of $k-1$ linear equations $\bmod\ q$ over $k$ variables. Prior to our work, no such hardness was known for an approximation factor less than $1/2$ for any CSP. Our work builds on and extends the work of Kapralov and Krachun (Proc. STOC 2019), who showed a linear lower bound on any non-trivial approximation of the max cut in graphs. This corresponds roughly to the case of Max $k$-LIN $\bmod\ q$ with $k=q=2$. Each one of the extensions provides non-trivial technical challenges that we overcome in this work.
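For intuition on the factor $q^{-(k-1)}$ (a standard counting observation, stated here in our own words): when a constraint is a system of $k-1$ linear equations mod $q$ over its $k$ variables whose solution set has exactly $q$ elements (the generic case), a uniformly random assignment satisfies it with probability

$$
\frac{q}{q^{k}} \;=\; q^{-(k-1)},
$$

which is what the trivial random-assignment algorithm achieves in expectation; the result above says that beating this factor in the streaming setting needs $\Omega(n)$ space. For Max-CUT viewed as the case $k=q=2$ (a single equation $x_i + x_j = 1 \bmod 2$), this is the familiar factor $1/2$.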
We study the complexity of Boolean constraint satisfaction problems (CSPs) when the assignment must have Hamming weight in some congruence class modulo $M$, for various choices of the modulus $M$. Due to the known classification of tractable Boolean CSPs, this mainly reduces to the study of three cases: 2-SAT, HORN-SAT, and LIN-2 (linear equations mod 2). We classify the moduli $M$ for which these respective problems are polynomial-time solvable, and when they are not (assuming the ETH). Our study reveals that this modular constraint lends a surprising richness to these classic, well-studied problems, with interesting broader connections to complexity theory and coding theory. The HORN-SAT case is connected to the covering complexity of polynomials representing the NAND function mod $M$. The LIN-2 case is tied to the sparsity of polynomials representing the OR function mod $M$, which in turn has connections to modular weight distribution properties of linear codes and locally decodable codes. In both cases, the analysis of our algorithm as well as the hardness reduction rely on these polynomial representations, highlighting an interesting algebraic common ground between hard cases for our algorithms and the gadgets which show hardness. These new complexity measures of polynomial representations merit further study. The inspiration for our study comes from a recent work by Nägele, Sudakov, and Zenklusen on submodular minimization with a global congruence constraint. Our algorithm for HORN-SAT has strong similarities to their algorithm, and in particular identical kinds of set systems arise in both cases. Our connection to polynomial representations leads to a simpler analysis of such set systems, and also sheds light on (but does not resolve) the complexity of submodular minimization with a congruency requirement modulo a composite $M$.
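As a small, standard illustration of the kind of object involved (our example, not a construction from the paper): over $\{0,1\}$ inputs the OR function has the exact representation

$$
\mathrm{OR}(x_1,\dots,x_n) \;=\; 1 - \prod_{i=1}^{n}(1-x_i) \;=\; \sum_{\emptyset \neq S \subseteq [n]} (-1)^{|S|+1} \prod_{i \in S} x_i,
$$

which is valid modulo every $M$ but has $2^n - 1$ monomials; the LIN-2 case above is tied to how much sparser a polynomial representing OR can be for a given (especially composite) modulus $M$.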