
Streaming Hardness of Unique Games

Posted by Runzhou Tao
Publication date: 2018
Research field: Informatics Engineering
Paper language: English





We study the problem of approximating the value of a Unique Game instance in the streaming model. A simple count of the number of constraints divided by $p$, the alphabet size of the Unique Game, gives a trivial $p$-approximation that can be computed in $O(\log n)$ space. Meanwhile, with high probability, a sample of $\tilde{O}(n)$ constraints suffices to estimate the optimal value to $(1+\epsilon)$ accuracy. We prove that any single-pass streaming algorithm that achieves a $(p-\epsilon)$-approximation requires $\Omega_\epsilon(\sqrt{n})$ space. Our proof is via a reduction from lower bounds for a communication problem that is a $p$-ary variant of the Boolean Hidden Matching problem studied in the literature. Given the utility of Unique Games as a starting point for reductions to other optimization problems, our strong hardness result for approximating Unique Games could lead to down\emph{stream} hardness results for the streaming approximability of other CSP-like problems.
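To make the trivial baseline concrete, here is a minimal single-pass sketch (hypothetical Python, not code from the paper). The bound holds because a uniformly random assignment satisfies each constraint with probability $1/p$, so an instance with $m$ constraints has optimum between $m/p$ and $m$, and reporting $m/p$ requires only a counter.

```python
# Minimal sketch of the trivial p-approximation, assuming constraints
# arrive one at a time in a single pass. Only a counter is maintained,
# i.e. O(log m) bits of space; the constraints themselves are ignored.

def trivial_unique_games_estimate(constraint_stream, p):
    """Return m/p, which satisfies m/p <= OPT <= m = p * (m/p)."""
    m = 0
    for _ in constraint_stream:  # contents of each constraint never inspected
        m += 1
    return m / p

# Toy usage: three constraints (u, v, permutation) over alphabet size p = 2.
stream = [(0, 1, (1, 0)), (1, 2, (0, 1)), (0, 2, (1, 0))]
print(trivial_unique_games_estimate(stream, p=2))  # -> 1.5
```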


Read also

These are the lecture notes for the DIMACS Tutorial Limits of Approximation Algorithms: PCPs and Unique Games held at the DIMACS Center, CoRE Building, Rutgers University on 20-21 July, 2009. This tutorial was jointly sponsored by the DIMACS Special Focus on Hardness of Approximation, the DIMACS Special Focus on Algorithmic Foundations of the Internet, and the Center for Computational Intractability with support from the National Security Agency and the National Science Foundation. The speakers at the tutorial were Matthew Andrews, Sanjeev Arora, Moses Charikar, Prahladh Harsha, Subhash Khot, Dana Moshkovitz and Lisa Zhang. The scribes were Ashkan Aazami, Dev Desai, Igor Gorodezky, Geetha Jagannathan, Alexander S. Kulikov, Darakhshan J. Mir, Alantha Newman, Aleksandar Nikolov, David Pritchard and Gwen Spencer.
We consider the complexity properties of the modern puzzle games Hexiom, Cut the Rope, and Back to Bed. The complexity of games plays an important role in the type of experience they provide to players. Back to Bed is shown to be PSPACE-Hard and the first two are shown to be NP-Hard. These results give further insight into the structure of these games, and the resulting constructions may be useful in further complexity studies.
Alexandra Kolla, 2011
We give a new algorithm for Unique Games which is based on purely \emph{spectral} techniques, in contrast to previous work in the area, which relies heavily on semidefinite programming (SDP). Given a highly satisfiable instance of Unique Games, our algorithm is able to recover a good assignment. The approximation guarantee depends only on the completeness of the game, and not on the alphabet size, while the running time depends on spectral properties of the \emph{Label-Extended} graph associated with the instance of Unique Games. We further show that on input the integrality gap instance of Khot and Vishnoi, our algorithm runs in quasi-polynomial time and decides that the instance is highly unsatisfiable. Notably, when run on this instance, the standard SDP relaxation of Unique Games \emph{fails}. As a special case, we also re-derive a polynomial-time algorithm for Unique Games on expander constraint graphs. The main ingredient of our algorithm is a technique to effectively use the full spectrum of the underlying graph instead of just the second eigenvalue, which is of independent interest. The question of how to take advantage of the full spectrum of a graph in the design of algorithms has often been studied, but no significant progress was made prior to this work.
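As a toy illustration of the Label-Extended graph mentioned above (a hypothetical sketch, not the algorithm of the paper): an instance on $n$ variables over an alphabet of size $k$ becomes a graph on $nk$ vertices, where a constraint $(u, v, \pi)$ connects $(u, a)$ to $(v, \pi(a))$ for every label $a$, and an assignment corresponds to choosing one vertex per variable. The naive eigenvector rounding below is a deliberate simplification.

```python
# Hypothetical illustration of the Label-Extended graph construction and a
# naive spectral rounding; assumes numpy, and is NOT the paper's algorithm.
import numpy as np

def label_extended_adjacency(n, k, constraints):
    """constraints: list of (u, v, pi), pi a permutation tuple of range(k)."""
    A = np.zeros((n * k, n * k))
    for u, v, pi in constraints:
        for a in range(k):
            i, j = u * k + a, v * k + pi[a]  # edge between (u,a) and (v,pi(a))
            A[i, j] = A[j, i] = 1.0
    return A

def spectral_assignment(n, k, constraints):
    """Toy rounding: each variable gets the label carrying the most mass in a
    top eigenvector of the Label-Extended graph."""
    A = label_extended_adjacency(n, k, constraints)
    _, vecs = np.linalg.eigh(A)      # eigenvalues returned in ascending order
    top = vecs[:, -1]                # eigenvector of the largest eigenvalue
    return [int(np.argmax(np.abs(top[v * k:(v + 1) * k]))) for v in range(n)]

# A fully satisfiable toy instance: x1 = x0 and x2 = x1 over alphabet {0, 1}.
print(spectral_assignment(3, 2, [(0, 1, (0, 1)), (1, 2, (0, 1))]))
```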
A pseudo-deterministic algorithm is a (randomized) algorithm which, when run multiple times on the same input, with high probability outputs the same result on all executions. Classic streaming algorithms, such as those for finding heavy hitters, approximate counting, $\ell_2$ approximation, and finding a nonzero entry in a vector (for turnstile algorithms), are not pseudo-deterministic. For example, in the instance of finding a nonzero entry in a vector, for any known low-space algorithm $A$, there exists a stream $x$ so that running $A$ twice on $x$ (using different randomness) would with high probability result in two different entries as the output. In this work, we study whether it is inherent that these algorithms output different values on different executions. That is, we ask whether these problems have low-memory pseudo-deterministic algorithms. For instance, we show that there is no low-memory pseudo-deterministic algorithm for finding a nonzero entry in a vector (given in a turnstile fashion), and also that there is no low-dimensional pseudo-deterministic sketching algorithm for $\ell_2$ norm estimation. We also exhibit problems which do have low-memory pseudo-deterministic algorithms but no low-memory deterministic algorithm, such as outputting a nonzero row of a matrix, or outputting a basis for the row-span of a matrix. We also investigate multi-pseudo-deterministic algorithms: algorithms which with high probability output one of a few options. We show the first lower bounds for such algorithms. This implies that there are streaming problems such that every low-space algorithm for the problem must have inputs where there are many valid outputs, all with a significant probability of being output.
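To illustrate why such algorithms fail to be pseudo-deterministic, consider the following toy routine (hypothetical, and deliberately not low-space) for finding a nonzero entry of a vector given by turnstile updates: its output depends on its internal coin flips, so two executions on the same stream may return different, equally valid indices.

```python
# Hypothetical toy example: a randomized nonzero-entry finder that is NOT
# pseudo-deterministic. It aggregates the whole vector (Omega(n) space,
# which a real streaming algorithm cannot afford) purely for illustration.
import random

def find_nonzero(updates, n, seed):
    """updates: turnstile stream of (index, delta) pairs over a length-n vector."""
    x = [0] * n
    for i, d in updates:
        x[i] += d
    order = list(range(n))
    random.Random(seed).shuffle(order)   # the coins that break pseudo-determinism
    for i in order:
        if x[i] != 0:
            return i
    return None

stream = [(2, 1), (5, 3), (7, -1), (7, 1)]   # final vector nonzero at 2 and 5
print(find_nonzero(stream, n=10, seed=1), find_nonzero(stream, n=10, seed=2))
# Depending on the seed, either index 2 or index 5 is reported: both are
# correct answers, but they need not agree across executions.
```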
In this work, we show the first worst-case to average-case reduction for the classical $k$-SUM problem. A $k$-SUM instance is a collection of $m$ integers, and the goal of the $k$-SUM problem is to find a subset of $k$ elements that sums to $0$. In the average-case version, the $m$ elements are chosen uniformly at random from some interval $[-u,u]$. We consider the total setting where $m$ is sufficiently large (with respect to $u$ and $k$), so that we are guaranteed (with high probability) that solutions must exist. Much of the appeal of $k$-SUM, in particular connections to problems in computational geometry, extends to the total setting. The best known algorithm in the average-case total setting is due to Wagner (following the approach of Blum-Kalai-Wasserman), and achieves a run-time of $u^{O(1/\log k)}$. This beats the known (conditional) lower bounds for worst-case $k$-SUM, raising the natural question of whether it can be improved even further. However, in this work, we show a matching average-case lower bound, by showing a reduction from worst-case lattice problems, thus introducing a new family of techniques into the field of fine-grained complexity. In particular, we show that any algorithm solving average-case $k$-SUM on $m$ elements in time $u^{o(1/\log k)}$ will give a super-polynomial improvement in the complexity of algorithms for lattice problems.
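For flavor, here is a small hypothetical sketch of the average-case total setting together with a Wagner-style two-level merge for $k = 4$ (an illustration of the general idea only, with made-up parameters, not a faithful or optimized implementation): pair sums are first filtered on their low-order bits, and the two surviving lists are then checked for an exact cancellation.

```python
# Hypothetical sketch: average-case 4-SUM with a Wagner-style merge.
# All names and parameters (m, u, b) are illustrative choices.
import random

def four_sum_average_case(m=2000, u=10**6, b=10, seed=0):
    rng = random.Random(seed)
    xs = [rng.randint(-u, u) for _ in range(m)]      # average-case instance
    q = m // 4
    L = [xs[i * q:(i + 1) * q] for i in range(4)]    # four disjoint quarters
    mod = 1 << b

    def merge(A, B):
        # First level: keep only pair sums whose low b bits vanish, which
        # shrinks the candidate lists before the exact-match step below.
        return [(a + c, (a, c)) for a in A for c in B if (a + c) % mod == 0]

    left = merge(L[0], L[1])
    right = {s: pair for s, pair in merge(L[2], L[3])}
    for s, (a, c) in left:
        if -s in right:                  # second level: exact cancellation
            return (a, c) + right[-s]    # four elements summing to 0
    return None                          # the filter may miss all solutions

print(four_sum_average_case())
```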