
Hardness of Peeling with Stashes

Posted by Vikram Nathan
Publication date: 2014
Research field: Informatics Engineering
Paper language: English





The analysis of several algorithms and data structures can be framed as a peeling process on a random hypergraph: vertices with degree less than $k$ and their adjacent edges are removed until no vertices of degree less than $k$ are left. Often the question is whether the remaining hypergraph, the $k$-core, is empty or not. In some settings, it may be possible to remove either vertices or edges from the hypergraph before peeling, at some cost. For example, in hashing applications where keys correspond to edges and buckets to vertices, one might use an additional side data structure, commonly referred to as a stash, to separately handle some keys in order to avoid collisions. The natural question in such cases is to find the minimum number of edges (or vertices) that need to be stashed in order to realize an empty $k$-core. We show that both these problems are NP-complete for all $k \geq 2$ on graphs and regular hypergraphs, with the sole exception being that the edge variant of stashing is solvable in polynomial time for $k = 2$ on standard (2-uniform) graphs.
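
To make the peeling process concrete, here is a minimal Python sketch (the representation and function name are illustrative choices, not taken from the paper): a hypergraph is given as a list of edges, each edge a tuple of vertex ids, and vertices of degree below $k$ are repeatedly deleted together with their incident edges.

```python
from collections import defaultdict

def peel(num_vertices, edges, k):
    """Repeatedly delete vertices of degree < k together with their incident
    edges; return the edges of the surviving k-core ([] if it peels away)."""
    incident = defaultdict(set)          # vertex -> indices of surviving edges
    for i, e in enumerate(edges):
        for v in set(e):
            incident[v].add(i)
    alive = set(range(len(edges)))
    stack = [v for v in range(num_vertices) if len(incident[v]) < k]
    while stack:
        v = stack.pop()
        for i in list(incident[v]):      # remove every edge still touching v
            alive.discard(i)
            for u in set(edges[i]):
                incident[u].discard(i)
                if u != v and len(incident[u]) < k:
                    stack.append(u)
        incident[v].clear()
    return [edges[i] for i in sorted(alive)]

# With k = 2, this tiny 3-uniform hypergraph peels down to an empty 2-core,
# so no stashing would be needed at all.
print(peel(4, [(0, 1, 2), (1, 2, 3)], k=2))   # -> []
```

Stashing an edge amounts to deleting it from `edges` before calling `peel`; the decision problem studied here asks whether some bounded number of such deletions (or of vertex deletions, in the vertex variant) suffices to make the returned $k$-core empty.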




Read also

We consider the complexity properties of modern puzzle games, Hexiom, Cut the Rope and Back to Bed. The complexity of games plays an important role in the type of experience they provide to players. Back to Bed is shown to be PSPACE-Hard and the first two are shown to be NP-Hard. These results give further insight into the structure of these games and the resulting constructions may be useful in further complexity studies.
We study the problem of approximating the value of a Unique Game instance in the streaming model. A simple count of the number of constraints divided by $p$, the alphabet size of the Unique Game, gives a trivial $p$-approximation that can be computed in $O(\log n)$ space. Meanwhile, with high probability, a sample of $\tilde{O}(n)$ constraints suffices to estimate the optimal value to $(1+\epsilon)$ accuracy. We prove that any single-pass streaming algorithm that achieves a $(p-\epsilon)$-approximation requires $\Omega_\epsilon(\sqrt{n})$ space. Our proof is via a reduction from lower bounds for a communication problem that is a $p$-ary variant of the Boolean Hidden Matching problem studied in the literature. Given the utility of Unique Games as a starting point for reduction to other optimization problems, our strong hardness for approximating Unique Games could lead to downstream hardness results for streaming approximability for other CSP-like problems.
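
As an illustration of the trivial baseline mentioned in this abstract, the following sketch keeps only a constraint counter (so $O(\log n)$ bits of state) and reports $m/p$; the constraint format noted in the comment is an assumption made for the example, not the paper's notation.

```python
def trivial_unique_games_estimate(constraint_stream, p):
    """Single-pass p-approximation of the value of a Unique Game.

    Only a counter is stored. A uniformly random labeling satisfies each
    permutation constraint with probability exactly 1/p, so the optimum lies
    between m/p and m, and reporting m/p is within a factor p of it.
    """
    m = 0
    for _ in constraint_stream:  # each item: (u, v, permutation on the label set)
        m += 1
    return m / p
```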
Dhruv Rohatgi, 2019
The Earth Mover Distance (EMD) between two sets of points $A, B \subseteq \mathbb{R}^d$ with $|A| = |B|$ is the minimum total Euclidean distance of any perfect matching between $A$ and $B$. One of its generalizations is asymmetric EMD, which is the minimum total Euclidean distance of any matching of size $|A|$ between sets of points $A, B \subseteq \mathbb{R}^d$ with $|A| \leq |B|$. The problems of computing EMD and asymmetric EMD are well-studied and have many applications in computer science, some of which also ask for the EMD-optimal matching itself. Unfortunately, all known algorithms require at least quadratic time to compute EMD exactly. Approximation algorithms with nearly linear time complexity in $n$ are known (even for finding approximately optimal matchings), but suffer from exponential dependence on the dimension. In this paper we show that significant improvements in exact and approximate algorithms for EMD would contradict conjectures in fine-grained complexity. In particular, we prove the following results: (1) Under the Orthogonal Vectors Conjecture, there is some $c>0$ such that EMD in $\Omega(c^{\log^* n})$ dimensions cannot be computed in truly subquadratic time. (2) Under the Hitting Set Conjecture, for every $\delta>0$, no truly subquadratic time algorithm can find a $(1 + 1/n^\delta)$-approximate EMD matching in $\omega(\log n)$ dimensions. (3) Under the Hitting Set Conjecture, for every $\eta = 1/\omega(\log n)$, no truly subquadratic time algorithm can find a $(1 + \eta)$-approximate asymmetric EMD matching in $\omega(\log n)$ dimensions.
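
For reference, exact EMD as defined above is simply a minimum-cost matching on the pairwise distance matrix. A small Python sketch using SciPy's assignment solver (Hungarian method, cubic time, consistent with the abstract's remark that exact algorithms are at least quadratic) is given below; it is an illustration, not one of the algorithms discussed in the paper.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def emd(A, B):
    """Exact EMD via minimum-cost matching on the pairwise Euclidean distance
    matrix. With |A| < |B| the same call computes asymmetric EMD, since SciPy
    matches every row of a rectangular cost matrix to a distinct column."""
    A, B = np.asarray(A, dtype=float), np.asarray(B, dtype=float)
    cost = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)
    rows, cols = linear_sum_assignment(cost)
    return cost[rows, cols].sum()

print(emd([[0, 0], [1, 0]], [[0, 1], [1, 1]]))   # -> 2.0
```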
We study the complexity of Boolean constraint satisfaction problems (CSPs) when the assignment must have Hamming weight in some congruence class modulo M, for various choices of the modulus M. Due to the known classification of tractable Boolean CSPs, this mainly reduces to the study of three cases: 2-SAT, HORN-SAT, and LIN-2 (linear equations mod 2). We classify the moduli M for which these respective problems are polynomial time solvable, and when they are not (assuming the ETH). Our study reveals that this modular constraint lends a surprising richness to these classic, well-studied problems, with interesting broader connections to complexity theory and coding theory. The HORN-SAT case is connected to the covering complexity of polynomials representing the NAND function mod M. The LIN-2 case is tied to the sparsity of polynomials representing the OR function mod M, which in turn has connections to modular weight distribution properties of linear codes and locally decodable codes. In both cases, the analysis of our algorithm as well as the hardness reduction rely on these polynomial representations, highlighting an interesting algebraic common ground between hard cases for our algorithms and the gadgets which show hardness. These new complexity measures of polynomial representations merit further study. The inspiration for our study comes from a recent work by Nagele, Sudakov, and Zenklusen on submodular minimization with a global congruence constraint. Our algorithm for HORN-SAT has strong similarities to their algorithm, and in particular the same kind of set systems arise in both cases. Our connection to polynomial representations leads to a simpler analysis of such set systems, and also sheds light on (but does not resolve) the complexity of submodular minimization with a congruency requirement modulo a composite M.
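
To pin down the kind of problem being classified here, the brute-force sketch below (exponential time, purely illustrative and unrelated to the paper's algorithms) decides 2-SAT under a Hamming-weight congruence constraint; the literal encoding is an assumption made for the example.

```python
from itertools import product

def modular_2sat(clauses, n, M, r):
    """Find a satisfying assignment of a 2-SAT formula over n variables whose
    Hamming weight is congruent to r mod M, or return None.
    Literal +i means variable x_i is true, -i means x_i is false (1-indexed)."""
    for bits in product((0, 1), repeat=n):
        if sum(bits) % M != r:
            continue
        if all(any(bits[abs(l) - 1] == (l > 0) for l in clause) for clause in clauses):
            return bits
    return None

# (x1 OR x2) AND (NOT x1 OR x2) with odd weight: setting only x2 works.
print(modular_2sat([(1, 2), (-1, 2)], n=2, M=2, r=1))   # -> (0, 1)
```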
In this work, we show the first worst-case to average-case reduction for the classical $k$-SUM problem. A $k$-SUM instance is a collection of $m$ integers, and the goal of the $k$-SUM problem is to find a subset of $k$ elements that sums to $0$. In the average-case version, the $m$ elements are chosen uniformly at random from some interval $[-u,u]$. We consider the total setting where $m$ is sufficiently large (with respect to $u$ and $k$), so that we are guaranteed (with high probability) that solutions must exist. Much of the appeal of $k$-SUM, in particular connections to problems in computational geometry, extends to the total setting. The best known algorithm in the average-case total setting is due to Wagner (following the approach of Blum-Kalai-Wasserman), and achieves a run-time of $u^{O(1/\log k)}$. This beats the known (conditional) lower bounds for worst-case $k$-SUM, raising the natural question of whether it can be improved even further. However, in this work, we show a matching average-case lower bound, by showing a reduction from worst-case lattice problems, thus introducing a new family of techniques into the field of fine-grained complexity. In particular, we show that any algorithm solving average-case $k$-SUM on $m$ elements in time $u^{o(1/\log k)}$ will give a super-polynomial improvement in the complexity of algorithms for lattice problems.
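
As a concrete reading of the problem statement, here is a brute-force $k$-SUM search in Python; it runs in $O(m^k)$ time and is purely illustrative, far from the Wagner-style algorithms the abstract refers to.

```python
from itertools import combinations

def k_sum(values, k):
    """Brute-force k-SUM: return the indices of k elements summing to 0,
    or None if no such subset exists."""
    for idxs in combinations(range(len(values)), k):
        if sum(values[i] for i in idxs) == 0:
            return idxs
    return None

print(k_sum([1, -3, 5, -2, 7], k=3))   # -> (1, 2, 3), since -3 + 5 - 2 == 0
```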