In this paper, we study the average-case complexity of the Unique Games problem. We propose a natural semi-random model, in which a Unique Games instance is generated in several steps. First an adversary selects a completely satisfiable instance of Unique Games, then she chooses an $\varepsilon$-fraction of all edges, and finally replaces (corrupts) the constraints corresponding to these edges with new constraints. If all steps are adversarial, the adversary can obtain any $(1-\varepsilon)$-satisfiable instance, so the problem is as hard as in the worst case. In our semi-random model, one of the steps is random, and all other steps are adversarial. We show that known algorithms for Unique Games (in particular, all algorithms that use the standard SDP relaxation) fail to solve semi-random instances of Unique Games. We present an algorithm that with high probability finds a solution satisfying a $(1-\delta)$ fraction of all constraints in semi-random instances (we require that the average degree of the graph is $\Omega(\log k)$). To this end, we consider a new non-standard SDP program for Unique Games, which is not a relaxation of the problem, and show how to analyze it. We present a new rounding scheme that simultaneously uses SDP and LP solutions, which we believe is of independent interest. Our result holds only for $\varepsilon$ less than some absolute constant. We prove that if $\varepsilon > 1/2$, then the problem is hard in one of the models; this result assumes the 2-to-2 conjecture. Finally, we study semi-random instances of Unique Games that are at most $(1-\varepsilon)$-satisfiable. We present an algorithm that with high probability distinguishes between the case when the instance is a semi-random instance and the case when the instance is an (arbitrary) $(1-\delta)$-satisfiable instance if $\varepsilon > c\delta$.
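As a concrete reference point, here is a minimal Python sketch of one variant of the semi-random model described above (random corrupted edges with random replacement constraints; in the adversarial variants one or both of these choices is made by the adversary). An edge $(u,v)$ with permutation $\pi$ is satisfied by an assignment $x$ when $\pi(x_u) = x_v$; all function names are illustrative.

    import random

    def planted_instance(edges, k, seed=0):
        """Fully satisfiable Unique Games instance: fix a hidden labeling x,
        then give each edge (u, v) a random permutation pi with
        pi[x[u]] == x[v], so x satisfies every constraint."""
        rng = random.Random(seed)
        nodes = {u for e in edges for u in e}
        x = {u: rng.randrange(k) for u in nodes}      # hidden planted assignment
        constraints = {}
        for (u, v) in edges:
            pi = list(range(k))
            rng.shuffle(pi)
            i = pi.index(x[v])
            pi[i], pi[x[u]] = pi[x[u]], pi[i]         # force pi[x[u]] == x[v]
            constraints[(u, v)] = pi
        return x, constraints

    def corrupt_random_edges(constraints, eps, k, seed=1):
        """Corruption step (variant with random corrupted edges): replace the
        constraints on a random eps-fraction of edges with fresh random
        permutations; in the adversarial variant the replacements would be
        chosen by the adversary instead."""
        rng = random.Random(seed)
        out = dict(constraints)
        for e in rng.sample(list(constraints), int(eps * len(constraints))):
            pi = list(range(k))
            rng.shuffle(pi)
            out[e] = pi
        return out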
In this note we improve a recent result by Arora, Khot, Kolla, Steurer, Tulsiani, and Vishnoi on solving the Unique Games problem on expanders. Given a $(1-\varepsilon)$-satisfiable instance of Unique Games with the constraint graph $G$, our algorithm finds an assignment satisfying at least a $1 - C\varepsilon/h_G$ fraction of all constraints if $\varepsilon < c\lambda_G$, where $h_G$ is the edge expansion of $G$, $\lambda_G$ is the second smallest eigenvalue of the Laplacian of $G$, and $C$ and $c$ are some absolute constants.
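For intuition, both graph parameters in this guarantee can be computed directly on small graphs. The following brute-force Python sketch (illustration only; it normalizes the edge expansion $h_G$ by $|S|$, one common convention) makes the statement concrete.

    import itertools
    import numpy as np

    def lambda_and_h(adj):
        """For a small undirected graph given by its adjacency matrix, return
        lambda_G, the second smallest eigenvalue of the Laplacian L = D - A,
        and h_G, the edge expansion min over cuts of e(S, V-S)/|S| with
        |S| <= n/2 (exponential time; illustration only)."""
        A = np.asarray(adj, dtype=float)
        n = len(A)
        L = np.diag(A.sum(axis=1)) - A
        lam = np.sort(np.linalg.eigvalsh(L))[1]
        h = min(
            sum(A[i][j] for i in S for j in range(n) if j not in S) / len(S)
            for r in range(1, n // 2 + 1)
            for S in itertools.combinations(range(n), r)
        )
        return lam, h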
We show how two techniques from statistical physics can be adapted to solve a variant of the notorious Unique Games problem, potentially opening new avenues towards the Unique Games Conjecture. The variant, which we call Count Unique Games, is a promise problem in which the yes case guarantees a certain number of highly satisfiable assignments to the Unique Games instance. In the standard Unique Games problem, the yes case only guarantees at least one such assignment. We exhibit efficient algorithms for Count Unique Games based on approximating a suitable partition function for the Unique Games instance via (i) a zero-free region and polynomial interpolation, and (ii) the cluster expansion. We also show that a modest improvement to the parameters for which we give results would refute the Unique Games Conjecture.
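For concreteness, the object both techniques approximate is a partition function over assignments to the Unique Games instance. A minimal brute-force Python sketch (exponential time by design, with illustrative names) of that object:

    import itertools
    import math

    def partition_function(n, k, constraints, beta):
        """Brute-force Z(beta) = sum over assignments x in [k]^n of
        exp(-beta * #violated(x)), where constraints maps an edge (u, v)
        to a permutation pi and x violates it when pi[x[u]] != x[v].
        The algorithms in the paper approximate log Z efficiently (via a
        zero-free region plus polynomial interpolation, or via the cluster
        expansion) instead of enumerating as done here."""
        Z = 0.0
        for x in itertools.product(range(k), repeat=n):
            violated = sum(1 for (u, v), pi in constraints.items()
                           if pi[x[u]] != x[v])
            Z += math.exp(-beta * violated)
        return Z

At large $\beta$, highly satisfying assignments dominate $Z(\beta)$, which is why counting guarantees on such assignments translate into structure that the approximation methods can exploit.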
We present a new dynamic matching sparsification scheme. From this scheme we derive a framework for dynamically rounding fractional matchings against \emph{adaptive adversaries}. Plugging known dynamic fractional matching algorithms into our framework, we obtain numerous randomized dynamic matching algorithms that work against adaptive adversaries (the first such algorithms, as all previous randomized algorithms for this problem assumed an \emph{oblivious} adversary). In particular, for any constant $\epsilon>0$, our framework yields $(2+\epsilon)$-approximate algorithms with constant update time or polylog worst-case update time, as well as $(2-\delta)$-approximate algorithms in bipartite graphs with arbitrarily small polynomial update time, with all of these algorithms' guarantees holding against adaptive adversaries. All these results achieve \emph{polynomially} better update-time-to-approximation trade-offs than previously known to be achievable against adaptive adversaries.
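As a toy, static analogue of the sparsify-then-round idea (not the paper's dynamic scheme, which must additionally withstand an adaptive adversary), one can sample a sparse subgraph guided by the fractional values and round inside it. A hedged Python sketch with illustrative names:

    import random

    def sparsify_and_round(frac, boost=3.0, seed=0):
        """frac: dict mapping edge (u, v) -> fractional value x_e in [0, 1],
        with the x_e summing to at most 1 around each vertex.  Sample each
        edge independently with probability min(1, boost * x_e) to get a
        sparse subgraph, then greedily extract a maximal matching from it,
        preferring edges with larger fractional value."""
        rng = random.Random(seed)
        sample = [e for e, x in frac.items() if rng.random() < min(1.0, boost * x)]
        sample.sort(key=lambda e: -frac[e])
        matched, matching = set(), []
        for (u, v) in sample:
            if u not in matched and v not in matched:
                matching.append((u, v))
                matched.update((u, v))
        return matching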
Motivated by storage applications, we study the following data structure problem: an encoder wishes to store a collection of jointly distributed files $\overline{X}:=(X_1,X_2,\ldots,X_n) \sim \mu$ which are \emph{correlated} ($H_\mu(\overline{X}) \ll \sum_i H_\mu(X_i)$), using as little (expected) memory as possible, such that each individual file $X_i$ can be recovered quickly with few (ideally constant) memory accesses. In the case of independent random files, a dramatic result by Pătrașcu (FOCS'08) and subsequently by Dodis, Pătrașcu, and Thorup (STOC'10) shows that it is possible to store $\overline{X}$ using just a \emph{constant} number of extra bits beyond the information-theoretic minimum space, while at the same time decoding each $X_i$ in constant time. However, in the (realistic) case where the files are correlated, much weaker results are known, requiring at least $\Omega(n/\mathrm{poly}\lg n)$ extra bits for constant decoding time, even for simple joint distributions $\mu$. We focus on the natural case of compressing \emph{Markov chains}, i.e., storing a length-$n$ random walk on any (possibly directed) graph $G$. Denoting by $\kappa(G,n)$ the number of length-$n$ walks on $G$, we show that there is a succinct data structure storing a random walk using $\lg_2 \kappa(G,n) + O(\lg n)$ bits of space, such that any vertex along the walk can be decoded in $O(1)$ time on a word-RAM. For the harder task of matching the \emph{point-wise} optimal space of the walk, i.e., the empirical entropy $\sum_{i=1}^{n-1} \lg(\deg(v_i))$, we present a data structure with $O(1)$ extra bits at the price of $O(\lg n)$ decoding time, and we show that any improvement on this would lead to an improved solution to the long-standing Dictionary problem. All of our data structures support the \emph{online} version of the problem with constant update and query time.
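A minimal Python sketch of the encoding idea behind the point-wise bound, under the simplifying assumption that $G$ is given as sorted adjacency lists: each step of the walk is recorded as the rank of the next vertex among the current vertex's neighbours, which costs about $\lg(\deg(v_i))$ bits per step under a variable-length code. The replay decoder below takes $O(i)$ time to recover position $i$; the paper's data structures add the machinery needed for $O(\lg n)$ or $O(1)$ decoding.

    def encode_walk(G, walk):
        """G: dict mapping vertex -> sorted list of out-neighbours.
        Store the walk as its start vertex plus, for each step, the rank of
        walk[i+1] in the adjacency list of walk[i]; written with a
        variable-length code, step i costs about lg(deg(v_i)) bits, matching
        the empirical entropy sum_i lg(deg(v_i)) up to lower-order terms."""
        ranks = [G[u].index(v) for u, v in zip(walk, walk[1:])]
        return walk[0], ranks

    def decode_vertex(G, encoded, i):
        """Recover walk[i] by replaying the first i ranks (O(i) time; fast
        random access is what the paper's structures provide on top)."""
        u, ranks = encoded
        for r in ranks[:i]:
            u = G[u][r]
        return u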
Designing dynamic graph algorithms against an adaptive adversary is a major goal in the field of dynamic graph algorithms. While a few such algorithms are known for spanning trees, matchings, and single-source shortest paths, very little was known for an important primitive like graph sparsifiers. The challenge is how to approximately preserve so much information about the graph (e.g., all-pairs distances and all cuts) without revealing the algorithm's underlying randomness to the adaptive adversary. In this paper we present the first non-trivial efficient adaptive algorithms for maintaining spanners and cut sparsifiers. These algorithms in turn imply improvements over existing algorithms for other problems. Our first algorithm maintains a polylog$(n)$-spanner of size $\tilde{O}(n)$ in polylog$(n)$ amortized update time. The second algorithm maintains an $O(k)$-approximate cut sparsifier of size $\tilde{O}(n)$ in $\tilde{O}(n^{1/k})$ amortized update time for any $k \ge 1$, which is polylog$(n)$ time when $k = \log(n)$. The third algorithm maintains a polylog$(n)$-approximate spectral sparsifier in polylog$(n)$ amortized update time. The amortized update times of these algorithms can be made worst-case by paying some sub-polynomial factors. Prior to our result, there were near-optimal algorithms against oblivious adversaries (e.g., Baswana et al. [TALG12] and Abraham et al. [FOCS16]), but the only non-trivial adaptive dynamic algorithm requires $O(n)$ amortized update time to maintain $3$- and $5$-spanners of size $O(n^{1+1/2})$ and $O(n^{1+1/3})$, respectively [Ausiello et al. ESA05]. Our results are based on two novel techniques. The first technique is a generic black-box reduction that allows us to assume that the graph undergoes only edge deletions and, more importantly, remains an expander with almost-uniform degree. The second technique we call proactive resampling. [...]
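For background, the static object being maintained here can be built by the classical greedy spanner algorithm of Althöfer et al.; the Python sketch below is only that static baseline (not the paper's dynamic method) and shows the stretch/size trade-off behind the spanner guarantees quoted above.

    from collections import deque

    def greedy_spanner(n, edges, t):
        """Greedy (2t-1)-spanner (Althofer et al.): scan the edges and keep
        (u, v) only if u and v are currently more than 2t-1 hops apart in
        the spanner built so far; the result has stretch 2t-1 and
        O(n^{1+1/t}) edges."""
        adj = {u: set() for u in range(n)}
        kept = []
        for (u, v) in edges:
            if bounded_dist(adj, u, v, 2 * t - 1) > 2 * t - 1:
                adj[u].add(v)
                adj[v].add(u)
                kept.append((u, v))
        return kept

    def bounded_dist(adj, s, g, limit):
        """Hop distance from s to g via BFS, truncated at limit."""
        dist, q = {s: 0}, deque([s])
        while q:
            u = q.popleft()
            if u == g:
                return dist[u]
            if dist[u] < limit:
                for w in adj[u]:
                    if w not in dist:
                        dist[w] = dist[u] + 1
                        q.append(w)
        return limit + 1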