
A PRG for Lipschitz Functions of Polynomials with Applications to Sparsest Cut

Added by Raghu Meka
Publication date: 2012
Language: English





We give improved pseudorandom generators (PRGs) for Lipschitz functions of low-degree polynomials over the hypercube. These are functions of the form $\psi(P(x))$, where $P$ is a low-degree polynomial and $\psi$ is a function with small Lipschitz constant. PRGs for smooth functions of low-degree polynomials have received a lot of attention recently and play an important role in constructing PRGs for the natural class of polynomial threshold functions. In spite of this progress, no nontrivial PRGs were known for fooling Lipschitz functions of degree-$O(\log n)$ polynomials, even with constant error. In this work, we give the first such generator, obtaining a seed length of $(\log n)\cdot\tilde{O}(d^2/\epsilon^2)$ for fooling degree-$d$ polynomials with error $\epsilon$; previous generators had a seed length with an exponential dependence on the degree. We use our PRG to obtain better integrality-gap instances for sparsest cut, a fundamental problem in graph theory with many applications in graph optimization. We give an instance of uniform sparsest cut for which a powerful semidefinite programming (SDP) relaxation, first introduced by Goemans and Linial and studied in the seminal work of Arora, Rao and Vazirani, has an integrality gap of $\exp(\Omega(\sqrt{\log\log n}))$. Understanding the performance of the Goemans-Linial SDP for uniform sparsest cut is an important open problem in approximation algorithms and metric embeddings, and our work gives a near-exponential improvement over previous lower bounds, which achieved a gap of $\Omega(\log\log n)$.
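To fix notation, "fooling" here carries its standard meaning; a minimal formulation (ours, not quoted from the paper, with $P$ suitably normalized, e.g. to unit variance) is: a generator $G:\{0,1\}^r \rightarrow \{-1,1\}^n$ with seed length $r$ $\epsilon$-fools Lipschitz functions of degree-$d$ polynomials if, for every degree-$d$ polynomial $P$ and every $1$-Lipschitz function $\psi:\mathbb{R}\rightarrow\mathbb{R}$,

$$\Big|\, \mathbb{E}_{x \sim \{-1,1\}^n}\big[\psi(P(x))\big] - \mathbb{E}_{s \sim \{0,1\}^r}\big[\psi(P(G(s)))\big] \,\Big| \le \epsilon,$$

where $x$ is uniform over the hypercube and $s$ is a uniform random seed.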



Related research

In this work, we establish lower bounds against memory-bounded algorithms for distinguishing between natural pairs of related distributions from samples that arrive in a streaming setting. In our first result, we show that any algorithm that distinguishes between the uniform distribution on $\{0,1\}^n$ and the uniform distribution on an $n/2$-dimensional linear subspace of $\{0,1\}^n$ with non-negligible advantage needs $2^{\Omega(n)}$ samples or $\Omega(n^2)$ memory. Our second result applies to distinguishing outputs of Goldreich's local pseudorandom generator from the uniform distribution on the output domain. Specifically, Goldreich's pseudorandom generator $G$ fixes a predicate $P:\{0,1\}^k \rightarrow \{0,1\}$ and a collection of subsets $S_1, S_2, \ldots, S_m \subseteq [n]$ of size $k$. For any seed $x \in \{0,1\}^n$, it outputs $P(x_{S_1}), P(x_{S_2}), \ldots, P(x_{S_m})$, where $x_{S_i}$ is the projection of $x$ to the coordinates in $S_i$. We prove that whenever $P$ is $t$-resilient (all non-zero Fourier coefficients of $(-1)^P$ are of degree $t$ or higher), then no algorithm with less than $n^{\epsilon}$ memory can distinguish the output of $G$ from the uniform distribution on $\{0,1\}^m$ with a large inverse-polynomial advantage, for stretch $m \le \left(\frac{n}{t}\right)^{\frac{(1-\epsilon)}{36}\cdot t}$ (barring some restrictions on $k$). The lower bound holds in the streaming model, where at each time step $i$, $S_i \subseteq [n]$ is a randomly chosen (ordered) subset of size $k$ and the distinguisher sees either $P(x_{S_i})$ or a uniformly random bit, along with $S_i$. Our proof builds on the recently developed machinery for proving time-space trade-offs (Raz 2016 and follow-ups) for search/learning problems.
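The generator itself is simple to state. Below is a minimal Python sketch of Goldreich's construction as described above; the 5-ary predicate is purely illustrative (it is not claimed to meet the resilience requirements of the theorem).

```python
import random

def goldreich_prg(x, subsets, predicate):
    """Evaluate Goldreich's local PRG: output P(x_{S_1}), ..., P(x_{S_m}).

    x         -- seed, a list of n bits (0/1)
    subsets   -- list of ordered index tuples S_i, each of size k
    predicate -- function mapping a k-bit tuple to a single bit
    """
    return [predicate(tuple(x[j] for j in S)) for S in subsets]

def pred(bits):
    # XOR of three coordinates plus an AND of two more -- an illustrative
    # 5-ary predicate, not necessarily one with the resilience the paper needs.
    return (bits[0] ^ bits[1] ^ bits[2]) ^ (bits[3] & bits[4])

n, k, m = 20, 5, 8
x = [random.randint(0, 1) for _ in range(n)]
subsets = [tuple(random.sample(range(n), k)) for _ in range(m)]
print(goldreich_prg(x, subsets, pred))
```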
Let $G = (V,w)$ be a weighted undirected graph with $m$ edges. The cut dimension of $G$ is the dimension of the span of the characteristic vectors of the minimum cuts of $G$, viewed as vectors in $\{0,1\}^m$. For every $n \ge 2$ we show that the cut dimension of an $n$-vertex graph is at most $2n-3$, and construct graphs realizing this bound. The cut dimension was recently defined by Graur et al. \cite{GPRW20}, who show that the maximum cut dimension of an $n$-vertex graph is a lower bound on the number of cut queries needed by a deterministic algorithm to solve the minimum cut problem on $n$-vertex graphs. For every $n \ge 2$, Graur et al. exhibit a graph on $n$ vertices with cut dimension at least $3n/2 - 2$, giving the first lower bound larger than $n$ on the deterministic cut-query complexity of computing mincut. We observe that the cut dimension is even a lower bound on the number of \emph{linear} queries needed by a deterministic algorithm to solve mincut, where a linear query can ask any vector $x \in \mathbb{R}^{\binom{n}{2}}$ and receives the answer $w^T x$. Our results thus show a lower bound of $2n-3$ on the number of linear queries needed by a deterministic algorithm to solve minimum cut on $n$-vertex graphs, and imply that one cannot show a lower bound larger than this via the cut dimension. We further introduce a generalization of the cut dimension which we call the $\ell_1$-approximate cut dimension. The $\ell_1$-approximate cut dimension is also a lower bound on the number of linear queries needed by a deterministic algorithm to compute minimum cut. It is always at least as large as the cut dimension, and we construct an infinite family of graphs on $n=3k+1$ vertices with $\ell_1$-approximate cut dimension $2n-2$, showing that it can be strictly larger than the cut dimension.
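For intuition, the cut dimension is directly computable by brute force on small graphs. The Python sketch below (our illustration, not the paper's method) enumerates all cuts, keeps the minimum-weight ones, and returns the rank of their characteristic vectors.

```python
from itertools import combinations
import numpy as np

def cut_dimension(n, weights):
    """Brute-force the cut dimension of a small weighted graph on vertices 0..n-1.

    weights -- dict mapping frozenset({u, v}) to a positive edge weight.
    Enumerates all 2^(n-1) - 1 cuts, so only feasible for small n.
    """
    edges = sorted(tuple(sorted(e)) for e in weights)
    min_cut_vecs, best = [], float("inf")
    for r in range(1, n):
        for S in combinations(range(1, n), r):  # vertex 0 stays on one fixed side
            S = set(S)
            vec = [1.0 if (u in S) != (v in S) else 0.0 for u, v in edges]
            w = sum(weights[frozenset(e)] * x for e, x in zip(edges, vec))
            if w < best - 1e-9:
                best, min_cut_vecs = w, [vec]
            elif abs(w - best) <= 1e-9:
                min_cut_vecs.append(vec)
    return int(np.linalg.matrix_rank(np.array(min_cut_vecs)))

# A 4-cycle with unit weights has six minimum cuts, each of weight 2.
w = {frozenset(e): 1.0 for e in [(0, 1), (1, 2), (2, 3), (3, 0)]}
print(cut_dimension(4, w))  # 4 here, within the 2n-3 = 5 bound
```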
We study the NP-hard \textsc{$k$-Sparsest Cut} problem ($k$SC), in which, given an undirected graph $G = (V, E)$ and a parameter $k$, the objective is to partition the vertex set into $k$ subsets whose maximum edge expansion is minimized. Here, the edge expansion of a subset $S \subseteq V$ is defined as the sum of the weights of the edges exiting $S$ divided by the number of vertices in $S$. Another problem that has been investigated is the \textsc{$k$-Small-Set Expansion} problem ($k$SSE), which aims to find a subset with minimum edge expansion subject to a restriction on the size of the subset. We extend previous studies of $k$SC and $k$SSE by inspecting their parameterized complexity. On the positive side, we present FPT algorithms for both $k$SSE and 2SC: the first is parameterized by the treewidth of the input graph and uses exponential space, while the second is parameterized by the vertex cover number of the input graph and uses polynomial space. Moreover, for the unweighted version of the $k$SC problem with fixed $k \geq 2$, we propose two FPT algorithms, parameterized by the treewidth and by the vertex cover number of the input graph, respectively. We also propose a randomized FPT algorithm for $k$SSE parameterized by $k$ and the maximum degree of the input graph combined; it can be derandomized efficiently. On the negative side, we first prove that for every fixed pair of integers $k, \tau \geq 3$, the problem $k$SC is NP-hard for graphs with vertex cover number at most $\tau$. We also show that $k$SC is W[1]-hard when parameterized by the treewidth of the input graph and the number $k$ of components combined, using a reduction from \textsc{Unary Bin Packing}. Furthermore, we prove that $k$SC remains NP-hard for graphs with maximum degree three and also for graphs with degeneracy two. Finally, we prove that the unweighted $k$SSE is W[1]-hard for the parameter $k$.
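The edge expansion used by both problems is a one-line computation; here is a direct Python transcription of the definition above (an illustrative helper, not from the paper).

```python
def edge_expansion(S, edges):
    """Edge expansion of S: total weight of edges leaving S, divided by |S|.

    S     -- non-empty set of vertices
    edges -- iterable of (u, v, weight) triples of an undirected graph
    """
    if not S:
        raise ValueError("S must be non-empty")
    out = sum(w for u, v, w in edges if (u in S) != (v in S))
    return out / len(S)

# Path 0-1-2-3 with unit weights: the set {0, 1} has one exiting edge,
# so its expansion is 1/2.
edges = [(0, 1, 1.0), (1, 2, 1.0), (2, 3, 1.0)]
print(edge_expansion({0, 1}, edges))  # 0.5
```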
Michael Lampis (2021)
A stable cut of a graph is a cut whose weight cannot be increased by changing the side of a single vertex. Equivalently, a cut is stable if all vertices have the (weighted) majority of their neighbors on the other side. In this paper we study Min Stable Cut, the problem of finding a stable cut of minimum weight, which is closely related to the Price of Anarchy of the Max Cut game. Since this problem is NP-hard, we study its complexity on graphs of low treewidth, low degree, or both. We show that the problem is weakly NP-hard on severely restricted trees, so bounding treewidth alone cannot make it tractable. We match this with a pseudo-polynomial DP algorithm running in time $(\Delta\cdot W)^{O(tw)} n^{O(1)}$, where $tw$ is the treewidth, $\Delta$ the maximum degree, and $W$ the maximum weight. On the other hand, bounding $\Delta$ is also not enough, as the problem is NP-hard for unweighted graphs of bounded degree. We therefore parameterize Min Stable Cut by $tw+\Delta$ and obtain an FPT algorithm running in time $2^{O(\Delta\cdot tw)}(n+\log W)^{O(1)}$. Our main result is a reduction showing that both aforementioned algorithms are essentially optimal, even if we replace treewidth by pathwidth: if there exists an algorithm running in time $(nW)^{o(pw)}$ or $2^{o(\Delta\cdot pw)}(n+\log W)^{O(1)}$, then the ETH is false. Complementing this, we show that we can obtain an FPT approximation scheme parameterized by treewidth if we consider almost-stable solutions. Motivated by these mostly negative results, we consider Unweighted Min Stable Cut. Here our results already imply a much faster exact algorithm running in time $\Delta^{O(tw)} n^{O(1)}$. We show that this is also probably essentially optimal: an algorithm running in time $n^{o(pw)}$ would contradict the ETH.
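Checking stability of a given cut is straightforward from the definition above. The Python sketch below (an illustration; ties are counted as stable, since switching the vertex would not increase the cut weight) verifies that every vertex has at least half of its incident weight crossing the cut.

```python
def is_stable_cut(side, adj):
    """Check whether a cut is stable: no vertex can increase the cut weight
    by switching sides, i.e. each vertex has at least half of its incident
    weight crossing the cut.

    side -- dict vertex -> 0/1, the side of each vertex
    adj  -- dict vertex -> list of (neighbor, weight) pairs (undirected)
    """
    for v, nbrs in adj.items():
        crossing = sum(w for u, w in nbrs if side[u] != side[v])
        total = sum(w for _, w in nbrs)
        if 2 * crossing < total:  # weighted majority on v's own side: unstable
            return False
    return True

# Triangle with unit weights: one vertex against the other two is stable,
# since every vertex has at least half its incident weight crossing.
adj = {0: [(1, 1.0), (2, 1.0)], 1: [(0, 1.0), (2, 1.0)], 2: [(0, 1.0), (1, 1.0)]}
print(is_stable_cut({0: 0, 1: 1, 2: 1}, adj))  # True
```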
In analogy with the regularity lemma of Szemerédi, regularity lemmas for polynomials shown by Green and Tao (Contrib. Discrete Math. 2009) and by Kaufman and Lovett (FOCS 2008) modify a given collection of polynomials $\mathcal{F} = \{P_1,\ldots,P_m\}$ into a new collection $\mathcal{F}'$ so that the polynomials in $\mathcal{F}'$ are pseudorandom. These lemmas have various applications, such as (special cases of) Reed-Muller testing and worst-case to average-case reductions for polynomials. However, the transformation from $\mathcal{F}$ to $\mathcal{F}'$ is not algorithmic for either regularity lemma. We define new notions of regularity for polynomials, which are analogous to the above but allow for an efficient algorithm to compute the pseudorandom collection $\mathcal{F}'$. In particular, when the field is of high characteristic, we can, in polynomial time, refine $\mathcal{F}$ into $\mathcal{F}'$ such that every nonzero linear combination of polynomials in $\mathcal{F}'$ has desirably small Gowers norm. Using the algorithmic regularity lemmas, we show that if a polynomial $P$ of degree $d$ is within (normalized) Hamming distance $1-1/|\mathbb{F}|-\epsilon$ of some unknown polynomial of degree $k$ over a prime field $\mathbb{F}$ (for $k < d < |\mathbb{F}|$), then there is an efficient algorithm for finding a degree-$k$ polynomial $Q$ which is within distance $1-1/|\mathbb{F}|-\eta$ of $P$, for some $\eta$ depending on $\epsilon$. This can be thought of as decoding the Reed-Muller code of order $k$ beyond the list-decoding radius (finding one close codeword), when the received word $P$ is itself a polynomial of degree $d$ (with $k < d < |\mathbb{F}|$). We also obtain an algorithmic version of the worst-case to average-case reductions by Kaufman and Lovett: they show that if a polynomial of degree $d$ can be weakly approximated by a polynomial of lower degree, then it can be computed exactly using a collection of polynomials of degree at most $d-1$. We give an efficient (randomized) algorithm to find this collection.
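For reference, the Gowers norm mentioned above is the standard one from additive combinatorics (the paper's normalization may differ in details): for $f:\mathbb{F}^n \rightarrow \mathbb{C}$ and $k \ge 1$,

$$\|f\|_{U^k}^{2^k} = \mathbb{E}_{x, h_1, \ldots, h_k \in \mathbb{F}^n} \prod_{S \subseteq [k]} \mathcal{C}^{|S|} f\Big(x + \sum_{i \in S} h_i\Big),$$

where $\mathcal{C}$ denotes complex conjugation. For a polynomial $Q$ over $\mathbb{F}$ one takes $f = e(Q)$ with $e(a) = e^{2\pi i a/|\mathbb{F}|}$, so "small Gowers norm for every nonzero linear combination" is the pseudorandomness guarantee referred to above.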
