
Sum-of-Squares meets Nash: Optimal Lower Bounds for Finding any Equilibrium

Posted by: Pravesh K Kothari
Publication date: 2018
Research field: Informatics Engineering
Paper language: English





Several works have shown unconditional hardness (via integrality gaps) of computing equilibria using strong hierarchies of convex relaxations. Such results, however, only apply to the problem of computing equilibria that optimize a certain objective function and not to the (arguably more fundamental) task of finding \emph{any} equilibrium. We present an algorithmic model based on the sum-of-squares (SoS) hierarchy that allows escaping this inherent limitation of integrality gaps. In this model, algorithms access the input game only through a relaxed solution to the natural SoS relaxation for computing equilibria. They can then adaptively construct a list of candidate solutions and invoke a verification oracle to check if any candidate on the list is a solution. This model captures most well-studied approximation algorithms, such as those for Max-Cut, Sparsest Cut, and Unique Games. The state-of-the-art algorithms for computing exact and approximate equilibria in two-player, $n$-strategy games are captured in this model and require that at least one of i) the size (~ running time) of the SoS relaxation or ii) the size of the list of candidates be at least $2^{\Omega(n)}$ and $n^{\Omega(\log n)}$, respectively. Our main result shows a lower bound that matches these upper bounds up to constant factors in the exponent. This can be interpreted as an unconditional confirmation, in our restricted algorithmic framework, of Rubinstein's recent conditional hardness \cite{Rub} of computing approximate equilibria. Our proof strategy involves constructing a family of games that all share a common sum-of-squares solution, but every (approximate) equilibrium of one game is far from every (approximate) equilibrium of any other game in the family.
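To make the guess-and-verify step of this model concrete, here is a minimal, self-contained sketch of a verification oracle for two-player games: it checks whether a candidate pair of mixed strategies is an eps-approximate Nash equilibrium. The function name, payoff matrices, and the matching-pennies example are illustrative assumptions, not the paper's construction.

```python
# Illustrative verification oracle for two-player games: check whether a
# candidate pair of mixed strategies (x, y) is an eps-approximate Nash
# equilibrium. Not the paper's construction; names and example are assumptions.
import numpy as np

def is_approx_nash(A, B, x, y, eps):
    """A, B: n x n payoff matrices for the row and column player.
    x, y: candidate mixed strategies (probability vectors).
    Returns True iff no player can gain more than eps by deviating."""
    row_payoff = x @ A @ y
    col_payoff = x @ B @ y
    best_row_deviation = np.max(A @ y)   # best pure response for the row player
    best_col_deviation = np.max(x @ B)   # best pure response for the column player
    return (best_row_deviation - row_payoff <= eps
            and best_col_deviation - col_payoff <= eps)

# Matching pennies: the uniform strategies form an exact Nash equilibrium.
A = np.array([[1.0, -1.0], [-1.0, 1.0]])
B = -A
x = y = np.array([0.5, 0.5])
print(is_approx_nash(A, B, x, y, eps=0.01))  # True
```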




Read also

We prove that for every $\epsilon>0$ and predicate $P:\{0,1\}^k\rightarrow \{0,1\}$ that supports a pairwise independent distribution, there exists an instance $\mathcal{I}$ of the $\mathsf{Max}\,P$ constraint satisfaction problem on $n$ variables such that no assignment can satisfy more than a $\tfrac{|P^{-1}(1)|}{2^k}+\epsilon$ fraction of $\mathcal{I}$'s constraints, but the degree-$\Omega(n)$ Sum of Squares semidefinite programming hierarchy cannot certify that $\mathcal{I}$ is unsatisfiable. Similar results were previously only known for weaker hierarchies.
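As a concrete illustration of the hypothesis on $P$, 3-XOR is the standard example of a predicate supporting a pairwise independent distribution: the uniform distribution over its satisfying assignments makes every pair of coordinates uniform on $\{0,1\}^2$. The short check below is only an illustrative sketch of that standard fact, not part of the construction above.

```python
# Check that 3-XOR supports a pairwise independent distribution: the uniform
# distribution on the satisfying assignments of x1 xor x2 xor x3 = 1 makes
# every pair of coordinates uniform on {0,1}^2. (Standard example, assumed for
# illustration; not taken from the instance construction above.)
from itertools import product, combinations
from collections import Counter

support = [x for x in product([0, 1], repeat=3) if x[0] ^ x[1] ^ x[2] == 1]

for i, j in combinations(range(3), 2):
    counts = Counter((x[i], x[j]) for x in support)
    # Every one of the 4 pairs occurs equally often => (x_i, x_j) is uniform.
    assert set(counts.values()) == {len(support) // 4}

print(support)  # [(0, 0, 1), (0, 1, 0), (1, 0, 0), (1, 1, 1)]
```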
The degree-$4$ Sum-of-Squares (SoS) SDP relaxation is a powerful algorithm that captures the best known polynomial-time algorithms for a broad range of problems including MaxCut, Sparsest Cut, all MaxCSPs, and tensor PCA. Despite being an explicit algorithm with relatively low computational complexity, the limits of the degree-$4$ SoS SDP are not well understood. For example, existing integrality gaps do not rule out a $(2-\varepsilon)$-algorithm for Vertex Cover or a $(0.878+\varepsilon)$-algorithm for MaxCut via degree-$4$ SoS SDPs, each of which would refute the notorious Unique Games Conjecture. We exhibit an explicit mapping from solutions of the degree-$2$ Sum-of-Squares SDP (the Goemans-Williamson SDP) to solutions of the degree-$4$ Sum-of-Squares SDP relaxation on boolean variables. By virtue of this mapping, one can lift lower bounds for the degree-$2$ SoS SDP relaxation to corresponding lower bounds for degree-$4$ SoS SDPs. We use this approach to obtain degree-$4$ SoS SDP lower bounds for MaxCut on random $d$-regular graphs, the Sherrington-Kirkpatrick model from statistical physics, and the PSD Grothendieck problem. Our constructions use the idea of pseudocalibration towards candidate SDP vectors, whereas it was previously only used to produce the candidate matrix that one would show is PSD using much technical work. In addition, we develop a different technique to bound the spectral norms of graphical matrices that arise in the context of SoS SDPs. The technique is much simpler and in many cases yields better bounds than the trace method, which was previously the sole technique for this purpose.
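For readers unfamiliar with the degree-$2$ starting point of this lifting, the following is a minimal sketch of the Goemans-Williamson (degree-$2$ SoS) SDP relaxation for MaxCut. The use of cvxpy and the 4-cycle test graph are assumptions for illustration, not part of the paper.

```python
# Minimal sketch of the degree-2 SoS (Goemans-Williamson) relaxation for MaxCut,
# the starting point of the degree-2 -> degree-4 lifting above. Uses cvxpy with
# its bundled SDP solver; illustrative only, not the paper's construction.
import cvxpy as cp
import numpy as np

def maxcut_degree2_sdp(adj):
    """Maximize (1/4) * sum_ij A_ij (1 - X_ij) subject to X PSD, diag(X) = 1."""
    n = adj.shape[0]
    X = cp.Variable((n, n), PSD=True)
    objective = cp.Maximize(cp.sum(cp.multiply(adj, 1 - X)) / 4)
    problem = cp.Problem(objective, [cp.diag(X) == 1])
    problem.solve()
    return problem.value

# 4-cycle: the graph is bipartite, so the relaxation value equals the max cut, 4.
adj = np.array([[0, 1, 0, 1],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [1, 0, 1, 0]], dtype=float)
print(round(maxcut_degree2_sdp(adj), 2))  # ~4.0
```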
In the communication problem $\mathbf{UR}$ (universal relation) [KRW95], Alice and Bob respectively receive $x$ and $y$ in $\{0,1\}^n$ with the promise that $x \neq y$. The last player to receive a message must output an index $i$ such that $x_i \neq y_i$. We prove that the randomized one-way communication complexity of this problem in the public coin model is exactly $\Theta(\min\{n, \log(1/\delta)\log^2(\tfrac{n}{\log(1/\delta)})\})$ bits for failure probability $\delta$. Our lower bound holds even under the promise that $\mathop{support}(y)\subset \mathop{support}(x)$. As a corollary, we obtain optimal lower bounds for $\ell_p$-sampling in strict turnstile streams for $0\le p < 2$, as well as for the problem of finding duplicates in a stream. Our lower bounds do not need to use large weights, and hold even if it is promised that $x\in\{0,1\}^n$ at all points in the stream. Our lower bound demonstrates that any algorithm $\mathcal{A}$ solving sampling problems in turnstile streams in low memory can be used to encode subsets of $[n]$ of certain sizes into a number of bits below the information-theoretic minimum. Our encoder makes adaptive queries to $\mathcal{A}$ throughout its execution, but does so carefully so as not to violate correctness. This is accomplished by injecting random noise into the encoder's interactions with $\mathcal{A}$, which is loosely motivated by techniques in differential privacy. Our correctness analysis involves understanding the ability of $\mathcal{A}$ to correctly answer adaptive queries which have positive but bounded mutual information with $\mathcal{A}$'s internal randomness, and may be of independent interest in the newly emerging area of adaptive data analysis with a theoretical computer science lens.
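To give a feel for how the stated bound behaves, the snippet below simply evaluates $\min\{n, \log(1/\delta)\log^2(n/\log(1/\delta))\}$ for a few failure probabilities; the $\Theta(\cdot)$ hides a constant the result does not specify, so the numbers are indicative only.

```python
# Evaluate min{n, log(1/delta) * log^2(n / log(1/delta))} for a few failure
# probabilities. The Theta(.) hides an unspecified constant, so these numbers
# are indicative only.
import math

def ur_one_way_bound(n, delta):
    t = math.log2(1 / delta)
    return min(n, t * math.log2(n / t) ** 2)

n = 2 ** 20
for delta in (0.1, 0.01, 2 ** -20):
    print(f"delta = {delta:g}: ~{ur_one_way_bound(n, delta):,.0f} bits")
```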
Given a large data matrix $A\in\mathbb{R}^{n\times n}$, we consider the problem of determining whether its entries are i.i.d. with some known marginal distribution $A_{ij}\sim P_0$, or instead $A$ contains a principal submatrix $A_{{\sf Q},{\sf Q}}$ whose entries have marginal distribution $A_{ij}\sim P_1 \neq P_0$. As a special case, the hidden (or planted) clique problem requires finding a planted clique in an otherwise uniformly random graph. Assuming unbounded computational resources, this hypothesis testing problem is statistically solvable provided $|{\sf Q}|\ge C \log n$ for a suitable constant $C$. However, despite substantial effort, no polynomial-time algorithm is known that succeeds with high probability when $|{\sf Q}| = o(\sqrt{n})$. Recently Meka and Wigderson \cite{meka2013association} proposed a method to establish lower bounds within the Sum of Squares (SOS) semidefinite hierarchy. Here we consider the degree-$4$ SOS relaxation, and study the construction of \cite{meka2013association} to prove that SOS fails unless $k\ge C\, n^{1/3}/\log n$. An argument presented by Barak implies that this lower bound cannot be substantially improved unless the witness construction is changed in the proof. Our proof uses the moments method to bound the spectrum of a certain random association scheme, i.e. a symmetric random matrix whose rows and columns are indexed by the edges of an Erdos-Renyi random graph.
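A toy generator for the planted-clique special case may help fix the parameter regimes mentioned above ($|{\sf Q}| \ge C\log n$ statistically solvable, $|{\sf Q}| = o(\sqrt{n})$ algorithmically hard). The choices of $n$, $k$, and the seed below are arbitrary illustrations, not taken from the paper.

```python
# Toy instance generator for the planted clique special case: a G(n, 1/2)
# adjacency matrix with a clique planted on a random vertex subset Q of size k.
# Parameters and seed are arbitrary illustrations, not taken from the paper.
import numpy as np

def planted_clique(n, k, seed=0):
    rng = np.random.default_rng(seed)
    upper = np.triu(rng.integers(0, 2, size=(n, n)), 1)
    A = upper + upper.T                  # symmetric 0/1 matrix, zero diagonal
    Q = rng.choice(n, size=k, replace=False)
    A[np.ix_(Q, Q)] = 1                  # plant the clique on Q
    A[Q, Q] = 0                          # keep the diagonal zero
    return A, Q

# k = 15 is well above log(400) ~ 6 (statistically visible) but below sqrt(400) = 20.
A, Q = planted_clique(n=400, k=15)
print(A.shape, sorted(Q))
```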
Raghu Meka, Avi Wigderson (2013)
Finding cliques in random graphs, and the closely related planted clique variant, where a clique of size $t$ is planted in a random $G(n,1/2)$ graph, have been the focus of substantial study in algorithm design. Despite much effort, the best known polynomial-time algorithms only solve the problem for $t = \Theta(\sqrt{n})$. Here we show that beating $\sqrt{n}$ would require substantially new algorithmic ideas, by proving a lower bound for the problem in the sum-of-squares (or Lasserre) hierarchy, the most powerful class of semidefinite programming algorithms we know of: $r$ rounds of the sum-of-squares hierarchy can only solve the planted clique problem for $t > \sqrt{n}/(C \log n)^{r^2}$. Previously, no nontrivial lower bounds were known. Our proof is formulated as a degree lower bound in the Positivstellensatz algebraic proof system, which is equivalent to the sum-of-squares hierarchy. The heart of our (average-case) lower bound is a proof that a certain random matrix derived from the input graph is (with high probability) positive semidefinite. Two ingredients play an important role in this proof. The first is the classical theory of association schemes, applied to the average and variance of that random matrix. The second is a new large deviation inequality for matrix-valued polynomials. Our new tail estimate seems to be of independent interest and may find other applications, as it generalizes both the estimates on real-valued polynomials and on sums of independent random matrices.
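The snippet below simply evaluates the round/size trade-off $t > \sqrt{n}/(C \log n)^{r^2}$ numerically; the constant $C$ is not specified in the abstract, so $C=1$ and the natural logarithm are arbitrary illustrative choices, and the output shows how quickly the guarantee weakens as $r$ grows for moderate $n$.

```python
# Evaluate the round/size trade-off t > sqrt(n) / (C log n)^(r^2) for a few r.
# C = 1 and the natural log are arbitrary illustrative choices; the abstract
# does not fix the constant, so the numbers are indicative only.
import math

def sos_clique_threshold(n, r, C=1.0):
    return math.sqrt(n) / (C * math.log(n)) ** (r ** 2)

n = 10 ** 6
for r in (1, 2, 3):
    print(f"r = {r}: t must exceed ~{sos_clique_threshold(n, r):.3g}")
```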