Graphical games are a useful framework for modeling the interactions of (selfish) agents who are connected via an underlying topology and whose behaviors influence each other. They have wide applications ranging from computer science to economics and biology. Yet, even though a player's payoff in a graphical game depends only on the actions of their direct neighbors, computing Nash equilibria, and in particular reasoning about the convergence time of natural local dynamics, can be highly challenging. In this work, we present a novel approach for classifying the complexity of Nash equilibria in graphical games by establishing a connection to local graph algorithms, a subfield of distributed computing. In particular, we observe that the equilibria of graphical games are equivalent to locally verifiable labelings (LVLs) in graphs: vertex labelings that can be verified by a constant-round local algorithm. This connection allows us to derive novel lower bounds on the convergence time to equilibrium of best-response dynamics in graphical games. Since we establish that distributed convergence can sometimes be provably slow, we also introduce and give bounds on an intuitive notion of time-constrained inefficiency of best responses. We illustrate how our results can be used in the implementation of mechanisms that ensure convergence of best responses to a Nash equilibrium. Our results thus also give insight into the convergence of strategy-proof algorithms for graphical games, which is still not well understood.
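To make these dynamics concrete, the following is a minimal, self-contained sketch (ours, not the paper's construction) of sequential best-response dynamics on a graphical game; the coordination payoff, the example graph, and the name best_response_dynamics are illustrative assumptions rather than anything taken from the work above.

```python
import random

def best_response_dynamics(graph, payoff, actions, init, max_steps=10_000):
    """Run sequential (round-robin) best-response dynamics.

    graph:   dict mapping each player to a list of its neighbors
    payoff:  function (player, own_action, {neighbor: action}) -> value
    actions: common action set (kept identical for all players for simplicity)
    init:    dict mapping each player to an initial action
    Returns (profile, True) if a pure Nash equilibrium was reached,
    otherwise (profile, False) after max_steps full passes.
    """
    profile = dict(init)
    for _ in range(max_steps):
        improved = False
        for v in graph:
            nbr_actions = {u: profile[u] for u in graph[v]}
            best_a = profile[v]
            best_u = payoff(v, best_a, nbr_actions)
            for a in actions:
                u = payoff(v, a, nbr_actions)
                if u > best_u:
                    best_a, best_u = a, u
            if best_a != profile[v]:
                profile[v] = best_a
                improved = True
        if not improved:
            # A full pass with no profitable deviation: pure Nash equilibrium.
            return profile, True
    return profile, False

# Example: coordination payoff -- one unit per neighbor playing the same action.
graph = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
coordination = lambda v, a, nbrs: sum(1 for b in nbrs.values() if b == a)
init = {v: random.choice([0, 1]) for v in graph}
print(best_response_dynamics(graph, coordination, [0, 1], init))
```

For this particular coordination payoff the dynamics are guaranteed to terminate, since the number of agreeing edges is an exact potential that increases with every profitable deviation; the lower bounds discussed above concern how slow such convergence can be in general graphical games.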
We investigate the complexity of bounding the uncertainty of graphical games, and we provide new insight into the intrinsic difficulty of computing Nash equilibria. In particular, we show that, if one adds very simple and natural additional requirements to a graphical game, the existence of Nash equilibria is no longer guaranteed, and computing an equilibrium is an intractable problem. Moreover, if stronger equilibrium conditions are required for the game, we get hardness results for the second level of the polynomial hierarchy. Our results offer a clear picture of the complexity of mixed Nash equilibria in graphical games, and answer some open research questions posed by Conitzer and Sandholm (2003).
We study the problem of checking for the existence of constrained pure Nash equilibria in a subclass of polymatrix games defined on weighted directed graphs. The payoff of a player is defined as the sum of nonnegative rational weights on incoming edges from players who picked the same strategy, augmented by a fixed integer bonus for picking a given strategy. These games capture the idea of coordination within a local neighbourhood in the absence of globally common strategies. We study the decision problem of checking whether a given set of strategy choices for a subset of the players is consistent with some pure Nash equilibrium or, alternatively, with all pure Nash equilibria. We identify the most natural tractable cases and show NP- or coNP-completeness of these problems already for unweighted DAGs.
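For concreteness, one natural way to write this payoff (notation ours, not taken from the paper): writing $N^-(i)$ for the in-neighbours of player $i$, $w_{ji} \ge 0$ for the rational weight of the edge from $j$ to $i$, and $b_i(a)$ for the integer bonus player $i$ receives for picking strategy $a$, the payoff of player $i$ under a pure strategy profile $s$ is $u_i(s) = \sum_{j \in N^-(i)\,:\, s_j = s_i} w_{ji} + b_i(s_i)$.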
We prove that computing a Nash equilibrium of a two-player ($n \times n$) game with payoffs in $[-1,1]$ is PPAD-hard (under randomized reductions) even in the smoothed analysis setting, smoothing with noise of constant magnitude. This gives a strong negative answer to conjectures of Spielman and Teng [ST06] and Cheng, Deng, and Teng [CDT09]. In contrast to prior work proving PPAD-hardness after smoothing by noise of magnitude $1/\operatorname{poly}(n)$ [CDT09], our smoothed complexity result is not proved via hardness of approximation for Nash equilibria. This is by necessity, since Nash equilibria can be approximated to constant error in quasi-polynomial time [LMM03]. Our results therefore separate smoothed complexity and hardness of approximation for Nash equilibria in two-player games. The key ingredient in our reduction is the use of a random zero-sum game as a gadget to produce two-player games which remain hard even after smoothing. Our analysis crucially shows that all Nash equilibria of random zero-sum games are far from pure (with high probability), and that this remains true even after smoothing.
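For context, a common way to formalize such a smoothed model (possibly differing in details from the exact noise distribution used in the paper) starts from a worst-case bimatrix game $(A, B)$ with entries in $[-1,1]$ and perturbs every entry independently, e.g. $\hat{A}_{ij} = A_{ij} + \epsilon^A_{ij}$ and $\hat{B}_{ij} = B_{ij} + \epsilon^B_{ij}$ with noise drawn i.i.d., say uniformly from $[-\sigma, \sigma]$; "constant magnitude" means $\sigma = \Omega(1)$, in contrast to the $\sigma = 1/\operatorname{poly}(n)$ regime of [CDT09].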
We show that the BIMATRIX game does not have a fully polynomial-time approximation scheme, unless PPAD is in P. In other words, no algorithm with time polynomial in n and 1/epsilon can compute an epsilon-approximate Nash equilibrium of an n-by-n bimatrix game, unless PPAD is in P. Instrumental to our proof, we introduce a new discrete fixed-point problem on a high-dimensional cube with a constant side length, for example an n-dimensional cube with side length 7, and show that it is PPAD-complete. Furthermore, we prove that, unless PPAD is in RP, the smoothed complexity of the Lemke-Howson algorithm, or of any algorithm for computing a Nash equilibrium of a bimatrix game, is not polynomial in n and 1/sigma under perturbations with magnitude sigma. Our result answers a major open question in the smoothed analysis of algorithms and the approximation of Nash equilibria.
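For reference, the standard additive notion used here (our restatement): mixed strategies $(x^*, y^*)$ form an $\epsilon$-approximate Nash equilibrium of the bimatrix game $(A, B)$ if ${x^*}^\top A y^* \ge x^\top A y^* - \epsilon$ for every mixed strategy $x$ of the row player and ${x^*}^\top B y^* \ge {x^*}^\top B y - \epsilon$ for every mixed strategy $y$ of the column player. A fully polynomial-time approximation scheme would compute such a pair in time polynomial in $n$ and $1/\epsilon$, which is exactly what the result above rules out unless PPAD is in P.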
We investigate the computation of equilibria in extensive-form games where ex ante correlation is possible, focusing on correlated equilibria requiring the least amount of communication between the players and the mediator. Motivated by the hardness results on the computation of normal-form correlated equilibria, we introduce the notion of normal-form coarse correlated equilibrium, extending the definition of coarse correlated equilibrium to sequential games. We show that, in two-player games without chance moves, an optimal (e.g., social-welfare-maximizing) normal-form coarse correlated equilibrium can be computed in polynomial time, and that in general multi-player games (including two-player games with chance moves) the problem is NP-hard. For the former case, we provide a polynomial-time algorithm based on the ellipsoid method and also propose a more practical one, which can be efficiently applied to problems of considerable size. Then, we discuss how our algorithm can be extended to games with chance moves and games with more than two players.
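For orientation, in a normal-form game a coarse correlated equilibrium is a distribution $\mu$ over joint action profiles such that no player benefits from committing to a fixed deviation before the profile is drawn: $\mathbb{E}_{s \sim \mu}[u_i(s)] \ge \mathbb{E}_{s \sim \mu}[u_i(a_i', s_{-i})]$ for every player $i$ and every action $a_i'$. The normal-form coarse correlated equilibrium introduced above lifts this guarantee to the normal-form representation of the extensive-form game; this gloss is ours, and the paper's formal definition should be consulted for the sequential details.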