
Computing Approximate Equilibria in Sequential Adversarial Games by Exploitability Descent

 Added by Marc Lanctot
 Publication date 2019
Language: English





In this paper, we present exploitability descent, a new algorithm to compute approximate equilibria in two-player zero-sum extensive-form games with imperfect information, by direct policy optimization against worst-case opponents. We prove that when following this optimization, the exploitability of a player's strategy converges asymptotically to zero, and hence when both players employ this optimization, the joint policies converge to a Nash equilibrium. Unlike fictitious play (XFP) and counterfactual regret minimization (CFR), our convergence result pertains to the policies being optimized rather than the average policies. Our experiments demonstrate convergence rates comparable to XFP and CFR in four benchmark games in the tabular case. Using function approximation, we find that our algorithm outperforms the tabular version in two of the games, which, to the best of our knowledge, is the first such result in imperfect-information games among this class of algorithms.
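To make the idea concrete, here is a minimal sketch of the same principle on a normal-form (matrix) zero-sum game rather than the extensive-form setting of the paper: each player takes a projected (sub)gradient step on its own payoff against an opponent that best-responds to the current policy. The function names, step-size schedule, and rock-paper-scissors example are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection of a vector onto the probability simplex."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > (css - 1))[0][-1]
    theta = (css[rho] - 1) / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

def exploitability(A, x, y):
    """Total best-response gain against (x, y); zero iff (x, y) is a Nash
    equilibrium of the zero-sum matrix game with row-player payoffs A."""
    return np.max(A @ y) + np.max(-(x @ A))

def exploitability_descent(A, steps=5000, lr=1.0):
    """Each player takes a projected (sub)gradient step on its own payoff
    against an opponent that best-responds to the *current* policy."""
    n, m = A.shape
    x, y = np.ones(n) / n, np.ones(m) / m
    for t in range(steps):
        step = lr / np.sqrt(t + 1.0)            # standard subgradient schedule
        br_col = np.eye(m)[np.argmin(x @ A)]    # column player's best response to x
        br_row = np.eye(n)[np.argmax(A @ y)]    # row player's best response to y
        grad_x = A @ br_col                     # gradient of x . A . br_col w.r.t. x
        grad_y = -(br_row @ A)                  # gradient of -(br_row . A . y) w.r.t. y
        x = project_simplex(x + step * grad_x)
        y = project_simplex(y + step * grad_y)
    return x, y

if __name__ == "__main__":
    # Rock-paper-scissors: the unique equilibrium plays each action with prob. 1/3.
    A = np.array([[0, -1, 1], [1, 0, -1], [-1, 1, 0]], dtype=float)
    x, y = exploitability_descent(A)
    print(x, y, exploitability(A, x, y))
```

In the extensive-form case the paper targets, the analogous gradient is taken with respect to the behavioral policy at each information state, either tabularly or through a function approximator, and it is the exploitability of the current (not average) policies that converges to zero.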



Related research

We investigate the computation of equilibria in extensive-form games where ex ante correlation is possible, focusing on correlated equilibria requiring the least amount of communication between the players and the mediator. Motivated by the hardness results on the computation of normal-form correlated equilibria, we introduce the notion of normal-form coarse correlated equilibrium, extending the definition of coarse correlated equilibrium to sequential games. We show that, in two-player games without chance moves, an optimal (e.g., social-welfare-maximizing) normal-form coarse correlated equilibrium can be computed in polynomial time, and that in general multi-player games (including two-player games with chance moves), the problem is NP-hard. For the former case, we provide a polynomial-time algorithm based on the ellipsoid method and also propose a more practical one, which can be efficiently applied to problems of considerable size. Then, we discuss how our algorithm can be extended to games with chance moves and games with more than two players.
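For reference, the incentive constraint being generalized is the standard coarse correlated equilibrium condition, written below in generic notation (the symbols are standard, not necessarily the paper's):

```latex
% A distribution \mu over joint plans s = (s_1, \ldots, s_n) is a (normal-form)
% coarse correlated equilibrium if no player can gain by committing to a fixed
% deviation s_i' before the mediator draws a recommendation:
\sum_{s} \mu(s)\, u_i(s) \;\ge\; \sum_{s} \mu(s)\, u_i(s_i', s_{-i})
\qquad \text{for every player } i \text{ and every deviation } s_i'.
```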
We study an information-structure design problem (a.k.a. persuasion) with a single sender and multiple receivers with actions of a priori unknown types, independently drawn from action-specific marginal distributions. As in the standard Bayesian persuasion model, the sender has access to additional information regarding the action types, which she can exploit when committing to a (noisy) signaling scheme through which she sends a private signal to each receiver. The novelty of our model is in considering the case where the receivers interact in a sequential game with imperfect information, with utilities depending on the game outcome and the realized action types. After formalizing the notions of ex ante and ex interim persuasiveness (which differ in the time at which the receivers commit to following the sender's signaling scheme), we investigate the continuous optimization problem of computing a signaling scheme which maximizes the sender's expected revenue. We show that computing an optimal ex ante persuasive signaling scheme is NP-hard when there are three or more receivers. In contrast with previous hardness results for ex interim persuasion, we show that, for games with two receivers, an optimal ex ante persuasive signaling scheme can be computed in polynomial time thanks to a novel algorithm based on the ellipsoid method which we propose.
We study a wide class of non-convex non-concave min-max games that generalizes over standard bilinear zero-sum games. In this class, players control the inputs of a smooth function whose output is being applied to a bilinear zero-sum game. This class of games is motivated by the indirect nature of the competition in Generative Adversarial Networks, where players control the parameters of a neural network while the actual competition happens between the distributions that the generator and discriminator capture. We establish theoretically that, depending on the specific instance of the problem, gradient-descent-ascent dynamics can exhibit a variety of behaviors antithetical to convergence to the game-theoretically meaningful min-max solution. Specifically, different forms of recurrent behavior (including periodicity and Poincaré recurrence) are possible, as well as convergence to spurious (non-min-max) equilibria for a positive measure of initial conditions. At the technical level, our analysis combines tools from optimization theory, game theory and dynamical systems.
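The failure mode is already visible in the textbook bilinear game f(x, y) = xy, whose unique min-max point is (0, 0); the toy snippet below (an illustration of that standard example, not of the paper's general smooth-composition class) shows simultaneous gradient-descent-ascent spiraling away from it.

```python
import numpy as np

def gda_bilinear(x0=1.0, y0=1.0, lr=0.1, steps=100):
    """Simultaneous gradient-descent-ascent on f(x, y) = x * y.
    The min-max solution is (0, 0), but each update multiplies the distance
    to the origin by sqrt(1 + lr**2), so the iterates spiral outward."""
    x, y = x0, y0
    radii = [np.hypot(x, y)]
    for _ in range(steps):
        gx, gy = y, x                       # df/dx = y, df/dy = x
        x, y = x - lr * gx, y + lr * gy     # x descends, y ascends, simultaneously
        radii.append(np.hypot(x, y))
    return radii

radii = gda_bilinear()
print(radii[0], radii[-1])  # the distance to the equilibrium grows monotonically
```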
A standard metric used to measure the approximate optimality of policies in imperfect-information games is exploitability, i.e., the performance of a policy against its worst-case opponent. However, exploitability is intractable to compute in large games, as it requires a full traversal of the game tree to calculate a best response to the given policy. We introduce a new metric, approximate exploitability, which computes an analogous quantity using an approximate best response; the approximation is obtained with search and reinforcement learning. This is a generalization of local best response, a domain-specific evaluation metric used in poker. We provide empirical results for a specific instance of the method, demonstrating that our method converges to the true exploitability in the tabular and function-approximation settings for small games. In large games, our method learns to exploit both strong and weak agents, including an AlphaZero agent.
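For concreteness, the quantities involved can be written as follows in generic notation (a sketch of standard definitions, not necessarily the paper's exact formulation):

```latex
% Exploitability of a joint policy \pi = (\pi_1, \pi_2) in a two-player
% zero-sum game (one common convention averages the two best-response values):
\mathrm{expl}(\pi) \;=\; \tfrac{1}{2}\Bigl(\max_{\pi_1'} u_1(\pi_1', \pi_2)
  \;+\; \max_{\pi_2'} u_2(\pi_1, \pi_2')\Bigr)

% Approximate exploitability replaces the exact best responses with approximate
% ones \hat{\pi}_1', \hat{\pi}_2' produced by search and reinforcement learning;
% it never exceeds the true exploitability:
\widehat{\mathrm{expl}}(\pi) \;=\; \tfrac{1}{2}\Bigl(u_1(\hat{\pi}_1', \pi_2)
  \;+\; u_2(\pi_1, \hat{\pi}_2')\Bigr) \;\le\; \mathrm{expl}(\pi)
```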
We focus on adversarial patrolling games on arbitrary graphs, where the Defender can control a mobile resource, the targets are alarmed by an alarm system, and the Attacker can observe the actions of the mobile resource of the Defender and perform different attacks exploiting multiple resources. This scenario can be modeled as a zero-sum extensive-form game in which each player can play multiple times. The game tree is exponentially large both in the size of the graph and in the number of attacking resources. We show that when the number of the Attacker's resources is unconstrained, the problem of computing the equilibrium path is NP-hard, while when the number of resources is fixed, the equilibrium path can be computed in poly-time. We provide a dynamic-programming algorithm that, given the number of the Attacker's resources, computes the equilibrium path requiring poly-time in the size of the graph and exponential time in the number of the resources. Furthermore, since in real-world scenarios it is implausible that the Defender knows the number of attacking resources, we study the robustness of the Defender's strategy when she makes a wrong guess about that number. We show that even the error of just a single resource can lead to an arbitrary inefficiency, when the inefficiency is defined as the ratio of the Defender's utilities obtained with a wrong guess and a correct guess. However, a more suitable definition of inefficiency is given by the difference of the Defender's utilities: this way, we observe that the higher the error in the estimation, the higher the loss for the Defender. Then, we investigate the performance of online algorithms when no information about the Attacker's resources is available. Finally, we resort to randomized online algorithms, showing that we can obtain a competitive factor that is twice better than the one that can be achieved by any deterministic online algorithm.
