We prove that every repeated game with countably many players, finite action sets, and tail-measurable payoffs admits an $\epsilon$-equilibrium, for every $\epsilon > 0$.
We consider normal-form games with $n$ players and two strategies for each player, where the payoffs are i.i.d. random variables with some distribution $F$, and we study the pure equilibria of the game as the number of players diverges. It is well known that, if the distribution $F$ has no atoms, the random number of pure equilibria is asymptotically Poisson$(1)$; in the presence of atoms, it diverges. For each strategy profile, we consider the (random) average payoff of the players, called the Average Social Utility (ASU). In particular, we examine the asymptotic behavior of the optimum ASU and of the ASUs associated with the best and worst pure Nash equilibria, and we show that, although these quantities are random, they converge, as $n\to\infty$, to deterministic limits.
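As an illustrative check of the atomless case (this simulation and its function names are ours, not the paper's), one can draw random binary-action games and count their pure Nash equilibria; since each profile is an equilibrium with probability $2^{-n}$ and there are $2^n$ profiles, the expected count is exactly 1 for every $n$:

```python
import itertools
import random

def count_pure_equilibria(n, payoffs):
    """Count pure Nash equilibria in an n-player, two-action game.

    payoffs[profile][i] is player i's payoff at the given action profile.
    A profile is an equilibrium if no player gains by flipping its action.
    """
    count = 0
    for profile in itertools.product((0, 1), repeat=n):
        if all(
            payoffs[profile][i]
            >= payoffs[profile[:i] + (1 - profile[i],) + profile[i + 1:]][i]
            for i in range(n)
        ):
            count += 1
    return count

def random_game(n, rng):
    """Draw i.i.d. continuous (atomless) payoffs for every profile and player."""
    return {
        profile: [rng.random() for _ in range(n)]
        for profile in itertools.product((0, 1), repeat=n)
    }

rng = random.Random(0)
n = 8
samples = [count_pure_equilibria(n, random_game(n, rng)) for _ in range(400)]
print(sum(samples) / len(samples))  # sample mean; close to the Poisson(1) mean of 1
```

The empirical distribution of `samples` also approximates Poisson$(1)$ for moderate $n$, in line with the asymptotic result quoted above.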
We address the problem of assessing the robustness of the equilibria in uncertain, multi-agent games. Specifically, we focus on generalized Nash equilibrium problems in aggregative form subject to linear coupling constraints affected by uncertainty with a possibly unknown probability distribution. Within a data-driven context, we apply the scenario approach paradigm to provide a-posteriori feasibility certificates for the entire set of generalized Nash equilibria of the game. We then show that assessing the violation probability of this set merely requires enumerating the constraints that ``shape'' it. For the class of aggregative games, this amounts to solving a feasibility problem on each active facet of the feasibility region, for which we propose a semi-decentralized algorithm. We demonstrate our theoretical results by means of an academic example.
The notion of \emph{policy regret} in online learning is a well-defined performance measure for the common scenario of adaptive adversaries, which more traditional quantities such as external regret do not take into account. We revisit the notion of policy regret and first show that there are online learning settings in which policy regret and external regret are incompatible: any sequence of play that achieves a favorable regret with respect to one definition must do poorly with respect to the other. We then focus on the game-theoretic setting where the adversary is a self-interested agent. In that setting, we show that external regret and policy regret are not in conflict and, in fact, that a wide class of algorithms can ensure a favorable regret with respect to both definitions, so long as the adversary is also using such an algorithm. We also show that the sequence of play of no-policy-regret algorithms converges to a \emph{policy equilibrium}, a new notion of equilibrium that we introduce. Relating this back to external regret, we show that coarse correlated equilibria, which no-external-regret players converge to, are a strict subset of policy equilibria. Thus, in game-theoretic settings, every sequence of play with no external regret also admits no policy regret, but the converse does not hold.
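To make the no-external-regret baseline concrete (a minimal sketch of standard multiplicative weights, not the paper's algorithms), two Hedge players in matching pennies drive the time-averaged payoff toward the game's value of 0, consistent with convergence of empirical play to the set of coarse correlated equilibria:

```python
import math
import random

def hedge_weights(losses, eta):
    """Multiplicative-weights (Hedge) distribution given cumulative losses."""
    w = [math.exp(-eta * l) for l in losses]
    s = sum(w)
    return [x / s for x in w]

# Matching pennies payoff for the row player; the column player gets the negative.
A = [[1.0, -1.0], [-1.0, 1.0]]

T, eta = 5000, 0.05
row_loss, col_loss = [0.0, 0.0], [0.0, 0.0]
rng = random.Random(1)
row_payoff_total = 0.0
for t in range(T):
    p = hedge_weights(row_loss, eta)
    q = hedge_weights(col_loss, eta)
    i = 0 if rng.random() < p[0] else 1
    j = 0 if rng.random() < q[0] else 1
    row_payoff_total += A[i][j]
    # Full-information feedback: each player updates against the full loss vector.
    for a in range(2):
        row_loss[a] += -A[a][j]  # row minimizes loss = -payoff
        col_loss[a] += A[i][a]   # column minimizes row's payoff

print(row_payoff_total / T)  # near 0, the value of matching pennies
```

Against such a self-interested, algorithmic opponent the abstract's point applies: external and policy regret need not conflict, whereas against a fully adaptive adversary they can be incompatible.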
We study a wide class of non-convex non-concave min-max games that generalizes over standard bilinear zero-sum games. In this class, players control the inputs of a smooth function whose output is fed into a bilinear zero-sum game. This class of games is motivated by the indirect nature of the competition in Generative Adversarial Networks, where players control the parameters of a neural network while the actual competition happens between the distributions that the generator and discriminator capture. We establish theoretically that, depending on the specific instance of the problem, gradient-descent-ascent dynamics can exhibit a variety of behaviors antithetical to convergence to the game-theoretically meaningful min-max solution. Specifically, different forms of recurrent behavior (including periodicity and Poincaré recurrence) are possible, as well as convergence to spurious (non-min-max) equilibria for a positive-measure set of initial conditions. At the technical level, our analysis combines tools from optimization theory, game theory, and dynamical systems.
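The failure of gradient-descent-ascent already shows up in the simplest bilinear game $f(x, y) = xy$ (this toy demonstration is ours, not the paper's general class): simultaneous GDA spirals away from the min-max solution $(0, 0)$ instead of converging to it.

```python
def gda_step(x, y, eta=0.1):
    """One simultaneous gradient-descent-ascent step on f(x, y) = x * y:
    x descends on f while y ascends on f."""
    return x - eta * y, y + eta * x

x, y = 1.0, 0.0
radii = []
for _ in range(200):
    x, y = gda_step(x, y)
    radii.append((x * x + y * y) ** 0.5)

# Each step multiplies x^2 + y^2 by (1 + eta^2), so the distance to the
# min-max solution (0, 0) grows monotonically: the iterates spiral outward.
print(radii[0], radii[-1])
```

The continuous-time limit of the same dynamics orbits $(0, 0)$ forever, the periodic behavior the abstract refers to; the discretization turns those orbits into outward spirals.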
We discuss similarities and differences between systems of many interacting players maximizing their individual payoffs and particles minimizing their interaction energy. We analyze the long-run behavior of stochastic dynamics of many interacting agents in spatial and adaptive population games. We review results concerning the effect of the number of players and the noise level on the stochastic stability of Nash equilibria. In particular, we present examples of games in which, as the number of players or the noise level increases, a population undergoes a transition between its equilibria.
Galit Ashkenazi-Golan, Janos Flesch, Arkadi Predtetchinski, Eilon Solan (2021). "Equilibria in Repeated Games with Countably Many Players and Tail-Measurable Payoffs".