
Chaos of Learning Beyond Zero-sum and Coordination via Game Decompositions

Posted by: Yixin Tao
Publication date: 2020
Research field: Informatics Engineering
Paper language: English





Machine learning processes, e.g. learning in games, can be viewed as non-linear dynamical systems. In general, such systems exhibit a wide spectrum of behaviors, ranging from stability/recurrence to the undesirable phenomenon of chaos (the butterfly effect). Chaos captures sensitivity to round-off errors and can severely affect the predictability and reproducibility of ML systems, yet the AI/ML community's understanding of it remains rudimentary, and much of it awaits exploration. Recently, Cheung and Piliouras employed a volume-expansion argument to show that Lyapunov chaos occurs in the cumulative payoff space when some popular learning algorithms, including Multiplicative Weights Update (MWU), Follow-the-Regularized-Leader (FTRL) and Optimistic MWU (OMWU), are used in several subspaces of games, e.g. zero-sum, coordination or graphical constant-sum games. It is natural to ask: can these results generalize to much broader families of games? We take a game-decomposition approach and answer the question affirmatively. Among other results, we propose a notion of matrix domination and design a linear program, and use them to characterize bimatrix games where MWU is Lyapunov chaotic almost everywhere. This family of games has positive Lebesgue measure in the bimatrix game space, indicating that chaos is a substantial issue for learning in games. For multi-player games, we present a local equivalence of volume change between general games and graphical games, which is used to perform volume and chaos analyses of MWU and OMWU in potential games.
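Since MWU is the central dynamic here, a concrete implementation helps fix ideas. The following Python snippet is a minimal sketch, not the authors' code: it iterates MWU for both players of a bimatrix game (A, B) with an illustrative step size eps, tracking the cumulative payoff vectors, which form the space in which Cheung and Piliouras measure volume expansion.

```python
import numpy as np

def mwu_bimatrix(A, B, eps=0.1, T=1000, seed=0):
    """Run Multiplicative Weights Update for both players of a bimatrix
    game (A, B). Returns the cumulative payoff vectors, the coordinates
    in which the volume-expansion (chaos) analysis is carried out."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    # Cumulative payoff vectors, one entry per pure strategy.
    p, q = rng.normal(size=m), rng.normal(size=n)
    for _ in range(T):
        # MWU/FTRL form: mixed strategy = softmax of cumulative payoffs.
        x = np.exp(eps * (p - p.max())); x /= x.sum()
        y = np.exp(eps * (q - q.max())); y /= y.sum()
        # Each pure strategy accumulates its expected payoff against
        # the opponent's current mixed strategy.
        p += A @ y
        q += B.T @ x
    return p, q

# Example: Matching Pennies, a zero-sum game in which the cumulative
# payoffs of MWU are known to spiral away from the interior equilibrium.
A = np.array([[1.0, -1.0], [-1.0, 1.0]])
p, q = mwu_bimatrix(A, -A)
```

Feeding in two nearby initial points and comparing the resulting trajectories in (p, q)-space is the simplest way to observe the exponential separation that Lyapunov chaos refers to.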


Read also

Despite the many recent practical and theoretical breakthroughs in computational game theory, equilibrium finding in extensive-form team games remains a significant challenge. While NP-hard in the worst case, there are provably efficient algorithms for certain families of team games. In particular, if the game has common external information, also known as A-loss recall -- informally, actions played by non-team members (i.e., the opposing team or nature) are either unknown to the entire team, or common knowledge within the team -- then polynomial-time algorithms exist (Kaneko and Kline, 1995). In this paper, we devise a completely new algorithm for solving team games. It uses a tree decomposition of the constraint system representing each team's strategy to reduce the number and degree of constraints required for correctness (tightness of the mathematical program). Our algorithm reduces the problem of solving team games to a linear program with at most $NW^{w+O(1)}$ nonzero entries in the constraint matrix, where $N$ is the size of the game tree, $w$ is a parameter that depends on the amount of uncommon external information, and $W$ is the treewidth of the tree decomposition. In public-action games, our program size is bounded by the tighter $\tilde O(3^t 2^{t(n-1)} NW)$ for teams of $n$ players with $t$ types each. Since our algorithm describes the polytope of correlated strategies directly, we get equilibrium finding in correlated strategies for free -- instead of, say, having to run a double-oracle algorithm. We show via experiments on a standard suite of games that our algorithm achieves state-of-the-art performance on all benchmark game classes except one. We also present, to our knowledge, the first experiments for this setting where more than one team has more than one member.
Computational advertising has been studied to design efficient marketing strategies that maximize the number of acquired customers. In an increasingly competitive market, however, a market leader (a leader) requires the acquisition of new customers as well as the retention of her loyal customers, because there often exists a competitor (a follower) who tries to attract customers away from the market leader. In this paper, we formalize a new model called the Stackelberg budget allocation game with a bipartite influence model, by extending a budget allocation problem over a bipartite graph to a Stackelberg game. To find a strong Stackelberg equilibrium, a standard solution concept of the Stackelberg game, we propose two algorithms: an approximation algorithm with provable guarantees and an efficient heuristic algorithm. In addition, for a special case where customers are disjoint, we propose an exact algorithm based on linear programming. Our experiments using real-world datasets demonstrate that our algorithms outperform a baseline algorithm even when the follower is a powerful competitor.
In this paper we introduce a novel flow representation for finite games in strategic form. This representation allows us to develop a canonical direct sum decomposition of an arbitrary game into three components, which we refer to as the potential, harmonic and nonstrategic components. We analyze natural classes of games that are induced by this decomposition, and in particular, focus on games with no harmonic component and games with no potential component. We show that the first class corresponds to the well-known potential games. We refer to the second class of games as harmonic games, and study the structural and equilibrium properties of this new class of games. Intuitively, the potential component of a game captures interactions that can equivalently be represented as a common interest game, while the harmonic part represents the conflicts between the interests of the players. We make this intuition precise by studying the properties of these two classes, and show that indeed they have quite distinct and remarkable characteristics. For instance, while finite potential games always have pure Nash equilibria, harmonic games generically never do. Moreover, we show that the nonstrategic component does not affect the equilibria of a game, but plays a fundamental role in their efficiency properties, thus decoupling the location of equilibria and their payoff-related properties. Exploiting the properties of the decomposition framework, we obtain explicit expressions for the projections of games onto the subspaces of potential and harmonic games. This enables an extension of the properties of potential and harmonic games to nearby games. We exemplify this point by showing that the set of approximate equilibria of an arbitrary game can be characterized through the equilibria of its projection onto the set of potential games.
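To fix ideas, the decomposition described above can be summarized in one line; the notation below is illustrative, not necessarily the paper's own. A standard illustration: a pure coordination game lies in the potential component, while Matching Pennies is, up to a nonstrategic part, a harmonic game.

```latex
% Illustrative notation: \mathcal{G} is the space of games with fixed
% player and strategy sets; the sum is direct (orthogonal under a
% suitable inner product on payoff functions).
\mathcal{G} \;=\; \mathcal{G}_{\mathrm{potential}} \,\oplus\, \mathcal{G}_{\mathrm{harmonic}} \,\oplus\, \mathcal{G}_{\mathrm{nonstrategic}}
```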
We study multi-agent reinforcement learning (MARL) in infinite-horizon discounted zero-sum Markov games. We focus on the practical but challenging setting of decentralized MARL, where agents make decisions without coordination by a centralized controller, based only on their own payoffs and the local actions they execute. The agents need not observe the opponent's actions or payoffs, possibly being even oblivious to the presence of the opponent, nor be aware of the zero-sum structure of the underlying game, a setting also referred to as radically uncoupled in the literature on learning in games. In this paper, we develop for the first time a radically uncoupled Q-learning dynamics that is both rational and convergent: the learning dynamics converges to the best response to the opponent's strategy when the opponent follows an asymptotically stationary strategy, and the value function estimates converge to the payoffs at a Nash equilibrium when both agents adopt the dynamics. The key challenge in this decentralized setting is the non-stationarity of the learning environment from an agent's perspective, since both her own payoffs and the system evolution depend on the actions of other agents, and each agent adapts her policies simultaneously and independently. To address this issue, we develop a two-timescale learning dynamics where each agent updates her local Q-function and value function estimates concurrently, with the latter happening at a slower timescale.
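As a caricature of such a two-timescale scheme, the sketch below (my simplification, not the paper's algorithm; the step sizes, the softmax temperature tau, and all names are illustrative) restricts to a single-state zero-sum matrix game. Each agent sees only its own realized payoff, plays a smoothed best response to its own Q-estimates, and updates a value estimate at a slower rate; in the multi-state Markov game, such slow value estimates are what the Q-updates bootstrap from.

```python
import numpy as np

def two_timescale_play(A, T=5000, tau=0.1, seed=0):
    """Caricature of radically uncoupled two-timescale learning in a
    single-state zero-sum matrix game A (row maximizes, column minimizes).
    Neither agent observes the opponent's action or payoff."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    q1, q2 = np.zeros(m), np.zeros(n)  # fast local Q-estimates
    v1, v2 = 0.0, 0.0                  # slow local value estimates
    for t in range(1, T + 1):
        alpha, beta = t ** -0.6, t ** -1.0  # fast vs. slow step sizes
        # Smoothed (softmax) best responses to own Q-estimates only.
        x = np.exp(q1 / tau); x /= x.sum()
        y = np.exp(q2 / tau); y /= y.sum()
        a, b = rng.choice(m, p=x), rng.choice(n, p=y)
        r = A[a, b]  # row's realized payoff; column receives -r
        # Fast timescale: local Q moves toward the realized payoff.
        q1[a] += alpha * (r - q1[a])
        q2[b] += alpha * (-r - q2[b])
        # Slow timescale: value estimate tracks the smoothed max of own Q.
        v1 += beta * (tau * np.log(np.sum(np.exp(q1 / tau))) - v1)
        v2 += beta * (tau * np.log(np.sum(np.exp(q2 / tau))) - v2)
    return v1, v2

# In Matching Pennies both value estimates should settle near the game
# value 0 (plus a small entropy-regularization offset).
print(two_timescale_play(np.array([[1.0, -1.0], [-1.0, 1.0]])))
```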
Zero-sum games have long guided artificial intelligence research, since they possess both a rich strategy space of best responses and a clear evaluation metric. What's more, competition is a vital mechanism in many real-world multi-agent systems capable of generating intelligent innovations: Darwinian evolution, the market economy and the AlphaZero algorithm, to name a few. In two-player zero-sum games, the challenge is usually viewed as finding Nash equilibrium strategies, safeguarding against exploitation regardless of the opponent. While this captures the intricacies of chess or Go, it avoids the notion of cooperation with co-players, a hallmark of the major transitions leading from unicellular organisms to human civilization. Beyond two players, alliance formation often confers an advantage; however, this requires trust, namely the promise of mutual cooperation in the face of incentives to defect. Successful play therefore requires adaptation to co-players rather than the pursuit of non-exploitability. Here we argue that a systematic study of many-player zero-sum games is a crucial element of artificial intelligence research. Using symmetric zero-sum matrix games, we demonstrate formally that alliance formation may be seen as a social dilemma, and empirically that naive multi-agent reinforcement learning therefore fails to form alliances. We introduce a toy model of economic competition, and show how reinforcement learning may be augmented with a peer-to-peer contract mechanism to discover and enforce alliances. Finally, we generalize our agent model to incorporate temporally-extended contracts, presenting opportunities for further work.