
State-clustering method of payoff computation in repeated multiplayer games

Published by Fang Chen
Publication date: 2021
Research field: Biology
Paper language: English





Direct reciprocity is a well-known mechanism that could explain how cooperation emerges and prevails in an evolving population. Numerous prior studies have examined the emergence of cooperation in multiplayer games, but most rely on numerical or experimental methods rather than theoretical analysis. This scarcity of theoretical work on the evolution of cooperation is due to the high complexity of calculating payoffs. In this paper, we propose a new method, the state-clustering method, to calculate long-term payoffs in repeated games. Using this method, in an $n$-player repeated game the computational complexity is reduced from $O(2^n)$ to $O(n^2)$, which makes it feasible to compute payoffs in large-scale repeated games. As an example of the method's effectiveness, we explore the evolution of cooperation in both infinitely and finitely repeated public goods games. In both cases, we find that when the synergy factor is sufficiently large, increasing the number of participants in a game is detrimental to the evolution of cooperation. Our work provides a theoretical approach to studying the evolution of cooperation in repeated multiplayer games.
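As a rough illustration of why clustering helps (a minimal sketch; the function and variable names below are assumptions for this sketch, not the paper's notation): in a public goods game, a focal player's one-round payoff depends only on its own action and on how many co-players cooperate, so the $2^{n-1}$ co-player action profiles can be grouped into just $n$ clusters.

```python
import numpy as np

def pgg_payoff(own_cooperates, k_coop_others, n, r, c=1.0):
    """One-round public goods payoff for a focal player. Depends only on the
    focal action and the number k of cooperating co-players, not on *which*
    co-players cooperate."""
    contributors = k_coop_others + (1 if own_cooperates else 0)
    share = r * c * contributors / n               # public good shared equally
    return share - (c if own_cooperates else 0.0)  # minus own contribution

n, r = 5, 3.0
# Clustered states: (own action, number of cooperating co-players).
clustered = [(a, k) for a in (True, False) for k in range(n)]
payoffs = {s: pgg_payoff(*s, n, r) for s in clustered}
print(f"{len(clustered)} clustered states vs {2**n} full action profiles")
```

Tracking a memory-1 process over such clustered states, rather than over full action profiles, is what makes a polynomial-cost long-run payoff computation plausible; the paper's own construction should be consulted for the exact procedure.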


Read also

423 - Fang Chen, Te Wu, Long Wang, 2021
Since Press and Dyson's ingenious discovery of the ZD (zero-determinant) strategy in the repeated Prisoner's Dilemma game, several studies have confirmed the existence of ZD strategies in repeated multiplayer social dilemmas. However, few studies examine the evolutionary performance of multiplayer ZD strategies, especially from a theoretical perspective. Here, we use a newly proposed state-clustering method to theoretically analyze the evolutionary dynamics of two representative ZD strategies: generous ZD strategies and extortionate ZD strategies. Apart from the competitions between the two strategies and some classical strategies, we consider two new settings for multiplayer ZD strategies: competitions in the whole ZD strategy space and competitions in the space of all memory-1 strategies. In addition, we investigate the influence of the level of generosity and extortion on the evolutionary dynamics of generous and extortionate ZD, which was commonly ignored in previous studies. Theoretical results show that players with limited generosity are at an advantage and that extortioners extorting more severely hold their ground more readily. Our results may provide new insights into better understanding the evolutionary dynamics of ZD strategies in repeated multiplayer games.
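For readers unfamiliar with ZD strategies, the sketch below reproduces the classic two-player case (Press and Dyson's "Extort-2" memory-1 strategy), not the multiplayer formulation analyzed above; the payoff values and strategy vector are the standard ones, while the helper names are assumptions of this sketch.

```python
import numpy as np

R, S, T, P = 3.0, 0.0, 5.0, 1.0            # standard prisoner's dilemma payoffs
extort2 = np.array([8/9, 1/2, 1/3, 0.0])   # coop. prob. after CC, CD, DC, DD

def long_run_payoffs(p, q):
    """Long-run payoffs of two memory-1 players from the stationary
    distribution of the 4-state Markov chain over outcomes CC, CD, DC, DD
    (assumes the chain is ergodic for the chosen strategies)."""
    q_aligned = q[[0, 2, 1, 3]]            # opponent's viewpoint swaps CD and DC
    M = np.array([[pi * qi, pi * (1 - qi), (1 - pi) * qi, (1 - pi) * (1 - qi)]
                  for pi, qi in zip(p, q_aligned)])
    w, v = np.linalg.eig(M.T)
    stat = np.real(v[:, np.argmax(np.real(w))])
    stat /= stat.sum()
    return stat @ np.array([R, S, T, P]), stat @ np.array([R, T, S, P])

q = np.random.rand(4)                      # arbitrary memory-1 opponent
sx, sy = long_run_payoffs(extort2, q)
print(sx - P, 2 * (sy - P))                # extortion relation: sx - P ≈ 2 (sy - P)
```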
In repeated interactions between individuals, we do not expect that exactly the same situation will occur from one time to another. Contrary to what is common in models of repeated games in the literature, most real situations may differ considerably and are seldom completely symmetric. The purpose of this paper is to discuss a simple model of cognitive processing in the context of a repeated interaction with varying payoffs. The interaction between players is modelled by a repeated game with random observable payoffs. Cooperation is not simply associated with a certain action but needs to be understood as a phenomenon of the behaviour in the repeated game. The players are thus faced with a more complex situation, compared to the Prisoner's Dilemma that has been widely used for investigating the conditions for cooperation in evolving populations. Still, there are robust cooperating strategies that usually evolve in a population of players. In the cooperative mode, these strategies select an action that maximizes the sum of the payoffs of the two players in each round, regardless of their own payoff; two such players maximize the expected total long-term payoff. If the opponent deviates from this scheme, the strategy invokes a punishment action, which aims at lowering the opponent's score for the rest of the (possibly infinitely) repeated game. The introduction of mistakes to the game actually pushes evolution towards more cooperative strategies even though the game becomes more difficult.
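As a toy illustration of the behaviour just described (a sketch under an assumed 3x3 random stage game, not the paper's exact model): the cooperative mode picks the joint action maximizing the two players' combined payoff for the current round, and punishment can be sketched as a minimax response that caps the opponent's best achievable payoff.

```python
import numpy as np

rng = np.random.default_rng(0)

def cooperative_joint_action(A, B):
    """Joint action (row, col) maximizing the players' combined payoff in one
    round, given the row player's matrix A and the column player's matrix B."""
    return np.unravel_index(np.argmax(A + B), A.shape)

def punishment_action(B):
    """Row action that minimizes the column player's best achievable payoff
    (a minimax-style sketch of the punishment described above)."""
    return int(np.argmin(B.max(axis=1)))

# One round of a repeated game with random observable payoffs.
A = rng.uniform(0.0, 1.0, (3, 3))
B = rng.uniform(0.0, 1.0, (3, 3))
print("cooperative joint action:", cooperative_joint_action(A, B))
print("punishment (row) action :", punishment_action(B))
```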
Multiplayer games have long been used as testbeds in artificial intelligence research, aptly referred to as the Drosophila of artificial intelligence. Traditionally, researchers have focused on using well-known games to build strong agents. This progress, however, can be better informed by characterizing games and their topological landscape. Tackling this latter question can facilitate understanding of agents and help determine what game an agent should target next as part of its training. Here, we show how network measures applied to response graphs of large-scale games enable the creation of a landscape of games, quantifying relationships between games of varying sizes and characteristics. We illustrate our findings in domains ranging from canonical games to complex empirical games capturing the performance of trained agents pitted against one another. Our results culminate in a demonstration leveraging this information to generate new and interesting games, including mixtures of empirical games synthesized from real-world games.
103 - A. Iqbal, A.H. Toor, 2002
In a two-stage repeated classical game of the prisoner's dilemma, the knowledge that both players will defect in the second stage leads the players to defect in the first stage as well. We find a quantum version of this repeated game in which the players decide to cooperate in the first stage while knowing that both will defect in the second.
The notion of policy regret in online learning is a well-defined performance measure for the common scenario of adaptive adversaries, which more traditional quantities such as external regret do not take into account. We revisit the notion of policy regret and first show that there are online learning settings in which policy regret and external regret are incompatible: any sequence of play that achieves a favorable regret with respect to one definition must do poorly with respect to the other. We then focus on the game-theoretic setting where the adversary is a self-interested agent. In that setting, we show that external regret and policy regret are not in conflict and, in fact, that a wide class of algorithms can ensure a favorable regret with respect to both definitions, so long as the adversary is also using such an algorithm. We also show that the sequence of play of no-policy-regret algorithms converges to a policy equilibrium, a new notion of equilibrium that we introduce. Relating this back to external regret, we show that coarse correlated equilibria, which no-external-regret players converge to, are a strict subset of policy equilibria. Thus, in game-theoretic settings, every sequence of play with no external regret also admits no policy regret, but the converse does not hold.
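For reference, a sketch of the two notions being contrasted, written in assumed notation (not necessarily the paper's): the losses $\ell_t$ of an adaptive adversary may depend on the learner's past actions $a_1,\dots,a_{t-1}$.

```latex
% External regret: compare against the best fixed action under the losses
% actually generated along the realized sequence of play.
R_T^{\mathrm{ext}} = \sum_{t=1}^{T} \ell_t(a_1,\dots,a_t)
  - \min_{a} \sum_{t=1}^{T} \ell_t(a_1,\dots,a_{t-1},a)

% Policy regret: compare against the counterfactual in which the fixed action
% had been played from the start, so the adversary's responses change as well.
R_T^{\mathrm{pol}} = \sum_{t=1}^{T} \ell_t(a_1,\dots,a_t)
  - \min_{a} \sum_{t=1}^{T} \ell_t(a,\dots,a)
```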