State-clustering method of payoff computation in repeated multiplayer games


Abstract

Direct reciprocity is a well-known mechanism that can explain how cooperation emerges and prevails in an evolving population. Numerous prior studies have examined the emergence of cooperation in multiplayer games, but most rely on numerical simulations or experiments rather than theoretical analysis. This scarcity of theoretical work on the evolution of cooperation stems from the high complexity of computing payoffs. In this paper, we propose a new method, the state-clustering method, for calculating long-term payoffs in repeated games. Using this method, the computational complexity of an $n$-player repeated game is reduced from $O(2^n)$ to $O(n^2)$, which makes it feasible to compute payoffs in large-scale repeated games. As an example of the method's effectiveness, we explore the evolution of cooperation in both infinitely and finitely repeated public goods games. In both cases, we find that when the synergy factor is sufficiently large, increasing the number of participants in a game is detrimental to the evolution of cooperation. Our work provides a theoretical approach to studying the evolution of cooperation in repeated multiplayer games.
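
The abstract uses the public goods game as its running example. For readers unfamiliar with that game, the minimal sketch below shows the standard one-shot payoff (contributions scaled by the synergy factor and shared equally among all players), assuming the common convention of a unit contribution cost; the function name and parameters are illustrative and the state-clustering method itself is defined in the paper body, not reproduced here.

```python
# Illustrative sketch of the standard one-shot public goods game payoff
# (not the paper's state-clustering method). Names are chosen for illustration.

def pgg_payoff(is_cooperator: bool, num_cooperators: int, n: int,
               r: float, c: float = 1.0) -> float:
    """Payoff of one player in an n-player public goods game.

    Each cooperator contributes c; the pooled contributions are multiplied
    by the synergy factor r and shared equally among all n players.
    """
    shared = r * c * num_cooperators / n
    return shared - c if is_cooperator else shared


# Example: n = 5 players, synergy factor r = 3, three cooperators.
print(pgg_payoff(True, 3, 5, 3.0))   # cooperator: 3*3/5 - 1 = 0.8
print(pgg_payoff(False, 3, 5, 3.0))  # defector:   3*3/5     = 1.8
```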
