
Evolutionary Stable Strategies in Games with Fuzzy Payoffs

Added by Haozhen Situ
Publication date: 2015
Language: English
Authors: Haozhen Situ





Evolutionarily stable strategy (ESS) is a key concept in evolutionary game theory. ESS provides an evolutionary stability criterion for biological, social and economic behaviors. In this paper, we develop a new approach to evaluating ESS in symmetric two-player games with fuzzy payoffs. In particular, every strategy is assigned a fuzzy membership degree that describes to what extent it is an ESS in the presence of uncertainty. The fuzzy set of ESS characterizes the nature of ESS. The proposed approach avoids the loss of information caused by defuzzification and handles the uncertainty of payoffs through every step of finding an ESS. We use a satisfaction function to compare fuzzy payoffs, and adopt a fuzzy decision rule to obtain the membership function of the fuzzy set of ESS. A theorem establishes the relation between fuzzy ESS and fuzzy Nash equilibrium. Numerical results illustrate that the proposed method is an appropriate generalization of ESS to games with fuzzy payoffs.
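The construction can be illustrated with a small sketch. The triangular fuzzy numbers, the possibility-style satisfaction function, and the min-based (Bellman-Zadeh) decision rule below are common choices from the fuzzy-optimization literature and are assumptions of this illustration, not necessarily the paper's exact definitions:

```python
# A minimal sketch (not the paper's exact construction): triangular fuzzy
# numbers (a, b, c), a possibility-style satisfaction function, and a
# min-based fuzzy decision rule grading how strongly a pure strategy
# satisfies the ESS/Nash-type payoff comparisons.

def poss_geq(A, B):
    """Possibility degree that triangular fuzzy number A >= B."""
    a1, b1, c1 = A
    a2, b2, c2 = B
    if b1 >= b2:          # peaks already ordered the right way
        return 1.0
    if c1 <= a2:          # supports disjoint, A entirely below B
        return 0.0
    return (c1 - a2) / ((c1 - b1) + (b2 - a2))

def ess_degree(U, i):
    """Membership degree of pure strategy i in the fuzzy set of ESS for a
    symmetric game with fuzzy payoff matrix U (U[i][j] = row payoff)."""
    deg = 1.0
    for j in range(len(U)):
        if j == i:
            continue
        stability = poss_geq(U[i][i], U[j][i])   # u(i,i) >= u(j,i)
        invasion = poss_geq(U[i][j], U[j][j])    # u(i,j) >= u(j,j)
        deg = min(deg, stability, invasion)      # Bellman-Zadeh min rule
    return deg

# Fuzzy Prisoner's Dilemma: strategy 0 = cooperate, 1 = defect.
U = [[(2.5, 3, 3.5), (-0.5, 0, 0.5)],
     [(4.5, 5, 5.5), (0.5, 1, 1.5)]]
# Here defection is fully ESS (degree 1.0) and cooperation not at all (0.0).
```

With strongly overlapping fuzzy payoffs the degrees fall strictly between 0 and 1, which is the graded notion of evolutionary stability the abstract describes.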



Related research


Evolutionary game theory is used to model the evolution of competing strategies in a population of players. Evolutionary stability of a strategy is a dynamic equilibrium in which any competing mutant strategy would be wiped out from the population. If a strategy is only weakly evolutionarily stable, a competing strategy may manage to survive within the network. Understanding the network-related factors that affect the evolutionary stability of a strategy is critical for making accurate predictions about the behaviour of that strategy in a real-world strategic decision-making environment. In this work, we evaluate the effect of network topology on the evolutionary stability of a strategy. We focus on two well-known strategies: the zero-determinant strategy and the Pavlov strategy. Zero-determinant strategies have been shown to be evolutionarily unstable in a well-mixed population of players. We identify that the zero-determinant strategy may survive, and may even dominate, in a population of players connected through a non-homogeneous network. We introduce the concept of `topological stability' to denote this phenomenon. We argue that not only the network topology but also the evolutionary process applied and the initial distribution of strategies are critical in determining the evolutionary stability of strategies. Further, we observe that topological stability can affect other well-known strategies as well, such as the general cooperator strategy and the cooperator strategy. Our observations suggest that variation of evolutionary stability due to topological stability may be more prevalent in the social context of strategic evolution than in the biological context.
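As a toy illustration of how topology alone can change an evolutionary outcome, the sketch below runs one synchronous step of a deterministic imitate-the-best update (a hypothetical, much simplified stand-in for the evolutionary processes studied in such work) for the Prisoner's Dilemma on a ring versus a well-mixed (complete) graph:

```python
# Toy sketch: each node plays one PD round with every neighbor, then copies
# the strategy of the highest-scoring player among itself and its neighbors.
# The graph, payoffs, and deterministic tie-breaking are illustrative
# choices, not those of the cited study.

PD = {('C', 'C'): 3, ('C', 'D'): 0, ('D', 'C'): 5, ('D', 'D'): 1}

def imitation_step(strategies, neighbors, payoff=PD):
    n = len(strategies)
    score = [sum(payoff[(strategies[i], strategies[j])] for j in neighbors[i])
             for i in range(n)]
    new = []
    for i in range(n):
        candidates = [i] + neighbors[i]
        # Highest score wins; ties go to the lowest node index (deterministic).
        best = max(candidates, key=lambda k: (score[k], -k))
        new.append(strategies[best])
    return new

ring = [[4, 1], [0, 2], [1, 3], [2, 4], [3, 0]]                 # 5-cycle
complete = [[j for j in range(5) if j != i] for i in range(5)]  # well-mixed
s0 = ['C', 'C', 'C', 'D', 'D']
# On the ring the cooperator cluster persists after the update; on the
# complete graph defection takes over the whole population in one step.
```

The contrast between the two graphs is the kind of topology-dependent survival the abstract calls topological stability.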
Hugo Gimbert (2013)
We prove that optimal strategies exist in every perfect-information stochastic game with finitely many states and actions and a tail winning condition.
Hugo Gimbert (2010)
We examine perfect information stochastic mean-payoff games - a class of games containing as special sub-classes the usual mean-payoff games and parity games. We show that deterministic memoryless strategies that are optimal for discounted games with state-dependent discount factors close to 1 are optimal for priority mean-payoff games establishing a strong link between these two classes.
Regret has been established as a foundational concept in online learning, and likewise has important applications in the analysis of learning dynamics in games. Regret quantifies the difference between a learner's performance and a baseline in hindsight. It is well known that regret-minimizing algorithms converge to certain classes of equilibria in games; however, traditional forms of regret used in game theory predominantly consider baselines that permit deviations to deterministic actions or strategies. In this paper, we revisit our understanding of regret from the perspective of deviations over partitions of the full mixed strategy space (i.e., probability distributions over pure strategies), under the lens of the previously established Φ-regret framework, which provides a continuum of stronger regret measures. Importantly, Φ-regret enables learning agents to consider deviations from and to mixed strategies, generalizing several existing notions of regret such as external, internal, and swap regret, and thus broadening the insights gained from regret-based analysis of learning algorithms. We prove that the well-studied evolutionary learning algorithm of replicator dynamics (RD) seamlessly minimizes the strongest possible form of Φ-regret in generic 2×2 games, without any modification of the underlying algorithm itself. We subsequently conduct experiments validating our theoretical results in a suite of 144 2×2 games in which RD exhibits a diverse set of behaviors. We conclude by providing empirical evidence of Φ-regret minimization by RD in some larger games, hinting at further opportunities for Φ-regret-based study of such algorithms from both theoretical and empirical perspectives.
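The replicator dynamics discussed above can be sketched with a simple Euler discretization; the payoff matrix, step size, and iteration count below are illustrative assumptions, not the paper's experimental setup:

```python
# Sketch: single-population discrete-time replicator dynamics in a 2x2
# symmetric game. x is the population share of action 0; shares grow in
# proportion to how much their fitness exceeds the population average.

def replicator(A, x0, dt=0.05, steps=5000):
    x = x0
    for _ in range(steps):
        f0 = A[0][0] * x + A[0][1] * (1 - x)   # fitness of action 0
        f1 = A[1][0] * x + A[1][1] * (1 - x)   # fitness of action 1
        avg = x * f0 + (1 - x) * f1            # mean population fitness
        x += dt * x * (f0 - avg)               # replicator update
        x = min(1.0, max(0.0, x))              # guard against drift
    return x

# Prisoner's Dilemma: cooperate = action 0, defect = action 1.
# Cooperation is strictly dominated, so its share decays toward zero.
PD = [[3, 0], [5, 1]]
```

Starting from `replicator(PD, 0.5)`, the cooperator share converges to the all-defect rest point, the expected RD behavior in a dominance-solvable 2×2 game.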
Dong Hao, Zhihai Rong, Tao Zhou (2014)
Repeated game theory has been one of the most prevalent tools for understanding long-run relationships, which are cornerstones in building human society. Recent works have revealed a new set of zero-determinant (ZD) strategies, an important advance in repeated games. A ZD-strategy player can exert unilateral control over the two players' payoffs: in particular, he can deterministically set the opponent's payoff, or enforce an unfair linear relationship between the players' payoffs, thereby always seizing an advantageous share of the payoffs. One limitation of the original ZD strategy, however, is that it does not capture the notion of robustness when the game is subject to stochastic errors. In this paper, we propose a general model of ZD strategies for noisy repeated games and find that ZD strategies have high robustness against errors. We further derive the pinning strategy under noise, by which the ZD-strategy player coercively sets the opponent's expected payoff to his desired level, although his payoff-control ability declines as the noise strength increases. Due to the uncertainty caused by noise, the ZD-strategy player cannot guarantee his payoff to be higher than the opponent's, which implies that strong extortions do not exist even under low noise. However, we show that the ZD-strategy player can still establish a novel kind of extortion, named weak extortion, in which any increase of his own payoff always exceeds that of the opponent's by a fixed percentage; the conditions under which weak extortions can be realized become more stringent as the noise grows stronger.
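For the noiseless baseline, ZD extortion can be checked numerically. The sketch below uses Press and Dyson's published chi = 3 extortion strategy p = (11/13, 1/2, 7/26, 0) for payoffs (R, S, T, P) = (3, 0, 5, 1) and verifies, via the stationary distribution of the induced Markov chain, that sX - P = 3(sY - P) holds against arbitrary opponents; the paper's noisy-game model is not reproduced here:

```python
# Sketch of zero-determinant extortion in the noiseless iterated Prisoner's
# Dilemma. Memory-one strategies give cooperation probabilities conditioned
# on the previous outcome (CC, CD, DC, DD), from player X's perspective.

def stationary_payoffs(p, q, iters=20000):
    """Long-run payoffs when memory-one strategies p (X) and q (Y) play
    the iterated PD with payoffs (R, S, T, P) = (3, 0, 5, 1)."""
    # Y sees the states with roles swapped: X-view CD is Y-view DC.
    qx = (q[0], q[2], q[1], q[3])
    # 4x4 Markov transition matrix over the four joint outcomes.
    M = []
    for s in range(4):
        a, b = p[s], qx[s]
        M.append([a * b, a * (1 - b), (1 - a) * b, (1 - a) * (1 - b)])
    v = [0.25] * 4
    for _ in range(iters):  # power iteration to the stationary distribution
        v = [sum(v[s] * M[s][t] for s in range(4)) for t in range(4)]
    sx = sum(vi * gi for vi, gi in zip(v, (3, 0, 5, 1)))  # X's payoff
    sy = sum(vi * gi for vi, gi in zip(v, (3, 5, 0, 1)))  # Y's payoff
    return sx, sy

# Press & Dyson's chi = 3 extortionate ZD strategy for X:
extort3 = (11 / 13, 1 / 2, 7 / 26, 0)
```

Against any fixed opponent q, the stationary payoffs satisfy sX - 1 = 3(sY - 1), so every point of surplus X concedes earns the opponent only a third as much as X.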