
Learning to Win, Lose and Cooperate through Reward Signal Evolution

Published by: Rafał Muszyński
Publication date: 2021
Research field: Informatics Engineering
Paper language: English





Solving a reinforcement learning problem typically involves correctly prespecifying the reward signal from which the algorithm learns. Here, we approach the problem of reward signal design by using an evolutionary approach to perform a search on the space of all possible reward signals. We introduce a general framework for optimizing $N$ goals given $n$ reward signals. Through experiments we demonstrate that such an approach allows agents to learn high-level goals - such as winning, losing and cooperating - from scratch without prespecified reward signals in the game of Pong. Some of the solutions found by the algorithm are surprising, in the sense that they would probably not have been chosen by a person trying to hand-code a given behaviour through a specific reward signal. Furthermore, it seems that the proposed approach may also benefit from higher stability of the training performance when compared with the typical score-based reward signals.
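As a rough illustration of the approach described in the abstract, here is a minimal sketch of an evolutionary search over reward-signal weights, assuming each candidate reward is a linear combination of game-event counts. The event list, hyperparameters, and the train_and_evaluate fitness stub are all hypothetical placeholders, not the paper's actual setup.

# Minimal sketch: evolve a population of reward-weight vectors.
# train_and_evaluate is a placeholder for "train an RL agent under this
# reward signal, then score how well the high-level goal is achieved".
import random

N_EVENTS = 4          # e.g. score a point, concede a point, hit the ball, rally step
POP_SIZE = 20
GENERATIONS = 50
MUT_STD = 0.1         # mutation noise

def train_and_evaluate(weights):
    # Toy fitness: distance to an arbitrary "good" weight vector.
    target = [1.0, -1.0, 0.1, 0.0]
    return -sum((w - t) ** 2 for w, t in zip(weights, target))

def mutate(weights):
    return [w + random.gauss(0.0, MUT_STD) for w in weights]

population = [[random.uniform(-1, 1) for _ in range(N_EVENTS)]
              for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    ranked = sorted(population, key=train_and_evaluate, reverse=True)
    elite = ranked[:POP_SIZE // 4]                  # truncation selection
    population = elite + [mutate(random.choice(elite))
                          for _ in range(POP_SIZE - len(elite))]

best = max(population, key=train_and_evaluate)
print("best reward weights:", [round(w, 2) for w in best])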




Read also

Many real-world scenarios involve teams of agents that have to coordinate their actions to reach a shared goal. We focus on the setting in which a team of agents faces an opponent in a zero-sum, imperfect-information game. Team members can coordinate their strategies before the beginning of the game, but are unable to communicate during the playing phase of the game. This is the case, for example, in Bridge, collusion in poker, and collusion in bidding. In this setting, model-free RL methods are oftentimes unable to capture coordination because agents' policies are executed in a decentralized fashion. Our first contribution is a game-theoretic centralized training regimen to effectively perform trajectory sampling so as to foster team coordination. When team members can observe each other's actions, we show that this approach provably yields equilibrium strategies. Then, we introduce a signaling-based framework to represent team coordinated strategies given a buffer of past experiences. Each team member's policy is parametrized as a neural network whose output is conditioned on a suitable exogenous signal, drawn from a learned probability distribution. By combining these two elements, we empirically show convergence to coordinated equilibria in cases where previous state-of-the-art multi-agent RL algorithms did not.
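The signaling idea above can be sketched with a toy signal-conditioned policy: a shared discrete signal is drawn once from a distribution (fixed here, learned in the paper), and every teammate's network takes it as an extra input, so decentralized actions become correlated. The dimensions and the linear policy are illustrative assumptions.

# Sketch of a signal-conditioned team policy (toy linear network).
import numpy as np

rng = np.random.default_rng(0)
OBS_DIM, SIG_DIM, N_ACTIONS = 8, 3, 4
W = rng.normal(scale=0.1, size=(OBS_DIM + SIG_DIM, N_ACTIONS))

def sample_signal(probs):
    # One shared draw per game from the signal distribution (one-hot encoded).
    z = np.zeros(SIG_DIM)
    z[rng.choice(SIG_DIM, p=probs)] = 1.0
    return z

def act(obs, z):
    # The policy sees observation plus signal; softmax over action logits.
    logits = np.concatenate([obs, z]) @ W
    p = np.exp(logits - logits.max())
    p /= p.sum()
    return rng.choice(N_ACTIONS, p=p)

signal = sample_signal(np.array([0.5, 0.3, 0.2]))   # broadcast to the whole team
actions = [act(rng.normal(size=OBS_DIM), signal) for _ in range(2)]
print("signal:", signal, "teammates' actions:", actions)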
Liheng Chen, Hongyi Guo, Yali Du (2019)
In many real-world problems, a team of agents needs to collaborate to maximize the common reward. Although existing works formulate this problem as centralized learning with decentralized execution, which avoids the non-stationarity problem in training, the decentralized execution paradigm limits the agents' capability to coordinate. Inspired by the concept of correlated equilibrium, we propose to introduce a coordination signal to address this limitation, and theoretically show that, under mild conditions, decentralized agents with the coordination signal can coordinate their individual policies as if manipulated by a centralized controller. The idea of introducing a coordination signal is to encapsulate coordinated strategies into the signals, and use the signals to instruct the collaboration in decentralized execution. To encourage agents to learn to exploit the coordination signal, we propose Signal Instructed Coordination (SIC), a novel coordination module that can be integrated with most existing MARL frameworks. SIC casts a common signal sampled from a pre-defined distribution to all agents, and introduces an information-theoretic regularization to facilitate the consistency between the observed signal and agents' policies. Our experiments show that SIC consistently improves performance over well-recognized MARL models in both matrix games and a predator-prey game with a high-dimensional strategy space.
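One plausible form of SIC's information-theoretic regularization (an assumption on our part, not necessarily the authors' exact loss) is a variational bound on the mutual information between the broadcast signal and behaviour: a predictor q(z | trajectory) is trained to recover the signal, and its cross-entropy is added to the policy objective.

# Sketch of a signal-consistency regularizer via a variational MI bound.
import numpy as np

def signal_consistency_loss(pred_logits, z_index):
    # Cross-entropy of the signal predictor on the true signal index;
    # minimizing it maximizes a lower bound on I(signal; trajectory).
    shifted = pred_logits - pred_logits.max()
    log_probs = shifted - np.log(np.exp(shifted).sum())
    return -log_probs[z_index]

pred = np.array([2.0, 0.1, -1.0])  # predictor output from a trajectory encoding
print(signal_consistency_loss(pred, z_index=0))  # small loss: behaviour matches signal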
In recent years, the Win-Stay-Lose-Learn rule has attracted wide attention as an effective strategy updating rule, and voluntary participation has been proposed by introducing a third strategy into the Prisoner's Dilemma game. Some studies show that combining the Win-Stay-Lose-Learn rule with voluntary participation can promote cooperation significantly under moderate temptation values; however, cooperators' survival under high aspiration levels and high temptation values remains a challenging problem. In this paper, inspired by Achievement Motivation Theory, a Dynamic-Win-Stay-Lose-Learn rule with voluntary participation is investigated, where a dynamic aspiration process is introduced to describe the co-evolution of individuals' strategies and aspirations. It is found that cooperation is greatly promoted and defection is almost extinct in our model, even when the initial aspiration levels and temptation values are high. The combination of dynamic aspiration and voluntary participation plays an active role, since loners can survive under high initial aspiration levels and expand stably because of their fixed payoffs. The robustness of our model is also discussed, and some adverse structures are found which should be watched for in the evolutionary process. Our work provides a more realistic model and shows that cooperators may prevail over defectors in an unfavorable initial environment.
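The mechanics can be illustrated with a toy agent update, under assumed dynamics (the paper's exact equations may differ): keep the current strategy when the payoff meets the aspiration, learn the neighbour's strategy otherwise, let the aspiration drift toward realized payoffs, and give loners a fixed payoff regardless of the opponent.

# Toy Dynamic-Win-Stay-Lose-Learn update with a loner option.
import random

SIGMA = 0.3   # loner's fixed payoff (assumed value)
ALPHA = 0.2   # aspiration adaptation rate (assumed dynamics)

def payoff(me, other, b=1.5):
    if me == "L" or other == "L":
        return SIGMA                 # voluntary participation: loners opt out
    table = {("C", "C"): 1.0, ("C", "D"): 0.0, ("D", "C"): b, ("D", "D"): 0.0}
    return table[(me, other)]

def step(strategy, aspiration, pi, neighbour_strategy):
    # Win-Stay-Lose-Learn: stay if satisfied, imitate the neighbour if not.
    new_strategy = strategy if pi >= aspiration else neighbour_strategy
    # Dynamic aspiration: drift toward the payoff actually obtained.
    new_aspiration = aspiration + ALPHA * (pi - aspiration)
    return new_strategy, new_aspiration

s, A = "D", 1.2                      # start as a defector with high aspiration
for _ in range(20):
    nb = random.choice(["C", "D", "L"])
    s, A = step(s, A, payoff(s, nb), nb)
print("final strategy:", s, "final aspiration:", round(A, 2))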
Minjae Kim, Jung-Kyoo Choi (2021)
Evolutionary game theory assumes that players replicate a highly scored player's strategy through genetic inheritance. However, when learning occurs culturally, it is often difficult to recognize someone's strategy just by observing the behaviour. In this work, we consider players with memory-one stochastic strategies in the iterated Prisoner's Dilemma, with the assumption that they cannot directly access each other's strategy but only observe the actual moves for a certain number of rounds. Based on the observation, the observer has to infer the resident strategy in a Bayesian way and choose his or her own strategy accordingly. By examining the best-response relations, we argue that players can escape from full defection into a cooperative equilibrium supported by Win-Stay-Lose-Shift in a self-confirming manner, provided that the cost of cooperation is low and observational learning supplies sufficiently large uncertainty.
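A simple way to picture the inference step: a memory-one strategy is four cooperation probabilities, one per outcome of the previous round, and an observer can estimate them from watched moves with per-context Beta posteriors. The uniform Beta(1,1) prior below is an illustrative choice, not necessarily the paper's.

# Sketch: Bayesian estimation of a memory-one strategy from observed moves.
contexts = ["CC", "CD", "DC", "DD"]      # (own last move, opponent's last move)
counts = {c: [1, 1] for c in contexts}   # [cooperations, defections], Beta(1,1) prior

# Observed rounds of the resident player: (context, move made in that context).
observations = [("CC", "C"), ("CC", "C"), ("CD", "D"), ("DC", "D"), ("DD", "C")]
for ctx, move in observations:
    counts[ctx][0 if move == "C" else 1] += 1

posterior_mean = {c: counts[c][0] / sum(counts[c]) for c in contexts}
print(posterior_mean)
# Win-Stay-Lose-Shift would look like CC ~ 1, CD ~ 0, DC ~ 0, DD ~ 1.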
The Prisoner's Dilemma game is the most commonly used model of spatial evolutionary games, and is considered a paradigm for portraying competition among selfish individuals. In recent years, Win-Stay-Lose-Learn, a strategy updating rule based on aspiration, has proved to be an effective model for promoting cooperation in the spatial Prisoner's Dilemma game, which has brought aspiration considerable attention. However, in much of this research the assumption that an individual's aspiration is fixed is inconsistent with recent results from psychology. In this paper, following Expected Value Theory and Achievement Motivation Theory, we propose a dynamic aspiration model based on the Win-Stay-Lose-Learn rule, in which an individual's aspiration is driven by its payoff. It is found that dynamic aspiration has a significant impact on the evolution process, and that different initial aspirations lead to different outcomes, called Stable Coexistence under Low Aspiration, Dependent Coexistence under Moderate Aspiration, and Defection Explosion under High Aspiration, respectively. Furthermore, a detailed analysis is performed on the local structures which cause cooperators' survival or defectors' expansion, and on the evolution process for different parameters, including strategy and aspiration. As a result, the intrinsic structures leading to defectors' expansion and cooperators' survival are identified for the different evolution processes, which provides a penetrating understanding of the evolution. Compared to the fixed aspiration model, dynamic aspiration offers a more satisfactory explanation of the laws of population evolution and promotes a deeper comprehension of the principles of the Prisoner's Dilemma.