
Finding Core Members of Cooperative Games using Agent-Based Modeling

Posted by Andrew Collins
Publication date: 2020
Research field: Informatics Engineering
Paper language: English





Agent-based modeling (ABM) is a powerful paradigm for gaining insight into social phenomena. One area to which ABM has rarely been applied is coalition formation. Traditionally, coalition formation is modeled using cooperative game theory. In this paper, a heuristic algorithm is developed that can be embedded into an ABM to allow the agents to find coalitions. The resultant coalition structures are comparable to those found by cooperative game theory solution approaches, specifically the core. A heuristic approach is required because of the computational complexity of finding a cooperative game theory solution, which limits its application to only about a score of agents. The ABM paradigm provides a platform in which simple rules and interactions between agents can produce a macro-level effect without the large computational requirements. As such, it can be an effective means of approximating cooperative game solutions for large numbers of agents. Our heuristic algorithm combines agent-based modeling and cooperative game theory to help find agent partitions that are members of a game's core solution. The accuracy of our heuristic algorithm can be determined by comparing its outcomes to the actual core solutions. This comparison is achieved by developing an experiment that uses a specific example of a cooperative game called the glove game, a type of exchange economy game. Finding the traditional cooperative game theory solutions is computationally intensive for large numbers of players because each possible partition must be compared to each possible coalition to determine the core set; hence our experiment only considers games of up to nine players. The results indicate that our heuristic approach achieves a core solution over 90% of the time for the games considered in our experiment.
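To make the computational burden concrete, here is a minimal sketch (not the paper's heuristic) of the glove game's characteristic function and the kind of brute-force core-membership check the heuristic is compared against: every one of the 2^n coalitions must be inspected, which is why the exact approach only scales to small games.

```python
# A minimal sketch (not the authors' code) of the glove game and a brute-force
# core-membership check, illustrating why the exact approach scales poorly:
# the number of coalitions to inspect grows as 2^n.
from itertools import combinations

def glove_value(coalition, gloves):
    """Characteristic function of the glove game: a coalition's worth is the
    number of left/right glove pairs its members can form."""
    left = sum(1 for i in coalition if gloves[i] == "L")
    right = sum(1 for i in coalition if gloves[i] == "R")
    return min(left, right)

def in_core(payoff, gloves):
    """Check whether a payoff vector lies in the core: it must split exactly
    v(N), and no coalition may be able to do better on its own."""
    players = range(len(gloves))
    if abs(sum(payoff) - glove_value(players, gloves)) > 1e-9:
        return False  # not efficient
    for size in range(1, len(gloves) + 1):
        for coalition in combinations(players, size):
            if sum(payoff[i] for i in coalition) < glove_value(coalition, gloves) - 1e-9:
                return False  # this coalition would break away
    return True

# Example: two left-glove holders and one right-glove holder. The scarce right
# glove captures all of the value, so (0, 0, 1) is a core allocation.
gloves = ["L", "L", "R"]
print(in_core([0.0, 0.0, 1.0], gloves))   # True
print(in_core([0.5, 0.0, 0.5], gloves))   # False: players 1 and 2 object
```

The same check, applied to the allocations induced by the ABM's coalition structures, is how the heuristic's accuracy could be scored for small games like those in the experiment.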


Read also

517 - Liheng Chen, Hongyi Guo, Yali Du 2019
In many real-world problems, a team of agents needs to collaborate to maximize a common reward. Although existing works formulate this problem in a centralized-learning-with-decentralized-execution framework, which avoids the non-stationarity problem in training, the decentralized execution paradigm limits the agents' capability to coordinate. Inspired by the concept of correlated equilibrium, we propose to introduce a coordination signal to address this limitation, and theoretically show that, under mild conditions, decentralized agents with the coordination signal can coordinate their individual policies as if manipulated by a centralized controller. The idea of introducing a coordination signal is to encapsulate coordinated strategies into the signals and use the signals to instruct collaboration during decentralized execution. To encourage agents to learn to exploit the coordination signal, we propose Signal Instructed Coordination (SIC), a novel coordination module that can be integrated with most existing MARL frameworks. SIC casts a common signal sampled from a pre-defined distribution to all agents, and introduces an information-theoretic regularization to facilitate consistency between the observed signal and the agents' policies. Our experiments show that SIC consistently improves performance over well-recognized MARL models in both matrix games and a predator-prey game with a high-dimensional strategy space.
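As an illustration of the shared-signal idea (a hypothetical sketch, not the SIC implementation), the snippet below broadcasts a common signal to two decentralized agents whose policies condition on it; conditioning on the same draw lets them coordinate without communicating at execution time. The game, policies, and episode count are invented for the example.

```python
# A minimal sketch (not SIC itself) of conditioning decentralized policies on a
# shared signal drawn from a fixed distribution, the correlated-equilibrium idea.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2-agent, 2-action coordination game: reward only when actions match.
def reward(a0, a1):
    return 1.0 if a0 == a1 else 0.0

# Each agent's policy maps the shared signal z in {0, 1} to an action. Here the
# policies are hand-coded to "follow the signal"; in SIC they would be learned
# with an information-theoretic consistency regularizer.
policy = [lambda z: z, lambda z: z]

episodes = 1000
total = 0.0
for _ in range(episodes):
    z = rng.integers(0, 2)            # common signal broadcast to all agents
    a0, a1 = policy[0](z), policy[1](z)
    total += reward(a0, a1)

print(total / episodes)  # 1.0: perfect coordination despite decentralized execution
```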
Coordination is often critical to forming prosocial behaviors -- behaviors that increase the overall sum of rewards received by all agents in a multi-agent game. However, state-of-the-art reinforcement learning algorithms often suffer from converging to socially less desirable equilibria when multiple equilibria exist. Previous works address this challenge with explicit reward shaping, which requires the strong assumption that agents can be forced to be prosocial. We propose using a less restrictive peer-rewarding mechanism, gifting, that guides the agents toward more socially desirable equilibria while allowing them to remain selfish and decentralized. Gifting allows each agent to give some of its reward to other agents. We employ a theoretical framework that captures the benefit of gifting in converging to the prosocial equilibrium by characterizing the equilibria's basins of attraction in a dynamical system. With gifting, we demonstrate increased convergence of high-risk, general-sum coordination games to the prosocial equilibrium, both via numerical analysis and experiments.
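A minimal numerical sketch of the gifting idea (hypothetical, not the paper's framework): in a stag-hunt-style game, letting each agent pass a fraction of its reward to the other reduces the loss from attempting the prosocial action when the partner defects, which is one way the prosocial equilibrium's basin of attraction can grow. The payoff matrix and gift fractions are invented for illustration.

```python
# A minimal sketch of gifting in a stag-hunt-style coordination game: each agent
# gives a fraction of its own reward to the other, which reshapes the effective
# payoffs and makes the high-reward (prosocial) equilibrium less risky to aim for.
import numpy as np

# Hypothetical stag hunt payoffs: rows = agent 0's action, cols = agent 1's.
# Action 0 = "stag" (prosocial, risky), action 1 = "hare" (safe, selfish).
R0 = np.array([[4.0, 0.0],
               [3.0, 3.0]])   # agent 0's reward
R1 = R0.T                     # symmetric game: agent 1's reward

def gifted_payoffs(gift_frac):
    """Each agent keeps (1 - gift_frac) of its own reward and receives
    gift_frac of the other agent's reward."""
    G0 = (1 - gift_frac) * R0 + gift_frac * R1
    G1 = (1 - gift_frac) * R1 + gift_frac * R0
    return G0, G1

for g in (0.0, 0.5):
    G0, _ = gifted_payoffs(g)
    # Agent 0's payoff for playing "stag" when agent 1 defects to "hare",
    # compared with the safe (hare, hare) outcome: the gap shrinks with gifting.
    print(f"gift={g}: payoff(stag, hare)={G0[0, 1]:.1f} vs payoff(hare, hare)={G0[1, 1]:.1f}")
```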
281 - Chao Wen, Miao Xu, Zhilin Zhang 2021
In online advertising, auto-bidding has become an essential tool for advertisers to optimize their preferred ad performance metrics by simply expressing high-level campaign objectives and constraints. Previous works consider the design of auto-bidding agents from the single-agent view without modeling the mutual influence between agents. In this paper, we instead consider this problem from the perspective of a distributed multi-agent system, and propose a general Multi-Agent reinforcement learning framework for Auto-Bidding, namely MAAB, to learn the auto-bidding strategies. First, we investigate the competition and cooperation relations among auto-bidding agents, and propose temperature-regularized credit assignment for establishing a mixed cooperative-competitive paradigm. By carefully making a competition-cooperation trade-off among the agents, we can reach an equilibrium state that guarantees not only the individual advertisers' utility but also the system performance (social welfare). Second, because of the observed collusion behavior of bidding low prices that underlies the cooperation, we further propose bar agents that set a personalized bidding bar for each agent, thereby alleviating the degradation of revenue. Third, to deploy MAAB to a large-scale advertising system with millions of advertisers, we propose a mean-field approach. By grouping advertisers with the same objective as a mean auto-bidding agent, the interactions among advertisers are greatly simplified, making it practical to train MAAB efficiently. Extensive experiments on an offline industrial dataset and the Alibaba advertising platform demonstrate that our approach outperforms several baseline methods in terms of social welfare and guarantees the ad platform's revenue.
Proximal Policy Optimization (PPO) is a popular on-policy reinforcement learning algorithm but is significantly less utilized than off-policy learning algorithms in multi-agent settings. This is often due to the belief that on-policy methods are significantly less sample efficient than their off-policy counterparts in multi-agent problems. In this work, we investigate Multi-Agent PPO (MAPPO), a variant of PPO which is specialized for multi-agent settings. Using a single-GPU desktop, we show that MAPPO achieves surprisingly strong performance in three popular multi-agent testbeds: the particle-world environments, the StarCraft multi-agent challenge, and the Hanabi challenge, with minimal hyperparameter tuning and without any domain-specific algorithmic modifications or architectures. In the majority of environments, we find that, compared to off-policy baselines, MAPPO achieves strong results while exhibiting comparable sample efficiency. Finally, through ablation studies, we present the implementation and algorithmic factors that are most influential to MAPPO's practical performance.
59 - Daniel Tang 2020
In this paper we consider the problem of finding the most probable set of events that could have led to a set of partial, noisy observations of some dynamical system. In particular, we consider the case where the dynamical system is a (possibly stochastic) time-stepping agent-based model with a discrete state space, the (possibly noisy) observations are the number of agents that have some given property, and the events we are interested in are the decisions made by the agents (their "expressed behaviours") as the model evolves. We show that this problem can be reduced to an integer linear programming problem, which can subsequently be solved numerically using a standard branch-and-cut algorithm. We describe two implementations: an "offline" algorithm that finds the maximum-a-posteriori expressed behaviours given a set of observations over a finite time window, and an "online" algorithm that incrementally builds a feasible set of behaviours from a stream of observations that may have no natural beginning or end. We demonstrate both algorithms on a spatial predator-prey model on a 32x32 grid with an initial population of 100 agents.
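To illustrate the flavour of the reduction (a toy sketch, not the paper's formulation), the snippet below poses a tiny version of the problem as an integer linear program: choose integer counts of each expressed behaviour that maximize the total log-probability while exactly reproducing a noise-free observation. It uses scipy.optimize.milp as a stand-in for the branch-and-cut solver; the behaviours, priors, and observation are invented.

```python
# A toy sketch of the reduction idea: pick non-negative integer counts n_b of how
# many agents expressed each behaviour b, maximizing the total log-probability,
# subject to linear constraints that the counts reproduce an observation.
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# Hypothetical setup: 100 agents, three behaviours with prior probabilities.
log_p = np.log([0.6, 0.3, 0.1])      # per-agent log-probability of each behaviour
n_agents = 100
observed_with_property = 35           # observation: behaviours 1 and 2 confer the property

# Maximize sum_b n_b * log_p[b]  <=>  minimize -sum_b n_b * log_p[b]
c = -log_p
constraints = [
    LinearConstraint(np.ones(3), n_agents, n_agents),                       # counts sum to population
    LinearConstraint(np.array([0, 1, 1]), observed_with_property, observed_with_property),
]
res = milp(c=c, constraints=constraints,
           integrality=np.ones(3),    # all decision variables are integers
           bounds=Bounds(0, n_agents))
print(res.x)   # most probable counts consistent with the observation, e.g. [65, 35, 0]
```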
