Agent-based modeling (ABM) is a powerful paradigm for gaining insight into social phenomena. One area to which ABM has rarely been applied is coalition formation. Traditionally, coalition formation is modeled using cooperative game theory. In this paper, a heuristic algorithm is developed that can be embedded into an ABM to allow the agents to find coalitions. The resultant coalition structures are comparable to those found by cooperative game theory solution approaches, specifically the core. A heuristic approach is required because of the computational complexity of finding a cooperative game theory solution, which limits its application to only about a score of agents. The ABM paradigm provides a platform in which simple rules and interactions between agents can produce a macro-level effect without large computational requirements. As such, it can be an effective means of approximating cooperative game solutions for large numbers of agents. Our heuristic algorithm combines agent-based modeling and cooperative game theory to help find agent partitions that are members of a game's core solution. The accuracy of our heuristic algorithm can be determined by comparing its outcomes to the actual core solutions. This comparison is achieved through an experiment that uses a specific example of a cooperative game called the glove game, a type of exchange economy game. Finding the traditional cooperative game theory solutions is computationally intensive for large numbers of players because each possible partition must be compared to each possible coalition to determine the core set; hence our experiment only considers games of up to nine players. The results indicate that our heuristic approach achieves a core solution over 90% of the time for the games considered in our experiment.
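To make the exhaustive benchmark concrete, the sketch below shows a brute-force core check for a small glove game. It is a minimal Python illustration under our own naming (glove_value, in_core, and the example glove assignment are assumptions for exposition), not the paper's heuristic algorithm: a coalition's value is the number of left/right glove pairs it can match, and a payoff vector lies in the core if it distributes exactly v(N) and no coalition can improve on its own share.

```python
from itertools import chain, combinations

def glove_value(coalition, gloves):
    """Glove-game value: number of left/right pairs the coalition can match."""
    lefts = sum(1 for p in coalition if gloves[p] == "L")
    rights = sum(1 for p in coalition if gloves[p] == "R")
    return min(lefts, rights)

def all_coalitions(players):
    """Every non-empty subset of players."""
    return chain.from_iterable(
        combinations(players, r) for r in range(1, len(players) + 1)
    )

def in_core(payoff, gloves):
    """Brute-force core test: the payoff must be efficient (sum to v(N))
    and no coalition S may be blocked (sum of payoffs in S >= v(S))."""
    players = list(gloves)
    if abs(sum(payoff[p] for p in players) - glove_value(players, gloves)) > 1e-9:
        return False  # not efficient
    return all(
        sum(payoff[p] for p in s) >= glove_value(s, gloves) - 1e-9
        for s in all_coalitions(players)
    )

# Hypothetical three-player example: two left-glove holders, one right-glove holder.
gloves = {"a": "L", "b": "L", "c": "R"}
print(in_core({"a": 0.0, "b": 0.0, "c": 1.0}, gloves))  # True: scarce glove captures the surplus
print(in_core({"a": 0.5, "b": 0.0, "c": 0.5}, gloves))  # False: coalition {b, c} can block
```

Because the check enumerates every coalition, its cost grows as 2^n in the number of players, which is why the experiment in the abstract stops at nine players and why a heuristic is attractive beyond that scale.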