Human culture is uniquely cumulative and open-ended. Using a computational model of cultural evolution in which neural-network-based agents evolve ideas for actions through invention and imitation, we tested the hypothesis that this is due to the capacity for recursive recall. We compared runs in which agents were limited to single-step actions with runs in which they used recursive recall to chain simple actions into complex ones. Chaining resulted in higher cultural diversity, open-ended generation of novelty, and no ceiling on the mean fitness of actions. Both chaining and no-chaining runs exhibited convergence on optimal actions, but without chaining this set was static, whereas with chaining it was ever-changing. Chaining also increased agents' ability to capitalize on their capacity for learning. These findings show that the recursive recall hypothesis provides a computationally plausible explanation of why humans alone have evolved the cultural means to transform this planet.
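To make the chaining manipulation concrete, here is a minimal, illustrative sketch in Python. It is not the authors' EVOC code; every name in it (Agent, fitness, chain_prob, the primitive alphabet) is an assumption introduced purely for illustration. With chaining off, invention can only produce single-step actions; with chaining on, an agent can recursively recall two remembered actions and splice them into a compound one.

```python
# Illustrative sketch only -- not the authors' model. All names and
# parameters here are assumptions made for the sake of the example.
import random

def fitness(action):
    """Toy fitness: longer, more varied chains score higher (a stand-in
    for whatever task-specific evaluation the real model uses)."""
    return len(set(action)) + 0.1 * len(action)

class Agent:
    def __init__(self, primitives):
        # Start with single-step actions only.
        self.repertoire = [(p,) for p in primitives]

    def invent(self, chaining=False, chain_prob=0.5):
        if chaining and random.random() < chain_prob and len(self.repertoire) > 1:
            # Recursive recall: splice two remembered actions into one.
            a, b = random.sample(self.repertoire, 2)
            new_action = a + b
        else:
            # Single-step invention: a fresh primitive action.
            new_action = (random.choice("abcdef"),)
        self.repertoire.append(new_action)
        return new_action

    def imitate(self, other):
        # Copy the neighbour's fittest action, if not already known.
        best = max(other.repertoire, key=fitness)
        if best not in self.repertoire:
            self.repertoire.append(best)

def run(chaining, n_agents=20, steps=200):
    agents = [Agent("ab") for _ in range(n_agents)]
    for _ in range(steps):
        for agent in agents:
            if random.random() < 0.5:
                agent.invent(chaining=chaining)
            else:
                agent.imitate(random.choice(agents))
    # Mean of each agent's best action fitness.
    return sum(max(fitness(a) for a in ag.repertoire) for ag in agents) / n_agents

print("mean best fitness, no chaining:", run(chaining=False))
print("mean best fitness, chaining:   ", run(chaining=True))
```

Under this toy fitness, the no-chaining condition plateaus once the best single-step action is found, while the chaining condition keeps producing fitter compound actions, mirroring the ceiling effect described in the abstract.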
In this paper we consider the problem of finding the most probable set of events that could have led to a set of partial, noisy observations of some dynamical system. In particular, we consider the case of a dynamical system that is a (possibly stochastic) …
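Stated in standard Bayesian terms (our framing, not necessarily the paper's), this is a maximum a posteriori (MAP) problem: given partial, noisy observations $y$ of the system's trajectory, find the event set $\hat{x} = \arg\max_x p(x \mid y) = \arg\max_x p(y \mid x)\, p(x)$, where $p(y \mid x)$ models the observation noise and $p(x)$ reflects the system dynamics.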
EVOC (for EVOlution of Culture) is a computer model of culture that enables us to investigate how various factors such as barriers to cultural diffusion, the presence and choice of leaders, or changes in the ratio of innovation to imitation affect the …
Agent-based modeling (ABM) is a powerful paradigm for gaining insight into social phenomena. One area to which ABM has rarely been applied is coalition formation. Traditionally, coalition formation is modeled using cooperative game theory. In this paper, a h…
This paper describes a formalization of agent-based models (ABMs) as random walks on regular graphs and relates the symmetry group of those graphs to a coarse-graining of the ABM that is still Markovian. An ABM in which $N$ agents can be in $\delta$ d…
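A brief sketch of the state-space counting this sets up (our notation, and assuming the truncated sentence continues with "$\delta$ different states"): with $N$ agents each taking one of $\delta$ states, the micro-level configuration space is $\Sigma = \{1,\dots,\delta\}^N$ with $|\Sigma| = \delta^N$, and the micro dynamics is a random walk on a regular graph over $\Sigma$; partitioning $\Sigma$ into orbits of that graph's symmetry group gives the coarse-grained (macro) process that, per the paper's claim, remains Markovian.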
Following the remarkable success of the AlphaGo series, 2019 was a booming year that witnessed significant advances in multi-agent reinforcement learning (MARL) techniques. MARL corresponds to the learning problem in a multi-agent system in which multiple …