We study adaptive learning in a typical p-player game. The payoffs of the game are randomly generated and then held fixed. The strategies of the players evolve through time as the players learn. The trajectories in the strategy space display a range of qualitatively different behaviors, with attractors that include unique fixed points, multiple fixed points, limit cycles and chaos. In the limit where the game is complicated, in the sense that the players can take many possible actions, we use a generating-functional approach to establish the parameter range in which learning dynamics converge to a stable fixed point. The size of this region goes to zero as the number of players goes to infinity, suggesting that complex non-equilibrium behavior, exemplified by chaos, may be the norm for complicated games with many players.
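The abstract does not spell out the learning rule; a sketch consistent with this literature is an experience-weighted-attraction (replicator-like) map for two players whose Gaussian payoff matrices are drawn once and then held fixed. All parameter values below (memory loss `alpha`, intensity of choice `beta`, the number of actions) are illustrative assumptions, not the authors' setup.

```python
import numpy as np

def random_game(n_actions, seed=0):
    """Two Gaussian payoff matrices, drawn once and then held fixed."""
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((n_actions, n_actions))
    B = rng.standard_normal((n_actions, n_actions))
    return A, B

def ewa_step(x, y, A, B, alpha=0.1, beta=2.0):
    """One step of an experience-weighted-attraction style map.

    alpha is a memory-loss rate and beta an intensity of choice;
    alpha = 0 gives a discrete-time replicator-like update.
    """
    x_new = x ** (1.0 - alpha) * np.exp(beta * (A @ y))
    y_new = y ** (1.0 - alpha) * np.exp(beta * (B.T @ x))
    return x_new / x_new.sum(), y_new / y_new.sum()

# Iterate from uniform mixed strategies; depending on the draw and on
# (alpha, beta), the trajectory settles to a fixed point or keeps moving.
N = 20
A, B = random_game(N)
x = np.full(N, 1.0 / N)
y = np.full(N, 1.0 / N)
trajectory = []
for _ in range(2000):
    x, y = ewa_step(x, y, A, B)
    trajectory.append(x.copy())
```

Scanning `alpha` and `beta` while recording whether `trajectory` converges is one way to map out the stable region numerically.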
We discuss similarities and differences between systems of many interacting players maximizing their individual payoffs and particles minimizing their interaction energy. We analyze the long-run behavior of stochastic dynamics of many interacting agents in spatial and adaptive population games. We review results concerning the effect of the number of players and the noise level on the stochastic stability of Nash equilibria. In particular, we present examples of games in which, as the number of players or the noise level increases, a population undergoes a transition between its equilibria.
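A standard setting for stochastic stability of this kind is log-linear (logit) learning in a finite population playing a 2x2 coordination game: the noise level (inverse of `beta`) controls how often the population jumps between the two Nash equilibria. The payoff matrix, population size, and `beta` below are illustrative choices, not taken from the review.

```python
import numpy as np

def logit_choice(payoffs, beta, rng):
    """Noisy best response: pick action a with probability ~ exp(beta * payoff[a])."""
    p = np.exp(beta * (payoffs - payoffs.max()))
    p /= p.sum()
    return rng.choice(len(p), p=p)

# 2x2 coordination game: both (0,0) and (1,1) are Nash equilibria;
# (0,0) has the higher payoff, but (1,1) is risk-dominant and is the
# stochastically stable state for small noise (large beta).
G = np.array([[4.0, 0.0],
              [3.0, 2.0]])          # rows: own action, columns: opponent action
rng = np.random.default_rng(1)
n_players, beta, steps = 50, 2.0, 5000
actions = rng.integers(0, 2, n_players)
count_1 = 0
for t in range(steps):
    i = rng.integers(n_players)
    # expected payoff of each action against the rest of the population
    freq = np.bincount(np.delete(actions, i), minlength=2) / (n_players - 1)
    actions[i] = logit_choice(G @ freq, beta, rng)
    count_1 += (actions.mean() > 0.5)
```

The fraction `count_1 / steps` of time spent near the risk-dominant equilibrium can then be tracked as `beta` or `n_players` is varied, which is exactly the kind of transition the review describes.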
As the human brain develops, it increasingly supports coordinated control of neural activity. The mechanism by which white matter evolves to support this coordination is not well understood. We use a network representation of diffusion imaging data from 882 youth ages 8 to 22 to show that white matter connectivity becomes increasingly optimized for a diverse range of predicted dynamics in development. Notably, stable controllers in subcortical areas are negatively related to cognitive performance. Investigating structural mechanisms supporting these changes, we simulate network evolution with a set of growth rules. We find that all brain networks are structured in a manner highly optimized for network control, with distinct control mechanisms predicted in children versus older youth. We demonstrate that our results cannot be simply explained by changes in network modularity. This work reveals a possible mechanism of human brain development that preferentially optimizes dynamic network control over static network architecture.
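The abstract does not detail the control-theoretic framework; a common metric in this line of work is per-node average controllability of a linear dynamical model on the structural network, i.e. the trace of the controllability Gramian when input is injected at a single node. The normalization, horizon, and toy network below are illustrative assumptions.

```python
import numpy as np

def average_controllability(A, horizon=200):
    """Per-node average controllability for x(t+1) = A x(t) + u(t) e_i.

    The score for node i is the trace of the truncated Gramian
    sum_k A^k e_i e_i^T (A^T)^k, i.e. the summed squared column norms
    of the powers of A.  A is rescaled to be stable first.
    """
    A = A / (1.0 + np.max(np.abs(np.linalg.eigvals(A))))
    n = A.shape[0]
    scores = np.zeros(n)
    Ak = np.eye(n)
    for _ in range(horizon):
        scores += (Ak ** 2).sum(axis=0)   # column i holds ||A^k e_i||^2
        Ak = Ak @ A
    return scores

# Toy structural network: symmetric random weights with zero diagonal.
rng = np.random.default_rng(0)
W = rng.random((30, 30))
W = np.triu(W, 1)
W = W + W.T
scores = average_controllability(W)
```

Nodes with high scores can steer the network into many easily reached states with little input energy, which is the sense of "controller" used in this literature.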
We prove that every repeated game with countably many players, finite action sets, and tail-measurable payoffs admits an $\epsilon$-equilibrium, for every $\epsilon > 0$.
The processes and mechanisms underlying the origin and maintenance of biological diversity have long been of central importance in ecology and evolution. The competitive exclusion principle states that the number of coexisting species is limited by the number of resources, or by the species similarity in resource use. Natural systems such as the extreme diversity of unicellular life in the oceans provide counterexamples. It is known that mathematical models incorporating population fluctuations can lead to violations of the exclusion principle. Here we use simple eco-evolutionary models to show that a certain type of population dynamics, boom-bust dynamics, can allow for the evolution of much larger amounts of diversity than would be expected with stable equilibrium dynamics. Boom-bust dynamics are characterized by long periods of almost exponential growth (boom) and a subsequent population crash due to competition (bust). When such ecological dynamics are incorporated into an evolutionary model that allows for adaptive diversification in continuous phenotype spaces, desynchronization of the boom-bust cycles of coexisting species can lead to the maintenance of high levels of diversity.
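A minimal caricature of boom-bust dynamics (not the authors' eco-evolutionary model) is the Ricker map at a large growth rate: the population grows near-exponentially while rare, overshoots, and crashes through competition. The growth rate and horizon below are illustrative choices.

```python
import numpy as np

def ricker(n, r):
    """Near-exponential growth while n << 1, competition-driven crash above."""
    return n * np.exp(r * (1.0 - n))

# At r = 3.0 the map is in its boom-bust (chaotic) regime: densities
# repeatedly climb toward the overshoot peak and collapse again.
r, steps = 3.0, 400
n = 0.01
traj = np.empty(steps)
for t in range(steps):
    n = ricker(n, r)
    traj[t] = n
```

In a multi-species version of such a model, each species carries its own boom-bust cycle, and shifting the cycles out of phase is what lets many species share the same resources.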
We consider two neuronal networks coupled by long-range excitatory interactions. Oscillations in the gamma frequency band are generated within each network by local inhibition. When long-range excitation is weak, these oscillations phase-lock with a phase-shift dependent on the strength of local inhibition. Increasing the strength of long-range excitation induces a transition to chaos via period-doubling or quasi-periodic scenarios. In the chaotic regime oscillatory activity undergoes fast temporal decorrelation. The generality of these dynamical properties is assessed in firing-rate models as well as in large networks of conductance-based neurons.
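A firing-rate sketch of this setup, with parameters chosen for illustration rather than taken from the paper, is two Wilson-Cowan-style E-I units in which local inhibition generates the oscillation and a long-range excitatory term of strength `c` couples the excitatory populations.

```python
import numpy as np

def f(x):
    """Sigmoidal rate function; keeps rates in (0, 1)."""
    return 1.0 / (1.0 + np.exp(-x))

def simulate(c, steps=20000, dt=0.05):
    """Two E-I firing-rate units with long-range E->E coupling of strength c."""
    E = np.array([0.1, 0.2])                     # excitatory rates of units 1, 2
    I = np.array([0.1, 0.1])                     # local inhibitory rates
    tau_e, tau_i = 1.0, 0.5
    w_ee, w_ei, w_ie, w_ii, drive = 10.0, 12.0, 10.0, 2.0, -2.0
    trace = np.empty((steps, 2))
    for t in range(steps):
        cross = c * E[::-1]                      # excitation from the other network
        E = E + dt * (-E + f(w_ee * E - w_ei * I + cross + drive)) / tau_e
        I = I + dt * (-I + f(w_ie * E - w_ii * I + drive)) / tau_i
        trace[t] = E
    return trace

trace = simulate(c=1.0)
```

Sweeping `c` and comparing the two columns of `trace` (e.g. via their cross-correlation) is one way to probe the transition from phase-locked oscillations to temporally decorrelated activity as the long-range excitation grows.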