We present a comprehensive study of the utility function of the minority game in its efficient regime. We develop an effective description of the state of the game. For the payoff function $g(x)=\mathrm{sgn}(x)$ we explicitly represent the game as a Markov process and prove the finiteness of the number of states. We also demonstrate the boundedness of the utility function. Using these facts we can explain all interesting observable features of the aggregated demand: the appearance of strong fluctuations, their periodicity, and the existence of preferred levels. For another payoff, $g(x)=x$, the number of states is still finite and the utility remains bounded, but the number of states cannot be reduced and the probabilities of states are not calculated. However, using properties of the utility and analysing the game in terms of de Bruijn graphs, we can also explain the distinct peaks of demand and their frequencies.
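To make the setting concrete, here is a minimal simulation of the standard minority game with the payoff $g(x)=\mathrm{sgn}(x)$. It is an illustrative sketch of the generic model, not the authors' reduced Markov construction; all parameter values are assumptions.

```python
import numpy as np

def minority_game(n_agents=101, memory=3, n_strategies=2, steps=500, seed=0):
    """Simulate a basic minority game with payoff g(x) = sgn(x).

    Each agent holds `n_strategies` fixed lookup tables mapping the last
    `memory` minority outcomes to an action in {-1, +1}; at each step it
    plays its currently best-scoring table, and every table's virtual
    score is updated by -action * sgn(A), where A is the aggregated demand.
    """
    rng = np.random.default_rng(seed)
    n_hist = 2 ** memory
    # strategies[i, s, h] = action of agent i's strategy s given history h
    strategies = rng.choice([-1, 1], size=(n_agents, n_strategies, n_hist))
    scores = np.zeros((n_agents, n_strategies))
    history = int(rng.integers(n_hist))     # past outcomes encoded as a bit string
    demand = np.empty(steps)
    for t in range(steps):
        best = scores.argmax(axis=1)        # each agent's best strategy so far
        actions = strategies[np.arange(n_agents), best, history]
        A = actions.sum()                   # aggregated demand
        demand[t] = A
        # reward strategies that would have placed the agent in the minority
        scores -= strategies[:, :, history] * np.sign(A)
        # append the minority outcome (0 or 1) to the history bit string
        winner = 1 if A < 0 else 0
        history = ((history << 1) | winner) % n_hist
    return demand

demand = minority_game()
print(demand.std())  # size of the fluctuations of the aggregated demand
```

Plotting `demand` over time for small `memory` exhibits the strong fluctuations and temporal patterns discussed in the abstract.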
We study a variation of the minority game. There are N agents. Each has to choose between one of two alternatives every day, and there is a reward for each member of the smaller group. The agents cannot communicate with each other, but try to guess the choice others will make, based only on the past history of the number of people choosing the two alternatives. We describe a simple probabilistic strategy using which the agents, acting independently, can still maximize the average number of people benefiting every day. The strategy leads to a very efficient utilization of resources, and the average deviation from the maximum possible can be made of order $N^{\epsilon}$, for any $\epsilon > 0$. We also show that a single agent does not expect to gain by not following the strategy.
A generalization of the minority game to more than one market is considered. At each time step every agent chooses one of its strategies and acts on the market related to this strategy. If the payoff function allows for strong fluctuations of the utility, then market occupancies become inhomogeneous, with preference given to the market where the fluctuations occurred first. There exists a critical size of the agent population above which agents on the bigger market behave collectively. In this regime there always exists a history of decisions for which all agents on the bigger market react identically.
The existence of a phase transition with diverging susceptibility in batch Minority Games (MGs) is the mark of informationally efficient regimes and is linked to the specifics of the agents' learning rules. Here we study how the standard scenario is affected in a mixed-population game in which agents with the `optimal' learning rule (i.e. the one leading to efficiency) coexist with agents whose adaptive dynamics is sub-optimal. Our generic finding is that any non-vanishing intensive fraction of optimal agents guarantees the existence of an efficient phase. Specifically, we calculate the dependence of the critical point on the fraction $q$ of `optimal' agents, focusing our analysis on three cases: MGs with market-impact correction, grand-canonical MGs and MGs with heterogeneous comfort levels.
We study minority games in the efficient regime. By incorporating the utility function and aggregating agents with similar strategies, we develop an effective mesoscale notion of the state of the game. Using this approach, the game can be represented as a Markov process with a substantially reduced number of states and explicitly computable probabilities. For any payoff, the finiteness of the number of states is proved. Interesting features of an extensive random variable called the aggregated demand, viz. its strong inhomogeneity and the presence of patterns in time, can then be easily interpreted. Using Markov theory and the quenched-disorder approach, we can explain important macroscopic characteristics of the game: the behavior of the variance per capita and the predictability of the aggregated demand. We prove that in the case of a linear payoff, many attractors in the state space are possible.
We study adaptive learning in a typical p-player game. The payoffs of the game are randomly generated and then held fixed. The strategies of the players evolve through time as the players learn. The trajectories in the strategy space display a range of qualitatively different behaviors, with attractors that include unique fixed points, multiple fixed points, limit cycles and chaos. In the limit where the game is complicated, in the sense that the players can take many possible actions, we use a generating-functional approach to establish the parameter range in which the learning dynamics converge to a stable fixed point. The size of this region goes to zero as the number of players goes to infinity, suggesting that complex non-equilibrium behavior, exemplified by chaos, may be the norm for complicated games with many players.
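A two-player special case can be simulated directly. The sketch below uses discounted action values with a softmax (logit) response, a common model of adaptive learning in this literature; the correlation parameter, learning rates and the fixed-point criterion are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

def learn(n_actions=20, gamma=-0.5, steps=2000, beta=5.0, lam=0.1, seed=2):
    """Sketch of adaptive learning on a fixed random two-player game.

    Payoff matrices are Gaussian, with correlation `gamma` between
    a[i, j] and b[j, i] (gamma = -1 is zero-sum).  Each player keeps
    exponentially discounted action values and plays a softmax mixed
    strategy.  Returns the mean late-time strategy drift: near zero at
    a stable fixed point, large on a limit cycle or chaotic attractor.
    """
    rng = np.random.default_rng(seed)
    z1 = rng.standard_normal((n_actions, n_actions))
    z2 = rng.standard_normal((n_actions, n_actions))
    a = z1
    b = gamma * z1.T + np.sqrt(1 - gamma**2) * z2.T  # corr(a[i,j], b[j,i]) = gamma
    qa = np.zeros(n_actions)
    qb = np.zeros(n_actions)
    traj = []
    for _ in range(steps):
        x = np.exp(beta * qa); x /= x.sum()   # player 1's mixed strategy
        y = np.exp(beta * qb); y /= y.sum()   # player 2's mixed strategy
        qa = (1 - lam) * qa + lam * (a @ y)   # discounted action values
        qb = (1 - lam) * qb + lam * (b @ x)
        traj.append(x.copy())
    return np.linalg.norm(np.diff(traj[-100:], axis=0), axis=1).mean()

drift = learn()
print(drift)
```

Sweeping `gamma` and the memory-loss rate `lam` reproduces the qualitative picture of the abstract: some parameter regions settle into fixed points, others keep moving indefinitely.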