We present a comprehensive study of the utility function of the minority game in its efficient regime. We develop an effective description of the state of the game. For the payoff function $g(x)=\mathrm{sgn}(x)$ we explicitly represent the game as a Markov process and prove that the number of states is finite. We also demonstrate the boundedness of the utility function. Using these facts we can explain all interesting observable features of the aggregated demand: the appearance of strong fluctuations, their periodicity, and the existence of preferred levels. For another payoff, $g(x)=x$, the number of states is still finite and the utility remains bounded, but the number of states cannot be reduced and the probabilities of the states are not calculated. However, using properties of the utility and analysing the game in terms of de Bruijn graphs, we can also explain the distinct peaks of the demand and their frequencies.
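The setup described above (agents repeatedly picking a side, the minority side being rewarded via $g(x)=\mathrm{sgn}(x)$, and the aggregated demand exhibiting strong fluctuations) can be illustrated with a minimal simulation sketch. This is not the authors' code; the agent count, memory length, and strategy count below are illustrative assumptions, and strategies are the standard fixed lookup tables from the recent history of winning sides to an action in $\{-1,+1\}$.

```python
import random

def play_minority_game(n_agents=101, memory=3, n_strategies=2,
                       n_steps=200, seed=0):
    """Simulate a basic minority game with g(x) = sgn(x) payoffs.

    Each agent holds `n_strategies` fixed lookup tables mapping the last
    `memory` winning sides to an action in {-1, +1}, and always plays its
    currently highest-scoring strategy (virtual scoring of all strategies).
    Returns the time series of the aggregated demand A(t).
    """
    rng = random.Random(seed)
    n_histories = 2 ** memory
    # strategies[i][s][h] = action of agent i's strategy s given history h
    strategies = [[[rng.choice((-1, 1)) for _ in range(n_histories)]
                   for _ in range(n_strategies)]
                  for _ in range(n_agents)]
    scores = [[0] * n_strategies for _ in range(n_agents)]
    history = rng.randrange(n_histories)
    demand = []
    for _ in range(n_steps):
        actions = []
        for i in range(n_agents):
            best = max(range(n_strategies), key=lambda s: scores[i][s])
            actions.append(strategies[i][best][history])
        a_total = sum(actions)           # aggregated demand A(t)
        demand.append(a_total)
        sgn = (a_total > 0) - (a_total < 0)
        # virtual payoff: every strategy is scored as if it had been played;
        # agents on the minority side (opposite to sgn(A)) gain a point
        for i in range(n_agents):
            for s in range(n_strategies):
                scores[i][s] -= strategies[i][s][history] * sgn
        winning_bit = 1 if a_total < 0 else 0   # minority side wins
        history = ((history << 1) | winning_bit) % n_histories
    return demand
```

With an odd number of agents the demand is never zero, and plotting the returned series shows the kind of large, structured fluctuations the abstract refers to; the pair (history, strategy scores) is what evolves as a Markov chain in the sgn-payoff analysis.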
We study a variation of the minority game. There are N agents. Each has to choose one of two alternatives every day, and a reward is given to each member of the smaller group. The agents cannot communicate with each other, but try to guess the c
A generalization of the minority game to more than one market is considered. At each time step every agent chooses one of its strategies and acts on the market related to this strategy. If the payoff function allows for strong fluctuations of the utility th
The existence of a phase transition with diverging susceptibility in batch Minority Games (MGs) is the mark of informationally efficient regimes and is linked to the specifics of the agents' learning rules. Here we study how the standard scenario is a
We study minority games in the efficient regime. By incorporating the utility function and aggregating agents with similar strategies we develop an effective mesoscale notion of the state of the game. Using this approach, the game can be represented as a Mar
We study adaptive learning in a typical p-player game. The payoffs of the games are randomly generated and then held fixed. The strategies of the players evolve through time as the players learn. The trajectories in the strategy space display a range