We study minority games in the efficient regime. By incorporating the utility function and aggregating agents with similar strategies, we develop an effective mesoscale notion of the state of the game. In this approach the game can be represented as a Markov process with a substantially reduced number of states and explicitly computable transition probabilities. For any payoff function, the number of such states is proved to be finite. Interesting features of the extensive random variable called the aggregated demand, namely its strong inhomogeneity and the presence of temporal patterns, can then be easily interpreted. Using Markov theory and the quenched-disorder approach, we explain important macroscopic characteristics of the game: the behavior of the variance per capita and the predictability of the aggregated demand. We prove that in the case of a linear payoff many attractors in the state space are possible.
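For reference, the macroscopic quantities named above can be written in the standard Minority Game notation; the definitions below follow the common conventions of the MG literature and are not taken verbatim from this paper:
$$
A(t)=\sum_{i=1}^{N} a_i(t), \qquad
U_{i,s}(t+1)=U_{i,s}(t)-a_{i,s}^{\mu(t)}\, g\!\big(A(t)\big),
$$
$$
\frac{\sigma^2}{N}=\frac{\langle A^2\rangle-\langle A\rangle^2}{N}, \qquad
H=\frac{1}{P}\sum_{\mu=1}^{P}\langle A\mid\mu\rangle^2,
$$
where $a_i(t)\in\{-1,+1\}$ is the action of agent $i$, $\mu(t)$ is the public information string taking one of $P=2^M$ values, $g$ is the payoff function (e.g. $g(x)=\operatorname{sgn}(x)$ or $g(x)=x$), $\sigma^2/N$ is the variance per capita of the aggregated demand $A(t)$, and $H\to 0$ marks the informationally efficient regime.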
We present a comprehensive study of the utility function of the minority game in its efficient regime. We develop an effective description of the state of the game. For the payoff function $g(x)=\operatorname{sgn}(x)$ we explicitly represent the game as a Markov process.
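The quantities $\sigma^2/N$ and $H$ can be estimated directly from a simulation. The sketch below in Python follows the standard Challet–Zhang setup ($N$ agents with memory $M$ and $S=2$ strategies each, sign payoff) rather than the mesoscale Markov construction described above; all parameter values and variable names are illustrative assumptions.

# Minimal Minority Game simulation with the sign payoff g(x) = sgn(x).
# Standard MG conventions; parameters are illustrative (alpha = 2^M / N << 1
# puts the game in the efficient regime). This does NOT reproduce the paper's
# mesoscale state aggregation.
import numpy as np

rng = np.random.default_rng(0)

N = 101          # number of agents (odd, so a minority always exists)
M = 3            # memory length; P = 2^M histories, alpha = P/N ~ 0.08
S = 2            # strategies per agent
T = 20000        # number of rounds

P = 2 ** M
# Each strategy is a lookup table: history index -> action in {-1, +1}
strategies = rng.choice([-1, 1], size=(N, S, P))
scores = np.zeros((N, S))          # virtual scores (utilities) of the strategies
history = rng.integers(P)          # encoded string of the last M minority signs

A_series = np.empty(T)             # aggregated demand A(t)
A_by_history = np.zeros(P)         # running sum of A conditioned on the history
counts = np.zeros(P)

for t in range(T):
    # every agent plays the prediction of its currently best-scoring strategy
    best = scores.argmax(axis=1)
    actions = strategies[np.arange(N), best, history]
    A = actions.sum()
    A_series[t] = A
    A_by_history[history] += A
    counts[history] += 1

    # sign payoff: a strategy is rewarded if it recommended the minority side
    minority_sign = -np.sign(A)
    scores += strategies[:, :, history] * minority_sign

    # update the public information string (last M minority signs)
    winning_bit = 1 if minority_sign > 0 else 0
    history = ((history << 1) | winning_bit) % P

sigma2_per_capita = A_series.var() / N
# predictability H = (1/P) sum_mu <A|mu>^2; H close to 0 signals the efficient phase
H = np.mean((A_by_history / np.maximum(counts, 1)) ** 2)
print(f"sigma^2/N = {sigma2_per_capita:.3f}, H = {H:.3f}")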