Mean field games with a major player were introduced in (Huang, 2010) within a linear-quadratic (LQ) modeling framework. Owing to the rich structure of major-minor player models, the past ten years have seen significant research effort devoted to different solution notions and analytical techniques. For LQ models, we address the relation between three solution frameworks that were developed from different starting ideas: the Nash certainty equivalence (NCE) approach of (Huang, 2010), master equations, and asymptotic solvability. We establish the equivalence of the three.
This paper studies an asymptotic solvability problem for linear quadratic (LQ) mean field games with controlled diffusions and indefinite weights for the state and control in the costs. We employ a rescaling approach to derive a low dimensional Riccati ordinary differential equation (ODE) system, which characterizes a necessary and sufficient condition for asymptotic solvability. The rescaling technique is further used for performance estimates, establishing an $O(1/N)$-Nash equilibrium for the obtained decentralized strategies.
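To make the Riccati-ODE characterization concrete, here is a minimal sketch for a standard scalar LQ control problem (the coefficients and the one-dimensional setting are illustrative choices of ours, not the paper's major-minor system with indefinite weights). It integrates the backward Riccati ODE by reversing time, using SciPy:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative scalar LQ data (not from the paper):
# dynamics dx = (a x + b u) dt, cost = int_0^T (q x^2 + r u^2) dt + qT x(T)^2
a, b, q, r, qT, T = 1.0, 1.0, 1.0, 1.0, 0.5, 2.0

def riccati_rhs(s, P):
    # Backward Riccati ODE: -dP/dt = 2 a P - (b^2 / r) P^2 + q, P(T) = qT.
    # In reversed time s = T - t it becomes dP/ds = 2 a P - (b^2 / r) P^2 + q,
    # which we can integrate forward from s = 0 (i.e. t = T).
    return 2 * a * P - (b**2 / r) * P**2 + q

sol = solve_ivp(riccati_rhs, (0.0, T), [qT], rtol=1e-8, atol=1e-10)
P0 = sol.y[0, -1]  # P at s = T, i.e. at original time t = 0
print(f"P(0) = {P0:.4f}")
```

The solution flows monotonically from the terminal condition toward the stabilizing root of the algebraic Riccati equation, here $1+\sqrt{2}$.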
For a mean field game model with a major player and infinitely many minor players, we characterize a notion of Nash equilibrium via a system of so-called master equations, namely a system of nonlinear transport equations in the space of measures. Then, for games with a finite number N of minor players and a major player, we prove that the solution of the corresponding Nash system converges to the solution of the system of master equations as N tends to infinity.
This note is concerned with a modeling question arising from mean field games theory. We show how to model mean field games involving a major player that has a strategic advantage, while allowing only closed-loop Markovian strategies for all the players. We illustrate this modeling approach through three examples.
We consider a general-sum N-player linear-quadratic game with stochastic dynamics over a finite horizon and prove the global convergence of the natural policy gradient method to the Nash equilibrium. Proving convergence of the method requires a certain amount of noise in the system: we give a condition guaranteeing convergence, essentially a lower bound on the covariance of the noise in terms of the model parameters. We illustrate our results with numerical experiments showing that even in situations where the policy gradient method may not converge in the deterministic setting, the addition of noise leads to convergence.
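As a toy illustration of gradient methods on linear feedback policies, the sketch below optimizes a constant gain for a single-agent scalar LQ problem with additive noise. It uses plain finite-difference gradient descent rather than the paper's natural policy gradient, a single player rather than N, and parameter values assumed purely for illustration:

```python
import numpy as np

# Illustrative scalar problem: x_{t+1} = a x_t + b u_t + w_t with
# Var(w_t) = sigma2, policy u_t = -K x_t, cost E sum_t (q x_t^2 + r u_t^2).
a, b, q, r, sigma2 = 0.9, 0.5, 1.0, 0.1, 0.2
T, m0 = 20, 1.0  # horizon and initial second moment E[x_0^2]

def cost(K):
    """Exact expected cost of the linear policy u_t = -K x_t,
    via the second-moment recursion m_{t+1} = (a - b K)^2 m_t + sigma2."""
    m, J = m0, 0.0
    for _ in range(T):
        J += (q + r * K**2) * m
        m = (a - b * K) ** 2 * m + sigma2
    return J

# Plain gradient descent on the gain K with a central finite-difference
# gradient (a stand-in for the policy gradient; not the paper's method).
K, lr, eps = 0.0, 0.01, 1e-5
for _ in range(2000):
    grad = (cost(K + eps) - cost(K - eps)) / (2 * eps)
    K -= lr * grad

# Sanity check against a brute-force grid search over gains.
K_best = min(np.linspace(-1.0, 3.0, 4001), key=cost)
print(f"learned K = {K:.3f}, grid-search K = {K_best:.3f}")
```

Because the closed-loop second moment is computed in closed form, the example isolates the optimization aspect; the noise variance `sigma2` keeps the cost landscape informative away from the origin, loosely echoing the role of noise in the paper's convergence condition.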
We study the asymptotic organization among many optimizing individuals interacting in a suitable moderate way. We justify this limiting game by proving that its solution provides approximate Nash equilibria for large but finite player games. This proof depends upon the derivation of a law of large numbers for the empirical processes in the limit as the number of players tends to infinity. Because it is of independent interest, we prove this result in full detail. We characterize the solutions of the limiting game via a verification argument.