We present an example of symmetric ergodic $N$-player differential games, played in memory strategies on the positions of the players, for which the limit set, as $N\to+\infty$, of Nash equilibrium payoffs is large, although the game has a single mean field game equilibrium. This example stands in sharp contrast with a result by Lacker [23] for finite-horizon problems.
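For orientation, the ergodic payoff of player $i$ in such an $N$-player game is typically a long-time average of a running cost depending on the player's own state and the empirical measure of the other players; the notation below ($X^i_t$, $\alpha^i_t$, $\ell$, $m^{N,i}_t$) is generic and is not taken from the paper:
\[
J_i(\alpha^1,\dots,\alpha^N) \;=\; \limsup_{T\to+\infty} \frac{1}{T}\,\mathbb{E}\!\left[\int_0^T \ell\big(X^i_t,\alpha^i_t,m^{N,i}_t\big)\,dt\right],
\qquad
m^{N,i}_t \;=\; \frac{1}{N-1}\sum_{j\neq i}\delta_{X^j_t}.
\]
The result above states that the set of payoff vectors $(J_1,\dots,J_N)$ achievable in Nash equilibrium does not collapse, as $N\to+\infty$, onto the payoff of the single mean field game equilibrium.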
The goal of this paper is to study the long-time behavior of solutions of first-order mean field game (MFG) systems with a control on the acceleration. The main issue is the lack of small-time controllability of the problem, which prevents defining the associated ergodic mean field game problem in the standard way. To overcome this issue, we first study the long-time average of optimal control problems with control on the acceleration: we prove that the time average of the value function converges to an ergodic constant and represent this ergodic constant as the minimum of a Lagrangian over a suitable class of closed probability measures. This characterization leads us to define the ergodic MFG problem as a fixed-point problem on the set of closed probability measures. We then show that this ergodic MFG problem has at least one solution, that the associated ergodic constant is unique under the standard monotonicity assumption, and that the time average of the value function of the time-dependent MFG problem with control on the acceleration converges to this ergodic constant.
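As a hedged sketch of this characterization, in generic notation that may differ from the paper's: for acceleration-controlled dynamics the state is a position-velocity pair, and the ergodic constant is obtained by minimizing the Lagrangian over closed measures on the state-control space:
\[
\dot x_t = v_t,\qquad \dot v_t = \alpha_t,
\qquad
\bar\lambda \;=\; \min_{\mu\in\mathcal{C}} \int L(x,v,\alpha)\,d\mu(x,v,\alpha),
\]
where $\mathcal{C}$ denotes a class of closed probability measures, i.e. measures $\mu$ satisfying $\int \big(v\cdot D_x\varphi(x,v) + \alpha\cdot D_v\varphi(x,v)\big)\,d\mu = 0$ for every smooth test function $\varphi$. In the ergodic MFG problem described above, the Lagrangian additionally depends on the measure arising from the fixed-point condition.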
Mean field games are concerned with the limit of large-population stochastic differential games in which the agents interact through their empirical distribution. In the classical setting, the number of players is large but fixed throughout the game. However, in various applications, such as population dynamics or economic growth, the number of players can vary across time, which may lead to different Nash equilibria. For this reason, we introduce a branching mechanism in the population of agents and obtain a variation on the mean field game problem. As a first step, we study a simple model using a PDE approach to illustrate the main differences with the classical setting. We prove existence of a solution and show that it provides an approximate Nash equilibrium for large-population games. We also present a numerical example for a linear-quadratic model. We then study the problem in a general setting by a probabilistic approach, based upon the relaxed formulation of stochastic control problems, which allows us to obtain a general existence result.
We propose and investigate a general class of discrete time and finite state space mean field game (MFG) problems with potential structure. Our model incorporates interactions through a congestion term and a price variable. It also allows hard constraints on the distribution of the agents. We analyze the connection between the MFG problem and two optimal control problems in duality. We present two families of numerical methods and detail their implementation: (i) primal-dual proximal methods (and their extension with nonlinear proximity operators), (ii) the alternating direction method of multipliers (ADMM) and a variant called ADM-G. We give some convergence results. Numerical results are provided for two examples with hard constraints.
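To make the first family of methods concrete, the following is a minimal Python sketch of a primal-dual proximal (Chambolle-Pock type) iteration on a small convex model problem, min over x of 0.5*||x - z||^2 + ||D x||_1 with D a finite-difference operator. This is only a generic illustration of such iterations; it is not the paper's MFG discretization, and all names (D, z, tau, sigma) are placeholders chosen for the example.

    import numpy as np

    def chambolle_pock(z, n_iter=500):
        n = z.size
        # forward-difference operator D (shape (n-1, n)); operator norm ||D|| <= 2
        D = np.diff(np.eye(n), axis=0)
        L = 2.0
        tau = sigma = 0.9 / L            # step sizes with tau * sigma * ||D||^2 < 1
        x = np.zeros(n)
        x_bar = x.copy()
        y = np.zeros(n - 1)
        for _ in range(n_iter):
            # dual step: prox of F*(y) = indicator{ |y|_inf <= 1 } is a clipping
            y = np.clip(y + sigma * D @ x_bar, -1.0, 1.0)
            # primal step: prox of G(x) = 0.5 * ||x - z||^2
            x_new = (x - tau * D.T @ y + tau * z) / (1.0 + tau)
            # extrapolation (over-relaxation) step
            x_bar = 2.0 * x_new - x
            x = x_new
        return x

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        z = np.repeat([0.0, 1.0], 25) + 0.1 * rng.standard_normal(50)
        print(chambolle_pock(z).round(2))   # approximately piecewise-constant output

In the potential MFG setting, the roles of x and y would be played by the discretized distribution/control variables and the dual (price/congestion) variables, and the proximal operators would be those of the corresponding cost and constraint terms.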
We study the asymptotic organization among many optimizing individuals interacting in a suitably moderate way. We justify this limiting game by proving that its solution provides approximate Nash equilibria for large but finite player games. This proof depends upon the derivation of a law of large numbers for the empirical processes in the limit as the number of players tends to infinity. Because it is of independent interest, we prove this result in full detail. We characterize the solutions of the limiting game via a verification argument.
This paper proposes an efficient computational framework for longitudinal velocity control of a large number of autonomous vehicles (AVs) and develops a traffic flow theory for AVs. Instead of hypothesizing explicitly how AVs drive, our goal is to design future AVs as rational, utility-optimizing agents that continuously select an optimal velocity over a planning horizon. With a large number of interacting AVs, this design problem can become computationally intractable. This paper aims to tackle this challenge by employing a mean field approximation and deriving a mean field game (MFG) as the limiting differential game with an infinite number of agents. The proposed micro-macro model allows one to define individuals on a microscopic level as utility-optimizing agents while translating rich microscopic behaviors into macroscopic models. Unlike existing studies on the application of MFG to traffic flow models, the present study offers a systematic framework for applying MFG to autonomous vehicle velocity control. The MFG-based AV controller is shown to mitigate traffic jams faster than the LWR-based controller. MFG also embodies classical traffic flow models with a behavioral interpretation, thereby providing a new traffic flow theory for AVs.
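As a hedged illustration of the kind of coupled system such an MFG velocity controller leads to (generic notation, not necessarily the exact model of the paper): each AV selects its velocity $u$ to minimize a running cost $\ell(u,\rho)$ depending on the traffic density $\rho$, giving a backward Hamilton-Jacobi-Bellman equation coupled with a forward continuity equation,
\[
\partial_t V + \min_{u}\big\{\, u\,\partial_x V + \ell(u,\rho)\,\big\} = 0,
\qquad
\partial_t \rho + \partial_x\big(\rho\,u^*\big) = 0,
\]
where $u^*(x,t)$ is the velocity attaining the minimum in the first equation. The LWR model, by contrast, prescribes the velocity directly as a function of the density, $u = U(\rho)$, rather than obtaining it from an optimization over a planning horizon.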