The aim of this paper is to study the long time behavior of solutions to deterministic mean field games systems on Euclidean space. This problem was addressed on the torus ${\mathbb T}^n$ in [P. Cardaliaguet, {\it Long time average of first order mean field games and weak KAM theory}, Dyn. Games Appl. 3 (2013), 473-488], where solutions are shown to converge to the solution of a certain ergodic mean field games system on ${\mathbb T}^n$. By adapting the approach in [A. Fathi, E. Maderna, {\it Weak KAM theorem on non compact manifolds}, NoDEA Nonlinear Differential Equations Appl. 14 (2007), 1-27], we identify structural conditions on the Lagrangian under which the corresponding ergodic system can be solved in $\mathbb{R}^{n}$. Then we show that time-dependent solutions converge to the solution of such a stationary system on all compact subsets of the whole space.
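For orientation, a minimal sketch of the two systems involved, written with notation assumed here (Hamiltonian $H$, coupling $F$, terminal cost $G$, initial density $m_0$) rather than taken verbatim from the paper. The time-dependent first order MFG system reads
$$ \begin{cases} -\partial_t u + H(x, Du) = F(x, m(t)) & \text{in } (0,T)\times\mathbb{R}^n,\\ \partial_t m - \operatorname{div}\big(m\, D_pH(x, Du)\big) = 0 & \text{in } (0,T)\times\mathbb{R}^n,\\ m(0) = m_0, \qquad u(T,x) = G(x, m(T)), \end{cases} $$
while its ergodic (stationary) counterpart has the triple $(\bar\lambda, \bar u, \bar m)$ as unknown, with $\bar\lambda$ the ergodic constant:
$$ \begin{cases} \bar\lambda + H(x, D\bar u) = F(x, \bar m) & \text{in } \mathbb{R}^n,\\ -\operatorname{div}\big(\bar m\, D_pH(x, D\bar u)\big) = 0 & \text{in } \mathbb{R}^n. \end{cases} $$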
The goal of this paper is to study the long time behavior of solutions of first-order mean field game (MFG) systems with a control on the acceleration. The main difficulty is the lack of small time controllability of the problem, which prevents one from defining the associated ergodic mean field game problem in the standard way. To overcome this issue, we first study the long-time average of optimal control problems with control on the acceleration: we prove that the time average of the value function converges to an ergodic constant, and we represent this ergodic constant as a minimum of a Lagrangian over a suitable class of closed probability measures. This characterization leads us to define the ergodic MFG problem as a fixed-point problem on the set of closed probability measures. We then show that this ergodic MFG problem has at least one solution, that the associated ergodic constant is unique under the standard monotonicity assumption, and that the time average of the value function of the time-dependent MFG problem with control on the acceleration converges to this ergodic constant.
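Schematically, and with notation assumed here (state $x$, velocity $v$, acceleration control $a$, Lagrangian $L$), the representation of the ergodic constant announced above takes the form
$$ \bar\lambda \;=\; \min_{\mu\,\text{closed}} \int L(x,v,a)\, d\mu(x,v,a) $$
(up to the sign convention chosen for the ergodic constant), where the minimum runs over the suitable class of closed probability measures on the state-control space introduced in the paper; the ergodic MFG problem is then posed as a fixed-point problem over this same class of measures.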
We study first order evolutive Mean Field Games where the Hamiltonian is non-coercive. This situation occurs, for instance, when some directions are forbidden to the generic player at some points. We establish the existence of a weak solution of the system via a vanishing viscosity method and, as our main result, we prove that the evolution of the population's density is the push-forward of the initial density through the flow characterized almost everywhere by the optimal trajectories of the control problem underlying the Hamilton-Jacobi equation. As preliminary steps, we need the optimal trajectories of the control problem to be unique (at least for a.e. starting point) and the optimal controls to be expressible in terms of the horizontal gradient of the value function.
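In symbols (with notation assumed here): writing $\Phi(t,x)$ for the flow defined for a.e. $x$ by following the optimal trajectory issued from $x$, the representation result reads
$$ m(t) = \Phi(t,\cdot)_{\sharp}\, m_0, \qquad\text{i.e.}\qquad \int \varphi\, dm(t) = \int \varphi\big(\Phi(t,x)\big)\, dm_0(x) \quad \text{for every test function } \varphi, $$
which is precisely where the a.e. uniqueness of optimal trajectories and the expression of the optimal controls through the horizontal gradient of the value function enter as preliminary steps.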
We study a numerical approximation of a time-dependent Mean Field Game (MFG) system with local couplings. The discretization we consider stems from a variational approach described in [Briceno-Arias, Kalise, and Silva, SIAM J. Control Optim., 2017] for the stationary problem and leads to the finite difference scheme introduced by Achdou and Capuzzo-Dolcetta in [SIAM J. Numer. Anal., 48(3):1136-1162, 2010]. In order to solve the finite dimensional variational problems, in [Briceno-Arias, Kalise, and Silva, SIAM J. Control Optim., 2017] the authors implement the primal-dual algorithm introduced by Chambolle and Pock in [J. Math. Imaging Vision, 40(1):120-145, 2011], whose core consists in iteratively solving linear systems and applying a proximity operator. We apply that method to time-dependent MFG and, for large viscosity parameters, we improve the linear system solution by replacing the direct approach used in [Briceno-Arias, Kalise, and Silva, SIAM J. Control Optim., 2017] by suitable preconditioned iterative algorithms.
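For the reader's convenience, a minimal sketch in Python of the generic Chambolle-Pock primal-dual iteration for problems of the form $\min_x G(x) + F(Kx)$; the operator, the proximal maps and the toy data below are illustrative placeholders, not the actual MFG discretization or preconditioning of the cited works.

import numpy as np

def chambolle_pock(K, prox_G, prox_Fstar, x0, tau, sigma, n_iter=500, theta=1.0):
    # Generic primal-dual iteration for min_x G(x) + F(Kx), with tau*sigma*||K||^2 < 1.
    x = x0.copy()
    x_bar = x0.copy()
    y = np.zeros(K.shape[0])
    for _ in range(n_iter):
        y = prox_Fstar(y + sigma * (K @ x_bar), sigma)   # dual step: prox of sigma*F^*
        x_new = prox_G(x - tau * (K.T @ y), tau)         # primal step: prox of tau*G
        x_bar = x_new + theta * (x_new - x)              # over-relaxation
        x = x_new
    return x

# Toy usage: 1-D total-variation-type problem min_x lam*||Dx||_1 + 0.5*||x - b||^2.
n, lam = 100, 1.0
b = np.sign(np.linspace(-1.0, 1.0, n)) + 0.1 * np.random.randn(n)   # noisy step signal
D = np.diff(np.eye(n), axis=0)                                      # forward differences
prox_G = lambda z, t: (z + t * b) / (1.0 + t)                       # prox of 0.5*||x - b||^2
prox_Fstar = lambda z, s: np.clip(z, -lam, lam)                     # projection onto {|y| <= lam}
L = np.linalg.norm(D, 2)
x = chambolle_pock(D, prox_G, prox_Fstar, b.copy(), tau=0.9 / L, sigma=0.9 / L)

In the MFG setting of the abstract, the analogue of one of the proximal steps amounts to solving a linear system at every iteration; the improvement described above consists in replacing the direct solver for that system by suitable preconditioned iterative ones when the viscosity parameter is large.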
The theory of mean field games is a tool to understand noncooperative dynamic stochastic games with a large number of players. Much of the theory has evolved under conditions ensuring uniqueness of the mean field game Nash equilibrium. However, in some situations, typically involving symmetry breaking, non-uniqueness of solutions is an essential feature. To investigate the nature of non-unique solutions, this paper focuses on the technically simple setting where players have one of two states, the dynamics are in continuous time, the game is symmetric in the players, and players are restricted to using Markov strategies. All the mean field game Nash equilibria are identified for a symmetric follow-the-crowd game. Such equilibria correspond to symmetric $\epsilon$-Nash Markov equilibria for $N$ players, with $\epsilon$ converging to zero as $N$ goes to infinity. In contrast to the mean field game, there is a unique Nash equilibrium for finite $N$. It is shown that fluid limits arising from the Nash equilibria for finite $N$ as $N$ goes to infinity are mean field game Nash equilibria, and evidence is given supporting the conjecture that such limits, among all mean field game Nash equilibria, are the ones that are stable fixed points of the mean field best response mapping.
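As a schematic illustration only (the specific transition rates and costs of the follow-the-crowd game are those of the paper and are not reproduced here): if $\rho(t)$ denotes the fraction of players in state $1$ and a Markov strategy prescribes switching rates $u_0(t)$ (from state $0$ to $1$) and $u_1(t)$ (from $1$ to $0$), the fluid/mean field dynamics of the population are
$$ \dot\rho(t) = \big(1-\rho(t)\big)\,u_0(t) \;-\; \rho(t)\,u_1(t), $$
and a mean field game Nash equilibrium is a trajectory $\rho(\cdot)$ that is a fixed point of the best response mapping, i.e. the dynamics above, driven by a best response to $\rho(\cdot)$, reproduce $\rho(\cdot)$ itself.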
We analyze a (possibly degenerate) second order mean field games system of partial differential equations. The distinguishing features of the model considered are (1) that it is not uniformly parabolic, including the first order case as a possibility, and (2) that the coupling is a local operator on the density. As a result we look for weak, not smooth, solutions. Our main result is the existence and uniqueness of suitably defined weak solutions, which are characterized as minimizers of two optimal control problems. We also show that such solutions are stable with respect to the data, so that, in particular, the degenerate case can be approximated by a uniformly parabolic (viscous) perturbation.
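For orientation, a sketch of the type of system meant here, with notation assumed (a possibly degenerate, nonnegative diffusion matrix $A(x)$, Hamiltonian $H$, local coupling $f(x,m(t,x))$, initial density $m_0$ and terminal cost $u_T$):
$$ \begin{cases} -\partial_t u - \operatorname{tr}\big(A(x)\, D^2 u\big) + H(x, Du) = f\big(x, m(t,x)\big),\\ \partial_t m - \sum_{i,j} \partial^2_{ij}\big(A_{ij}(x)\, m\big) - \operatorname{div}\big(m\, D_pH(x, Du)\big) = 0,\\ m(0) = m_0, \qquad u(T,x) = u_T(x), \end{cases} $$
where $A \equiv 0$ recovers the first order case and the coupling $f$ acts pointwise on the value of the density, which is what "local" refers to.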