In this paper, we present an extension of Uzawa's algorithm and apply it to build approximating sequences of mean field games systems. We prove that Uzawa's iterations can be used in a more general setting than the one in which they are usually employed. We then present numerical results of these iterations on discrete mean field games systems of optimal stopping, impulse control and continuous control.
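As the abstract builds on the classical Uzawa iterations, a minimal sketch on a toy equality-constrained quadratic program may help fix ideas (the specific problem, the step size `rho`, and the function name are illustrative assumptions, not taken from the paper; in the MFG application the primal step would instead solve a discrete HJB-type subproblem):

```python
def uzawa(rho=0.5, iters=100):
    """Classical Uzawa iterations on a toy problem:
    minimize 0.5*(x1^2 + x2^2) subject to x1 + x2 = 1.
    Exact solution: x1 = x2 = 0.5, multiplier lam = -0.5."""
    lam = 0.0  # Lagrange multiplier for the constraint
    x1 = x2 = 0.0
    for _ in range(iters):
        # Primal step: minimize the Lagrangian in x for fixed lam.
        # L(x, lam) = 0.5*(x1^2 + x2^2) + lam*(x1 + x2 - 1)  =>  x_i = -lam
        x1 = x2 = -lam
        # Dual ascent step on the constraint residual.
        lam += rho * (x1 + x2 - 1.0)
    return (x1, x2), lam
```

The dual step converges here whenever `rho` lies in (0, 1); the alternation of a primal minimization with a gradient ascent on the multiplier is the structure the paper generalizes.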
In this note we prove the uniqueness of solutions to a class of Mean Field Games systems subject to possibly degenerate individual noise. Our results hold for arbitrarily long time horizons and for general non-separable Hamiltonians that satisfy a so-called \emph{displacement monotonicity} condition. Ours are the first global-in-time uniqueness results, beyond the well-known Lasry-Lions monotonicity condition, for Mean Field Games systems involving non-separable Hamiltonians. The displacement monotonicity assumptions imposed on the data in fact yield not only uniqueness, but also the existence and regularity of the solutions.
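For orientation, a standard second-order Mean Field Games system of the type described above can be sketched as follows (the symbols $\sigma$, $g$, $d$ and the precise form are illustrative and not taken from the abstract; "non-separable" means that $H$ depends jointly on the momentum and the measure, and the noise is degenerate when $\sigma = 0$):

```latex
\begin{equation*}
\begin{cases}
-\partial_t u - \tfrac{\sigma^2}{2}\,\Delta u + H\bigl(x, D_x u, m(t)\bigr) = 0,
  & (t,x)\in(0,T)\times\mathbb{R}^d,\\[2pt]
\partial_t m - \tfrac{\sigma^2}{2}\,\Delta m
  - \operatorname{div}\bigl(m\, D_p H(x, D_x u, m(t))\bigr) = 0,\\[2pt]
m(0)=m_0, \qquad u(T,x)=g\bigl(x, m(T)\bigr),
\end{cases}
\end{equation*}
```

where the Hamilton-Jacobi-Bellman equation for the value function $u$ runs backward in time and the Fokker-Planck equation for the density $m$ runs forward.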
We study in this paper three aspects of Mean Field Games. The first is the case in which the dynamics of each player depend on the strategies of the other players. The second concerns the modeling of noise in discrete-space models and the formulation of the Master Equation in that case. Finally, we show how Mean Field Games reduce to agent-based models when the intertemporal preference rate goes to infinity, i.e. when the anticipation of the players vanishes.
This work establishes an equivalence between Mean Field Games and a class of compressible Navier-Stokes equations through their common connection to Hamilton-Jacobi-Bellman equations. The existence of a Nash equilibrium of the Mean Field Game, and hence the solvability of the Navier-Stokes equations, is established under a set of conditions.
This paper studies the problem of optimal stopping in a mean field game context. The notion of a mixed solution is introduced to solve the system of partial differential equations modeling this kind of problem. This notion emphasizes the fact that Nash equilibria of the game are in mixed strategies. Existence and uniqueness of such solutions are proved under general assumptions for both stationary and evolutive problems.
In the present work, we study deterministic mean field games (MFGs) with finite time horizon in which the dynamics of a generic agent is controlled by the acceleration. They are described by a system of PDEs coupling a continuity equation for the density of the distribution of states (forward in time) and a Hamilton-Jacobi (HJ) equation for the optimal value of a representative agent (backward in time). The state variable is the pair $(x, v)\in \mathbb{R}^N\times \mathbb{R}^N$, where $x$ stands for the position and $v$ stands for the velocity. The dynamics is often referred to as the double integrator. In this case, the Hamiltonian of the system is neither strictly convex nor coercive, hence the available results on MFGs cannot be applied. Moreover, we will assume that the Hamiltonian is unbounded with respect to the velocity variable $v$. We prove the existence of a weak solution of the MFG system via a vanishing viscosity method and we characterize the distribution of states as the image of the initial distribution by the flow associated with the optimal control.
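A minimal sketch of an acceleration-controlled MFG system of this kind, assuming a quadratic control cost $\tfrac12|\alpha|^2$ and couplings $F$, $G$ (illustrative choices not specified in the abstract), reads:

```latex
\begin{equation*}
\begin{cases}
-\partial_t u - v\cdot D_x u + \tfrac{1}{2}\,|D_v u|^2 = F[m(t)](x,v),
  & (x,v)\in\mathbb{R}^N\times\mathbb{R}^N,\ t\in(0,T),\\[2pt]
\partial_t m + v\cdot D_x m - \operatorname{div}_v\!\bigl(m\, D_v u\bigr) = 0,\\[2pt]
m(0)=m_0, \qquad u(T,x,v)=G[m(T)](x,v),
\end{cases}
\end{equation*}
```

corresponding to the double-integrator dynamics $\dot x = v$, $\dot v = \alpha$ with feedback control $\alpha^* = -D_v u$. Note that the associated Hamiltonian $H(x,v,p_x,p_v) = -v\cdot p_x + \tfrac12|p_v|^2$ is coercive only in $p_v$ and is linear in $p_x$, which illustrates the lack of strict convexity and coercivity mentioned above.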