Mean Field Games with state constraints are differential games with infinitely many agents, each agent facing a constraint on its state. The aim of this paper is to give a meaning to the PDE system associated with these games, the so-called Mean Field Game system with state constraints. To this end, we show a global semiconcavity property of the value function associated with optimal control problems with state constraints.
The aim of this paper is to study first order Mean Field Games subject to a linear controlled dynamics on $\mathbb{R}^{d}$. For this kind of problem, we define Nash equilibria (called Mean Field Games equilibria) as Borel probability measures on the space of admissible trajectories, and mild solutions as solutions associated with such equilibria. Moreover, we prove the existence and uniqueness of mild solutions and study their regularity: we prove Hölder regularity of Mean Field Games equilibria and fractional semiconcavity for the value function of the underlying optimal control problem. Finally, we address the PDE system associated with the Mean Field Games problem and prove that the class of mild solutions coincides with a suitable class of weak solutions.
We propose a new viewpoint on variational mean-field games with diffusion and quadratic Hamiltonian. We show the equivalence of such mean-field games with a relative entropy minimization at the level of probability measures on curves. We also address the time discretization of such problems, establish $\Gamma$-convergence results as the time step vanishes, and propose an efficient algorithm relying on this entropic interpretation as well as on the Sinkhorn scaling algorithm.
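The Sinkhorn scaling algorithm mentioned above can be sketched on a discrete entropy-regularized transport problem between two marginals. This is a minimal illustration, not the algorithm of the paper: the cost matrix, regularization strength, and iteration count are illustrative assumptions.

```python
import numpy as np

# Minimal Sinkhorn scaling sketch: alternately rescale the rows and
# columns of a Gibbs kernel so the resulting plan matches two given
# marginals. All numerical parameters here are illustrative choices.
def sinkhorn(mu, nu, C, eps=0.1, n_iter=200):
    K = np.exp(-C / eps)                 # Gibbs kernel from cost C
    u = np.ones_like(mu)
    v = np.ones_like(nu)
    for _ in range(n_iter):
        u = mu / (K @ v)                 # enforce first marginal
        v = nu / (K.T @ u)               # enforce second marginal
    return u[:, None] * K * v[None, :]   # transport plan

# Two discretized Gaussian marginals on [0, 1] with quadratic cost.
x = np.linspace(0.0, 1.0, 50)
mu = np.exp(-((x - 0.3) ** 2) / 0.01); mu /= mu.sum()
nu = np.exp(-((x - 0.7) ** 2) / 0.01); nu /= nu.sum()
C = (x[:, None] - x[None, :]) ** 2
P = sinkhorn(mu, nu, C)
print(np.abs(P.sum(axis=1) - mu).max())  # residual on the first marginal
```

Each iteration is a pair of cheap matrix-vector products, which is what makes the entropic interpretation attractive computationally.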
We analyze a (possibly degenerate) second order mean field games system of partial differential equations. The distinguishing features of the model considered are (1) that it is not uniformly parabolic, including the first order case as a possibility, and (2) that the coupling is a local operator on the density. As a result, we look for weak, not smooth, solutions. Our main result is the existence and uniqueness of suitably defined weak solutions, which are characterized as minimizers of two optimal control problems. We also show that such solutions are stable with respect to the data, so that in particular the degenerate case can be approximated by a uniformly parabolic (viscous) perturbation.
The theory of mean field games is a tool to understand noncooperative dynamic stochastic games with a large number of players. Much of the theory has evolved under conditions ensuring uniqueness of the mean field game Nash equilibrium. However, in some situations, typically involving symmetry breaking, non-uniqueness of solutions is an essential feature. To investigate the nature of non-unique solutions, this paper focuses on the technically simple setting where players have one of two states, with continuous time dynamics, the game is symmetric in the players, and players are restricted to using Markov strategies. All the mean field game Nash equilibria are identified for a symmetric follow-the-crowd game. Such equilibria correspond to symmetric $\epsilon$-Nash Markov equilibria for $N$ players with $\epsilon$ converging to zero as $N$ goes to infinity. In contrast to the mean field game, there is a unique Nash equilibrium for finite $N$. It is shown that fluid limits arising from the Nash equilibria for finite $N$ as $N$ goes to infinity are mean field game Nash equilibria, and evidence is given supporting the conjecture that such limits, among all mean field game Nash equilibria, are the ones that are stable fixed points of the mean field best response mapping.
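The role of stable fixed points of a best response mapping can be illustrated on a static toy model, which is only a caricature of the continuous-time game above: players choose state 1 with a logit response to the current fraction $m$ of the crowd in state 1. The response function and its parameter are illustrative assumptions, not taken from the paper.

```python
import math

# Toy static "follow the crowd" illustration of non-uniqueness: each
# player picks state 1 with a logit best response to the fraction m of
# players currently in state 1. The form of B and beta are assumptions.
def best_response(m, beta=4.0):
    return 1.0 / (1.0 + math.exp(-beta * (2.0 * m - 1.0)))

def fixed_point(m0, n_iter=200):
    m = m0
    for _ in range(n_iter):
        m = best_response(m)
    return m

# m = 1/2 is always a fixed point, but for beta large enough it is
# unstable: iterating from either side converges to one of two stable
# equilibria, the analogue of the stable fixed points in the conjecture.
low = fixed_point(0.49)
high = fixed_point(0.51)
print(low, high)
```

Here non-uniqueness appears exactly as symmetry breaking: the symmetric equilibrium $m = 1/2$ coexists with two asymmetric ones, and only the latter are stable under best-response iteration.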
In this paper, we develop a PDE approach to the optimal strategy of a mean field controlled stochastic system. First, we discuss mean field SDEs and the associated Fokker-Planck equations. Second, we consider a fully coupled system of forward-backward PDEs: the backward equation is the Hamilton-Jacobi-Bellman equation, while the forward one is the Fokker-Planck equation. Our main result is the existence of classical solutions of the forward-backward PDEs in the class $H^{1+\frac{1}{4},2+\frac{1}{2}}([0,T]\times\mathbb{R}^n)$, obtained using the Schauder fixed point theorem. We then use this solution to give the optimal strategy of the mean field stochastic control problem. Finally, we give an example to illustrate the role of our main result.