The goal of this paper is to study the long-time behavior of solutions of first-order mean field game (MFG) systems with a control on the acceleration. The main difficulty is the lack of small-time controllability of the problem, which prevents one from defining the associated ergodic mean field game problem in the standard way. To overcome this issue, we first study the long-time average of optimal control problems with control on the acceleration: we prove that the time average of the value function converges to an ergodic constant, and we represent this ergodic constant as the minimum of a Lagrangian over a suitable class of closed probability measures. This characterization leads us to define the ergodic MFG problem as a fixed-point problem on the set of closed probability measures. We then show that this ergodic MFG problem has at least one solution, that the associated ergodic constant is unique under the standard monotonicity assumption, and that the time average of the value function of the time-dependent MFG problem with control of acceleration converges to this ergodic constant.
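In schematic form, the two convergence statements above can be written as follows; the notation ($u^T$ for the value function of the horizon-$T$ problem, $\lambda$ for the ergodic constant, $\mathcal{C}$ for the class of closed measures) is hypothetical and the sign convention may differ from the paper's:

```latex
% Time average of the value function converging to an ergodic constant
% (up to sign conventions):
\[
  \lim_{T\to+\infty} \frac{u^T(0,x,v)}{T} \;=\; -\,\lambda .
\]
% Representation of the ergodic constant as a minimum of the Lagrangian L
% over a suitable class \mathcal{C} of closed probability measures:
\[
  \lambda \;=\; \min_{\mu \in \mathcal{C}} \int L(x,v,\alpha)\, d\mu(x,v,\alpha).
\]
```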
In this paper, we show existence and uniqueness of solutions of infinite horizon McKean-Vlasov FBSDEs using two different methods, which lead to two different sets of assumptions. We use these results to solve infinite horizon mean field type control problems and mean field games.
In the present work, we study deterministic mean field games (MFGs) with finite time horizon in which the dynamics of a generic agent is controlled by the acceleration. They are described by a system of PDEs coupling a continuity equation for the density of the distribution of states (forward in time) with a Hamilton-Jacobi (HJ) equation for the optimal value of a representative agent (backward in time). The state variable is the pair $(x, v)\in \mathbb{R}^N\times \mathbb{R}^N$, where $x$ stands for the position and $v$ for the velocity. The dynamics is often referred to as the double integrator. In this case, the Hamiltonian of the system is neither strictly convex nor coercive, hence the available results on MFGs cannot be applied. Moreover, we will assume that the Hamiltonian is unbounded w.r.t. the velocity variable $v$. We prove the existence of a weak solution of the MFG system via a vanishing viscosity method, and we characterize the distribution of states as the image of the initial distribution by the flow associated with the optimal control.
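The double-integrator dynamics mentioned above is the standard control system in which the control acts on the acceleration; in the state variables $(x,v)$ it reads (the control symbol $\alpha$ is our choice of notation, not taken from the abstract):

```latex
% State (x, v): position and velocity; control \alpha acts on the acceleration.
\[
  \dot x(t) = v(t), \qquad \dot v(t) = \alpha(t),
\]
% Equivalently, eliminating v, the control is the second derivative of the
% position, whence the name "double integrator":
\[
  \ddot x(t) = \alpha(t).
\]
```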
We present an example of symmetric ergodic $N$-player differential games, played in memory strategies on the positions of the players, for which the limit set, as $N\to +\infty$, of Nash equilibrium payoffs is large, although the game has a single mean field game equilibrium. This example is in sharp contrast with a result by Lacker [23] for finite horizon problems.
The aim of this paper is to study the long time behavior of solutions to deterministic mean field games systems on Euclidean space. This problem was addressed on the torus $\mathbb{T}^n$ in [P. Cardaliaguet, \textit{Long time average of first order mean field games and weak KAM theory}, Dyn. Games Appl. 3 (2013), 473-488], where solutions are shown to converge to the solution of a certain ergodic mean field games system on $\mathbb{T}^n$. By adapting the approach in [A. Fathi, E. Maderna, \textit{Weak KAM theorem on non compact manifolds}, NoDEA Nonlinear Differential Equations Appl. 14 (2007), 1-27], we identify structural conditions on the Lagrangian under which the corresponding ergodic system can be solved in $\mathbb{R}^{n}$. Then we show that time dependent solutions converge to the solution of such a stationary system on all compact subsets of the whole space.
We study a family of McKean-Vlasov (mean-field) type ergodic optimal control problems with linear control and a cost function that is quadratic in the control. For this class of problems we establish existence and uniqueness of an optimal control. We propose an $N$-particle Markovian optimal control problem approximating the McKean-Vlasov one, and we prove convergence in relative entropy, total variation and Wasserstein distance of the law of the former to the law of the latter as $N$ goes to infinity. We also study some McKean-Vlasov optimal control problems with singular cost functions and establish the relation of these problems to the mathematical theory of Bose-Einstein condensation.