Mean-field games with absorption are a class of games, introduced in Campi and Fischer [7], that can be viewed as natural limits of symmetric stochastic differential games with a large number of players who, interacting through a mean field, leave the game as soon as their private states hit some given boundary. In this paper, we push the study of such games further, extending their scope along two main directions. First, a direct dependence on past absorptions is introduced in the drift of the players' state dynamics. Second, the boundedness of coefficients and costs is considerably relaxed, allowing for drifts and costs with linear growth. Therefore, the mean-field interaction among the players takes place in two ways: via the empirical sub-probability measure of the surviving players and through a process representing the fraction of past absorptions over time. Moreover, relaxing the boundedness of the coefficients allows for more realistic dynamics for the players' private states. We prove existence of solutions of the mean-field game in strict as well as relaxed feedback form. Finally, we show that such solutions induce approximate Nash equilibria for the $N$-player game, with vanishing error in the mean-field limit as $N \to \infty$.
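For orientation, a schematic version of the absorbed dynamics just described (the notation is ours; the paper's precise formulation may differ) is
\[
dX_t = b\bigl(t, X_t, \mu_t, \ell_t, \alpha_t\bigr)\,dt + \sigma(t, X_t)\,dW_t,
\qquad
\tau = \inf\{t \ge 0 : X_t \notin \mathcal{O}\},
\]
\[
\mu_t(\cdot) = \mathbb{P}\bigl(X_t \in \cdot\,,\ \tau > t\bigr),
\qquad
\ell_t = 1 - \mu_t(\mathcal{O}),
\]
where $\mathcal{O}$ is the region in which players remain active, $\alpha_t$ is the control, $\mu_t$ is the sub-probability law of the representative surviving player, and $\ell_t$ is the fraction of past absorptions feeding back into the drift.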
Forcing finite state mean field games by a relevant form of common noise is a subtle issue, which has been addressed only recently. One possible way, among others, is to subject the simplex-valued dynamics of an equilibrium to a so-called Wright-Fisher noise, very much in the spirit of stochastic models in population genetics. A key feature is that such a random forcing preserves the structure of the simplex, which is nothing but, in this setting, the space of probability measures over the state space of the game. The purpose of this article is hence to elucidate the finite-player version and, accordingly, to prove that $N$-player equilibria indeed converge towards the solution of such a Wright-Fisher mean field game. While part of the analysis is made easier by the fact that the corresponding master equation has already been proved to be uniquely solvable in the presence of the common noise, it becomes more subtle than in the standard setting because the mean field interaction between the players now occurs through a weighted empirical measure. In other words, each player carries its own weight, which may hence differ from $1/N$ and which, most of all, evolves with the common noise.
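As a hedged illustration of the kind of simplex-preserving forcing involved (one standard Wright-Fisher form; the paper's exact formulation may differ), the equilibrium proportions $p_t = (p^1_t, \dots, p^d_t)$ over a $d$-point state space may be perturbed as
\[
dp^i_t = \varphi^i(p_t)\,dt + \varepsilon \sum_{j \neq i} \sqrt{p^i_t\, p^j_t}\; dW^{ij}_t,
\qquad W^{ij} = -W^{ji},
\]
with a drift $\varphi$ tangent to the simplex, $\sum_i \varphi^i \equiv 0$. The antisymmetry of the driving Brownian motions makes the noise contributions cancel in the sum $\sum_i p^i_t$, while the factor $\sqrt{p^i_t p^j_t}$ vanishes as any coordinate approaches $0$; this is the mechanism by which (under suitable conditions on $\varphi$) the random forcing preserves the simplex.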
We study the asymptotic organization among many optimizing individuals interacting in a suitably moderate way. We justify this limiting game by proving that its solution provides approximate Nash equilibria for large but finite player games. This proof depends upon the derivation of a law of large numbers for the empirical processes in the limit as the number of players tends to infinity. Because it is of independent interest, we prove this result in full detail. We characterize the solutions of the limiting game via a verification argument.
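For context, moderate interaction is usually understood as follows (a schematic description, not necessarily the paper's exact scaling): player $i$ feels the others through a mollified empirical density,
\[
\frac{1}{N}\sum_{j=1}^N V^N\bigl(X^i_t - X^j_t\bigr),
\qquad V^N(x) = \chi_N^{\,d}\, V(\chi_N x),
\]
where $V$ is a fixed probability kernel on $\mathbb{R}^d$ and $\chi_N \to \infty$ with $\chi_N^d = o(N)$, so that the interaction is more localized than in the mean-field regime (where $\chi_N$ stays constant) while each player still effectively interacts with a diverging number of neighbors.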
In this paper, we develop a PDE approach to the optimal strategy of a mean field controlled stochastic system. Firstly, we discuss mean field SDEs and the associated Fokker-Planck equations. Secondly, we consider a fully coupled system of forward-backward PDEs: the backward one is the Hamilton-Jacobi-Bellman equation, while the forward one is the Fokker-Planck equation. Our main result shows the existence of classical solutions of the forward-backward PDEs in the class $H^{1+\frac{1}{4},2+\frac{1}{2}}([0,T]\times\mathbb{R}^n)$ by means of the Schauder fixed point theorem. We then use this solution to construct the optimal strategy of the mean field stochastic control problem. Finally, we give an example to illustrate the role of our main result.
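A prototypical form of such a coupled forward-backward system (schematic; sign conventions and the precise coupling in the paper may differ) is
\[
\begin{cases}
-\partial_t u - \tfrac12 \Delta u + H\bigl(x, \nabla u, m(t,\cdot)\bigr) = 0, & u(T,x) = g\bigl(x, m(T,\cdot)\bigr),\\
\partial_t m - \tfrac12 \Delta m - \operatorname{div}\bigl(m\, \partial_p H(x, \nabla u, m)\bigr) = 0, & m(0,\cdot) = m_0,
\end{cases}
\]
where the backward (Hamilton-Jacobi-Bellman) equation determines the value function $u$ and the forward (Fokker-Planck) equation the flow of laws $m$ of the controlled state; once the system is solved, an optimal feedback can be read off from the minimizer defining the Hamiltonian, e.g. $\alpha^*(t,x) = -\partial_p H\bigl(x, \nabla u(t,x), m(t,\cdot)\bigr)$.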
We study Nash equilibria for a sequence of symmetric $N$-player stochastic games of finite-fuel capacity expansion with singular controls and their mean-field game (MFG) counterpart. We construct a solution of the MFG via a simple iterative scheme that produces an optimal control in terms of a Skorokhod reflection at a (state-dependent) surface that splits the state space into action and inaction regions. We then show that a solution of the MFG of capacity expansion induces approximate Nash equilibria for the $N$-player games with approximation error $\varepsilon$ going to zero as $N$ tends to infinity. Our analysis relies entirely on probabilistic methods and extends the well-known connection between singular stochastic control and optimal stopping to a mean-field framework.
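The classical connection invoked in the last sentence can be sketched as follows (a schematic, stationary, one-dimensional version with a single upward-controlled state, not the paper's exact setting): with running profit $\pi$, discount rate $r$, proportional cost $c$ per unit of control, and generator $\mathcal{L}$ of the uncontrolled diffusion, the value function $V$ of the singular control problem formally satisfies the variational inequality
\[
\max\Bigl\{ (\mathcal{L} - r)V(x) + \pi(x),\ \partial_x V(x) - c \Bigr\} = 0,
\]
so that the state space splits into the inaction region $\{\partial_x V < c\}$ and the action region $\{\partial_x V = c\}$, on whose boundary the optimally controlled state is reflected; under regularity assumptions, $v := \partial_x V$ can be identified with the value function of an auxiliary optimal stopping problem whose stopping region is precisely the action region. The paper carries a version of this picture over to the mean-field setting.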
We study a class of linear-quadratic stochastic differential games in which each player interacts directly only with its nearest neighbors in a given graph. We find a semi-explicit Markovian equilibrium for any transitive graph, in terms of the empirical eigenvalue distribution of the graph's normalized Laplacian matrix. This facilitates large-population asymptotics for various graph sequences, with several sparse and dense examples discussed in detail. In particular, the mean field game is the correct limit only in the dense graph case, i.e., when the degrees diverge in a suitable sense. Even though equilibrium strategies are nonlocal, depending on the behavior of all players, we use a correlation decay estimate to prove a propagation of chaos result in both the dense and sparse regimes, with the result in the sparse case relying on the large distances between typical vertices. Without assuming the graphs are transitive, we also show that the mean field game solution can be used to construct decentralized approximate equilibria on any sufficiently dense graph sequence.
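To make the spectral object concrete (our illustration, not taken from the paper): for a transitive graph on $N$ vertices with common degree $d$ and adjacency matrix $A$, the normalized Laplacian is $L = I - A/d$, and the equilibrium is expressed through its empirical eigenvalue distribution
\[
\frac{1}{N}\sum_{i=1}^N \delta_{\lambda_i(L)}.
\]
For the complete graph $K_N$ the eigenvalues are $0$ and $N/(N-1)$ (the latter with multiplicity $N-1$), so this distribution converges to $\delta_1$, consistent with the mean field game arising as the dense-graph limit; for the cycle $C_N$ the eigenvalues are $1 - \cos(2\pi k/N)$, $k = 0, \dots, N-1$, whose empirical distribution converges to the law of $1 - \cos(2\pi U)$ with $U$ uniform on $[0,1]$, a genuinely different limit in the sparse regime.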