We study Nash equilibria for a sequence of symmetric $N$-player stochastic games of finite-fuel capacity expansion with singular controls and their mean-field game (MFG) counterpart. We construct a solution of the MFG via a simple iterative scheme that produces an optimal control in terms of a Skorokhod reflection at a (state-dependent) surface that splits the state space into action and inaction regions. We then show that a solution of the MFG of capacity expansion induces approximate Nash equilibria for the $N$-player games, with approximation error $\varepsilon$ going to zero as $N$ tends to infinity. Our analysis relies entirely on probabilistic methods and extends the well-known connection between singular stochastic control and optimal stopping to a mean-field framework.
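The connection invoked above can be sketched in a generic one-dimensional form (a heuristic with placeholder value functions $V$ and $v$, not the paper's exact statement): if $V(x,y)$ denotes the value of a finite-fuel singular control problem with state $x$ and remaining fuel $y$, one expects
\[
\frac{\partial V}{\partial y}(x,y) = v(x,y),
\]
where $v$ is the value of an associated optimal stopping problem, and the free boundary of the stopping problem coincides with the surface separating the action region (where the Skorokhod reflection acts) from the inaction region.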
Forcing finite-state mean field games with a relevant form of common noise is a subtle issue, which has been addressed only recently. Among others, one possible way is to subject the simplex-valued dynamics of an equilibrium to a so-called Wright-Fisher noise, very much in the spirit of stochastic models in population genetics. A key feature is that such a random forcing preserves the structure of the simplex, which, in this setting, is nothing but the space of probability measures over the state space of the game. The purpose of this article is hence to elucidate the finite-player version and, accordingly, to prove that $N$-player equilibria indeed converge towards the solution of such a Wright-Fisher mean field game. Whilst part of the analysis is made easier by the fact that the corresponding master equation has already been proved to be uniquely solvable in the presence of the common noise, it becomes however more subtle than in the standard setting because the mean field interaction between the players now occurs through a weighted empirical measure. In other words, each player carries its own weight, which hence may differ from $1/N$ and which, most of all, evolves with the common noise.
In this paper, we develop a PDE approach to the optimal strategy of a mean field controlled stochastic system. Firstly, we discuss mean field SDEs and the associated Fokker-Planck equations. Secondly, we consider a fully coupled system of forward-backward PDEs: the backward one is the Hamilton-Jacobi-Bellman equation, while the forward one is the Fokker-Planck equation. Our main result is to show the existence of classical solutions of the forward-backward PDEs in the class $H^{1+\frac{1}{4},2+\frac{1}{2}}([0,T]\times\mathbb{R}^n)$ by use of the Schauder fixed point theorem. Then, we use the solution to give the optimal strategy of the mean field stochastic control problem. Finally, we give an example to illustrate the role of our main result.
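A forward-backward system of the kind described above can be written, in a generic form (a sketch with a placeholder Hamiltonian $H$ and terminal cost $g$, not the paper's exact assumptions), as
\begin{align*}
-\partial_t u - \tfrac{1}{2}\Delta u + H\big(x, \nabla u, m\big) &= 0, \qquad u(T,x) = g(x, m_T),\\
\partial_t m - \tfrac{1}{2}\Delta m - \operatorname{div}\big(m\,\nabla_p H(x, \nabla u, m)\big) &= 0, \qquad m(0,\cdot) = \mu_0,
\end{align*}
where the first (backward) equation is the Hamilton-Jacobi-Bellman equation for the value function $u$ and the second (forward) equation is the Fokker-Planck equation for the law $m$ of the state; the coupling arises because each equation depends on the solution of the other.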
Mean-field games with absorption are a class of games introduced in Campi and Fischer [7] that can be viewed as natural limits of symmetric stochastic differential games with a large number of players who, interacting through a mean field, leave the game as soon as their private states hit some given boundary. In this paper, we push the study of such games further, extending their scope along two main directions. First, a direct dependence on past absorptions is introduced in the drift of the players' state dynamics. Second, the boundedness of coefficients and costs is considerably relaxed, allowing drifts and costs with linear growth. Therefore, the mean-field interaction among the players takes place in two ways: via the empirical sub-probability measure of the surviving players and through a process representing the fraction of past absorptions over time. Moreover, relaxing the boundedness of the coefficients allows for more realistic dynamics for the players' private states. We prove existence of solutions of the mean-field game in strict as well as relaxed feedback form. Finally, we show that such solutions induce approximate Nash equilibria for the $N$-player game with vanishing error in the mean-field limit as $N \to \infty$.
A theory of existence and uniqueness is developed for general stochastic differential mean field games with common noise. The concepts of strong and weak solutions are introduced in analogy with the theory of stochastic differential equations, and existence of weak solutions for mean field games is shown to hold under very general assumptions. Examples and counter-examples are provided to illuminate the underpinnings of the existence theory. Finally, an analog of the famous result of Yamada and Watanabe is derived, and it is used to prove existence and uniqueness of a strong solution under additional assumptions.
The purpose of this paper is to provide a complete probabilistic analysis of a large class of stochastic differential games for which the interaction between the players is of mean-field type. We implement the Mean-Field Games strategy developed analytically by Lasry and Lions in a purely probabilistic framework, relying on tailor-made forms of the stochastic maximum principle. While we assume that the state dynamics are affine in the states and the controls, our assumptions on the nature of the costs are rather weak, and surprisingly, the dependence of all the coefficients upon the statistical distribution of the states remains of a rather general nature. Our probabilistic approach calls for the solution of systems of forward-backward stochastic differential equations of McKean-Vlasov type for which no existence result is known, and for which we prove existence and regularity of the corresponding value function. Finally, we prove that solutions of the mean-field game as formulated by Lasry and Lions do indeed provide approximate Nash equilibria for games with a large number of players, and we quantify the nature of the approximation.
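A McKean-Vlasov forward-backward system of the type invoked above can be sketched generically (with placeholder coefficients $b$, $\sigma$, a Hamiltonian $H$, and terminal cost $g$; this is an illustration of the structure, not the paper's exact system) as
\begin{align*}
dX_t &= b\big(t, X_t, \mathcal{L}(X_t), \alpha_t\big)\,dt + \sigma\,dW_t, \qquad X_0 = x_0,\\
dY_t &= -\partial_x H\big(t, X_t, \mathcal{L}(X_t), Y_t, \alpha_t\big)\,dt + Z_t\,dW_t, \qquad Y_T = \partial_x g\big(X_T, \mathcal{L}(X_T)\big),
\end{align*}
where $Y$ is the adjoint process of the stochastic maximum principle, the control $\alpha_t$ is chosen to minimize the Hamiltonian along the solution, and the McKean-Vlasov character comes from the dependence of the coefficients on the law $\mathcal{L}(X_t)$ of the forward state.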