
From the master equation to mean field game limit theory: Large deviations and concentration of measure

Added by Daniel Lacker
Publication date: 2018
Language: English





We study a sequence of symmetric $n$-player stochastic differential games driven by both idiosyncratic and common sources of noise, in which players interact with each other through their empirical distribution. The unique Nash equilibrium empirical measure of the $n$-player game is known to converge, as $n$ goes to infinity, to the unique equilibrium of an associated mean field game. Under suitable regularity conditions, in the absence of common noise, we complement this law of large numbers result with non-asymptotic concentration bounds for the Wasserstein distance between the $n$-player Nash equilibrium empirical measure and the mean field equilibrium. We also show that the sequence of Nash equilibrium empirical measures satisfies a weak large deviation principle, which can be strengthened to a full large deviation principle only in the absence of common noise. For both sets of results, we first use the master equation, an infinite-dimensional partial differential equation that characterizes the value function of the mean field game, to construct an associated McKean-Vlasov interacting $n$-particle system that is exponentially close to the Nash equilibrium dynamics of the $n$-player game for large $n$, by refining estimates obtained in our companion paper. Then we establish a weak large deviation principle for McKean-Vlasov systems in the presence of common noise. In the absence of common noise, we upgrade this to a full large deviation principle and obtain new concentration estimates for McKean-Vlasov systems. Finally, in two specific examples that do not satisfy the assumptions of our main theorems, we show how to adapt our methodology to establish large deviations and concentration results.
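For orientation, a minimal schematic of the setup the abstract refers to, written with illustrative notation ($b$, $\sigma$, $\sigma_0$, $\alpha^i$, $\hat\alpha$ are generic placeholders, not the paper's exact coefficients or assumptions): the state of player $i$ in the $n$-player game evolves as
$$ dX_t^i = b\big(X_t^i, \mu_t^n, \alpha_t^i\big)\,dt + \sigma\, dW_t^i + \sigma_0\, dB_t, \qquad \mu_t^n = \frac{1}{n}\sum_{j=1}^n \delta_{X_t^j}, $$
where the $W^i$ are independent idiosyncratic Brownian motions and $B$ is the common noise, while the associated McKean-Vlasov system mentioned above has the generic conditional form
$$ d\bar X_t^i = b\big(\bar X_t^i, \bar\mu_t, \hat\alpha(t, \bar X_t^i, \bar\mu_t)\big)\,dt + \sigma\, dW_t^i + \sigma_0\, dB_t, \qquad \bar\mu_t = \mathrm{Law}\big(\bar X_t^i \mid (B_s)_{s \le t}\big), $$
with $\hat\alpha$ the equilibrium feedback built from the solution of the master equation; in the absence of common noise one sets $\sigma_0 = 0$ and $\bar\mu_t$ becomes deterministic.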



Related research

Mean field games (MFGs) describe the limit, as $n$ tends to infinity, of stochastic differential games with $n$ players interacting with one another through their common empirical distribution. Under suitable smoothness assumptions that guarantee uniqueness of the MFG equilibrium, a form of law of large numbers (LLN), also known as propagation of chaos, has been established, showing that the MFG equilibrium arises as the limit of the sequence of empirical measures of the $n$-player game Nash equilibria, including the case when player dynamics are driven by both idiosyncratic and common sources of noise. The proof of convergence relies on the so-called master equation for the value function of the MFG, a partial differential equation on the space of probability measures. In this work, under additional assumptions, we establish a functional central limit theorem (CLT) that characterizes the limiting fluctuations around the LLN limit as the unique solution of a linear stochastic PDE. The key idea is to use the solution to the master equation to construct an associated McKean-Vlasov interacting $n$-particle system that is sufficiently close to the Nash equilibrium dynamics of the $n$-player game for large $n$. We then derive the CLT for the latter from the CLT for the former. Along the way, we obtain a new multidimensional CLT for McKean-Vlasov systems. We also illustrate the broader applicability of our methodology by applying it to establish a CLT for a specific linear-quadratic example that does not satisfy our main assumptions, and we explicitly solve the resulting stochastic PDE in this case.
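Schematically, and again with notation chosen here only for illustration, the object of the CLT is the fluctuation field
$$ \eta_t^n := \sqrt{n}\,\big(\mu_t^n - \mu_t\big), $$
where $\mu_t^n$ is the Nash equilibrium empirical measure and $\mu_t$ the mean field (LLN) limit; the theorem identifies the distributional limit of $\eta^n$ with the unique solution of a linear stochastic PDE, which one can think of generically as
$$ d\eta_t = \mathcal{A}_t^{*}\,\eta_t\,dt + dM_t, $$
with $\mathcal{A}_t^{*}$ a linearization of the limiting dynamics along $\mu_t$ and $M$ a Gaussian noise term; the precise operator and noise in the paper are determined by the model coefficients and the master equation.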
Daniel Lacker (2018)
This paper continues the study of the mean field game (MFG) convergence problem: In what sense do the Nash equilibria of $n$-player stochastic differential games converge to the mean field game as $n \rightarrow \infty$? Previous work on this problem took two forms. First, when the $n$-player equilibria are open-loop, compactness arguments permit a characterization of all limit points of $n$-player equilibria as weak MFG equilibria, which contain additional randomness compared to the standard (strong) equilibrium concept. On the other hand, when the $n$-player equilibria are closed-loop, the convergence to the MFG equilibrium is known only when the MFG equilibrium is unique and the associated master equation is solvable and sufficiently smooth. This paper adapts the compactness arguments to the closed-loop case, proving a convergence theorem that holds even when the MFG equilibrium is non-unique. Every limit point of $n$-player equilibria is shown to be the same kind of weak MFG equilibrium as in the open-loop case. Some partial results and examples are discussed for the converse question, regarding which of the weak MFG equilibria can arise as the limit of $n$-player (approximate) equilibria.
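For readers to whom the distinction matters, the two information structures can be contrasted schematically (illustrative notation): in an open-loop equilibrium, player $i$ chooses a control process adapted to the driving noises and initial conditions,
$$ \alpha_t^i = \phi^i\big(t, X_0^1, \dots, X_0^n, (W_s^1, \dots, W_s^n, B_s)_{s \le t}\big), $$
whereas in a closed-loop (Markovian feedback) equilibrium the control is a function of the current states of all players,
$$ \alpha_t^i = \phi^i\big(t, X_t^1, \dots, X_t^n\big). $$
One standard difficulty, and the reason the open-loop compactness arguments do not transfer directly, is that a feedback control reacts when other players deviate from their equilibrium strategies.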
In this paper we study several aspects of the growth of a supercritical Galton-Watson process $\{Z_n : n \ge 1\}$, and bring out some criticality phenomena determined by the Schröder constant. We develop the local limit theory of $Z_n$, that is, the behavior of $P(Z_n = v_n)$ as $v_n \nearrow \infty$, and use this to study conditional large deviations of $\{Y_{Z_n} : n \ge 1\}$, where $Y_n$ satisfies an LDP, particularly of $\{Z_n^{-1} Z_{n+1} : n \ge 1\}$ conditioned on $Z_n \ge v_n$.
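For context, the Schröder constant mentioned here is commonly defined, assuming offspring mean $m = E[Z_1] > 1$ and $p_1 := P(Z_1 = 1) > 0$ (the Schröder case), by
$$ \alpha := \frac{\log (1/p_1)}{\log m}, \qquad \text{equivalently } \; p_1 = m^{-\alpha}, $$
and it is this constant that separates the different regimes in the local limit behavior of $P(Z_n = v_n)$ and in the conditional large deviation results described above.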
Corrections and acknowledgment for ``Local limit theory and large deviations for supercritical branching processes'' [math.PR/0407059].
Rene Carmona (2014)
We use a simple $N$-player stochastic game with idiosyncratic and common noises to introduce the concept of the Master Equation originally proposed by Lions in his lectures at the Collège de France. By controlling the limit, as $N$ tends to infinity, of the explicit solution of the $N$-player game, we highlight the stochastic nature of the limit distributions of the states of the players, due to the fact that the random environment does not average out in the limit, and we recast the Mean Field Game (MFG) paradigm as a set of coupled Stochastic Partial Differential Equations (SPDEs). The first one is a forward stochastic Kolmogorov equation giving the evolution of the conditional distributions of the states of the players given the common noise. The second is a form of stochastic Hamilton-Jacobi-Bellman (HJB) equation providing the solution of the optimization problem when the flow of conditional distributions is given. Being highly coupled, the system reads as an infinite-dimensional Forward-Backward Stochastic Differential Equation (FBSDE). Uniqueness of a solution and its Markov property lead to the representation of the solution of the backward equation (i.e., the value function of the stochastic HJB equation) as a deterministic function of the solution of the forward Kolmogorov equation, a function usually called the decoupling field of the FBSDE. The (infinite-dimensional) PDE satisfied by this decoupling field is identified with the master equation. We also show that this equation can be derived for other large population equilibria, such as those given by the optimal control of McKean-Vlasov stochastic differential equations. The paper is written more in the style of a review than a technical paper, and we spend more time and energy motivating and explaining the probabilistic interpretation of the Master Equation than identifying the most general set of assumptions under which our claims are true.
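Schematically, and under simplifying assumptions made here only for illustration (one-dimensional states, constant volatilities $\sigma$ for the idiosyncratic noise and $\sigma_0$ for the common noise $B$, and a generic drift $b$ with equilibrium feedback $\hat\alpha$), the forward stochastic Kolmogorov equation for the conditional law $\mu_t$ of a representative state takes the form
$$ d\mu_t = \Big( \tfrac{1}{2}\,(\sigma^2 + \sigma_0^2)\,\partial_{xx}\mu_t - \partial_x\big( b(x, \mu_t, \hat\alpha)\,\mu_t \big) \Big)\,dt \;-\; \sigma_0\,\partial_x \mu_t\, dB_t, $$
while the decoupling-field point of view expresses the solution $v_t(x)$ of the backward stochastic HJB equation as a deterministic function of the current conditional distribution,
$$ v_t(x) = U\big(t, x, \mu_t\big), $$
and the master equation is the (infinite-dimensional) PDE satisfied by $U$ on $[0,T] \times \mathbb{R} \times \mathcal{P}(\mathbb{R})$, involving derivatives of $U$ in the measure argument.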
