In this paper, we show the existence and uniqueness of solutions to infinite horizon McKean-Vlasov FBSDEs using two different methods, which lead to two different sets of assumptions. We use these results to solve infinite horizon mean field type control problems and mean field games.
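For orientation, such a system can be written in the following schematic form, with a discount rate \beta > 0 playing the role of the terminal condition of the finite horizon case (this display is a generic template, not the paper's exact set of coefficients and assumptions):

\[
\begin{aligned}
dX_t &= b\big(X_t, \mathcal{L}(X_t), Y_t\big)\,dt + \sigma\big(X_t, \mathcal{L}(X_t)\big)\,dW_t, \qquad X_0 = x, \\
dY_t &= \big(\beta Y_t - f(X_t, \mathcal{L}(X_t), Y_t, Z_t)\big)\,dt + Z_t\,dW_t, \qquad t \ge 0,
\end{aligned}
\]

where \mathcal{L}(X_t) denotes the law of X_t and the solution (Y, Z) is required to satisfy a suitable integrability condition at infinity in place of a terminal condition.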
We study the optimal control of path-dependent McKean-Vlasov equations valued in Hilbert spaces, motivated by non-Markovian mean-field models driven by stochastic PDEs. We first establish the well-posedness of the state equation, and then we prove the dynamic programming principle (DPP) in this general framework. The crucial law invariance property of the value function V is rigorously obtained, which means that V can be viewed as a function on the Wasserstein space of probability measures on the set of continuous functions valued in a Hilbert space. We then define a notion of pathwise measure derivative, which extends the Wasserstein derivative due to Lions [41], and prove a related functional Itô formula in the spirit of Dupire [24] and Wu and Zhang [51]. The Master Bellman equation is derived from the DPP by means of a suitable notion of viscosity solution. We provide different formulations and simplifications of this Bellman equation, notably in the special case when there is no dependence on the law of the control.
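As a reminder of the construction being extended: in the finite-dimensional setting, the Wasserstein (Lions) derivative of a function u on \mathcal{P}_2(\mathbb{R}^d) is defined through its lift \tilde u to an L^2 space of random variables,

\[
\tilde u(\xi) := u\big(\mathcal{L}(\xi)\big), \qquad \xi \in L^2(\Omega; \mathbb{R}^d),
\]

and u is differentiable at \mu when \tilde u is Fréchet differentiable at some (hence any) \xi with \mathcal{L}(\xi) = \mu, in which case D\tilde u(\xi) is represented as \partial_\mu u(\mu)(\xi) for a function \partial_\mu u(\mu) : \mathbb{R}^d \to \mathbb{R}^d. The paper's pathwise measure derivative plays the analogous role for measures on the path space of Hilbert-space-valued continuous functions.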
We propose several algorithms to solve McKean-Vlasov forward-backward stochastic differential equations. Our schemes rely on the approximating power of neural networks to estimate the solution or its gradient through minimization problems. As a consequence, we obtain methods able to tackle both mean field games and mean field control problems in moderate dimension. We analyze the numerical behavior of our algorithms on several examples, including non-linear-quadratic models.
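To make the idea concrete, here is a minimal deep-BSDE-style sketch in PyTorch; it is not the scheme from the paper, and the coefficients b, f, g, the network sizes, and all hyperparameters are placeholders. Networks parametrize Y_0 and Z_t, the McKean-Vlasov interaction Law(X_t) is replaced by the empirical mean of the simulated batch, and training minimizes the terminal mismatch |Y_T - g(X_T, .)|^2.

import torch

torch.manual_seed(0)
d, N, T, batch = 1, 25, 1.0, 256   # state dim, time steps, horizon, sample size
dt = T / N
sqdt = dt ** 0.5

# hypothetical model coefficients, for illustration only
b = lambda x, m, y: -x + m          # drift depending on the mean-field proxy m
f = lambda x, m, y, z: y + 0.5 * m  # BSDE driver
g = lambda x, m: (x - m) ** 2       # terminal condition

y0_net = torch.nn.Sequential(torch.nn.Linear(d, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1))
z_net = torch.nn.Sequential(torch.nn.Linear(d + 1, 32), torch.nn.Tanh(), torch.nn.Linear(32, d))
opt = torch.optim.Adam(list(y0_net.parameters()) + list(z_net.parameters()), lr=1e-3)

for step in range(2000):
    x = torch.zeros(batch, d)                    # X_0 = 0 for every sample
    y = y0_net(x)                                # network guess for Y_0
    for i in range(N):
        m = x.mean(dim=0, keepdim=True).expand(batch, d)  # empirical proxy for E[X_t]
        t = torch.full((batch, 1), i * dt)
        z = z_net(torch.cat([x, t], dim=1))      # network guess for Z_t
        dw = sqdt * torch.randn(batch, d)
        y = y - f(x, m, y, z) * dt + (z * dw).sum(dim=1, keepdim=True)
        x = x + b(x, m, y) * dt + dw             # Euler step, unit diffusion
    m_T = x.mean(dim=0, keepdim=True).expand(batch, d)
    loss = ((y - g(x, m_T)) ** 2).mean()         # match Y_T against g(X_T, Law(X_T))
    opt.zero_grad(); loss.backward(); opt.step()

Replacing the law by the batch empirical measure is the standard particle approximation in this context; for a mean field game one would additionally iterate on the flow of measures until a fixed point is reached.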
In this paper, we consider the mean field game with a common noise and allow the state coefficients to vary with the conditional distribution in a nonlinear way. We assume that the cost function satisfies a convexity and a weak monotonicity property. We use the sufficient Pontryagin principle for optimality to transform the mean field control problem into the existence and uniqueness of a solution to a conditional-distribution-dependent forward-backward stochastic differential equation (FBSDE). We prove the existence and uniqueness of the solution of this conditional-distribution-dependent FBSDE when the dependence of the state on the conditional distribution is sufficiently small, or when the convexity parameter of the running cost in the control is sufficiently large. Two different methods are developed. The first method is based on continuation in coefficients, which was developed for FBSDEs by Hu and Peng [YH2]. We apply this method to the conditional-distribution-dependent FBSDE. The second method is to show the existence result on a small time interval by the Banach fixed point theorem and then extend the local solution to the whole time interval.
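Schematically, the FBSDE produced by the stochastic Pontryagin principle in the presence of a common noise W^0 takes the conditional McKean-Vlasov form (a generic template; the paper's specific coefficients and assumptions are omitted):

\[
\begin{aligned}
dX_t &= b\big(t, X_t, \mathcal{L}(X_t \mid \mathcal{F}^{W^0}_t), \alpha_t\big)\,dt + \sigma\,dW_t + \sigma^0\,dW^0_t, \\
dY_t &= -\,\partial_x H\big(t, X_t, \mathcal{L}(X_t \mid \mathcal{F}^{W^0}_t), \alpha_t, Y_t\big)\,dt + Z_t\,dW_t + Z^0_t\,dW^0_t,
\end{aligned}
\]

with terminal condition Y_T = \partial_x g\big(X_T, \mathcal{L}(X_T \mid \mathcal{F}^{W^0}_T)\big), where the coupling enters through the conditional law of the state given the common noise and the control \alpha_t is recovered by minimizing the Hamiltonian H.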
The goal of this paper is to study the long time behavior of solutions of first-order mean field game (MFG) systems with a control on the acceleration. The main difficulty is the lack of small time controllability of the problem, which prevents us from defining the associated ergodic mean field game problem in the standard way. To overcome this issue, we first study the long-time average of optimal control problems with control on the acceleration: we prove that the time average of the value function converges to an ergodic constant and represent this ergodic constant as a minimum of a Lagrangian over a suitable class of closed probability measures. This characterization leads us to define the ergodic MFG problem as a fixed-point problem on the set of closed probability measures. We then show that this ergodic MFG problem has at least one solution, that the associated ergodic constant is unique under the standard monotonicity assumption, and that the time-average of the value function of the time-dependent MFG problem with control of the acceleration converges to this ergodic constant.
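In this setting the state is the pair (x, v) of position and velocity and the control is the acceleration a. The characterization of the ergodic constant then reads, schematically (the precise class of measures and growth conditions are those of the paper):

\[
\lambda = \min_{\mu \in \mathcal{C}} \int_{\mathbb{R}^d \times \mathbb{R}^d \times \mathbb{R}^d} L(x, v, a)\, d\mu(x, v, a),
\]

where \mathcal{C} is a class of closed probability measures, i.e. measures satisfying \int \big(v \cdot D_x \varphi(x, v) + a \cdot D_v \varphi(x, v)\big)\, d\mu = 0 for all smooth test functions \varphi, a condition encoding invariance under the controlled dynamics \dot x = v, \dot v = a.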
Mean field games are concerned with the limit of large-population stochastic differential games where the agents interact through their empirical distribution. In the classical setting, the number of players is large but fixed throughout the game. However, in various applications, such as population dynamics or economic growth, the number of players can vary across time, which may lead to different Nash equilibria. For this reason, we introduce a branching mechanism in the population of agents and obtain a variation on the mean field game problem. As a first step, we study a simple model using a PDE approach to illustrate the main differences with the classical setting. We prove existence of a solution and show that it provides an approximate Nash equilibrium for large population games. We also present a numerical example for a linear-quadratic model. We then study the problem in a general setting by a probabilistic approach, based on the relaxed formulation of stochastic control problems, which allows us to obtain a general existence result.
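Purely as an illustration of how a branching mechanism alters the classical PDE system (this display is our own schematic, not the model from the paper): if each agent branches at a constant rate \gamma > 0, the Fokker-Planck equation of the usual HJB/Fokker-Planck MFG system acquires a source term,

\[
\begin{aligned}
-\partial_t u - \tfrac{1}{2}\Delta u + H(x, Du, m_t) &= 0, \\
\partial_t m - \tfrac{1}{2}\Delta m - \mathrm{div}\big(m\, D_p H(x, Du, m_t)\big) &= \gamma\, m,
\end{aligned}
\]

so that, in contrast with the classical setting, the total mass \int m_t\,dx is no longer conserved over time.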
Erhan Bayraktar, Xin Zhang (2021). "Solvability of Infinite Horizon McKean-Vlasov FBSDEs in Mean Field Control Problems and Games".