
Limit theory for controlled McKean-Vlasov dynamics

Added by Daniel Lacker
Publication date: 2016
Authors: Daniel Lacker
Language: English





This paper rigorously connects the problem of optimal control of McKean-Vlasov dynamics with large systems of interacting controlled state processes. Precisely, the empirical distributions of near-optimal control-state pairs for the $n$-state systems, as $n$ tends to infinity, admit limit points in distribution (if the objective functions are suitably coercive), and every such limit is supported on the set of optimal control-state pairs for the McKean-Vlasov problem. Conversely, any distribution on the set of optimal control-state pairs for the McKean-Vlasov problem can be realized as a limit in this manner. Arguments are based on controlled martingale problems, which lend themselves naturally to existence proofs; along the way it is shown that a large class of McKean-Vlasov control problems admit optimal Markovian controls.
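
To fix notation, here is a minimal sketch of the two problems the abstract compares; the coefficients $b$ and $\sigma$ and the costs $f$ and $g$ are generic placeholders rather than the paper's exact assumptions. The $n$-state system consists of controlled processes interacting through their empirical measure,

$dX^i_t = b(X^i_t, \mu^n_t, \alpha^i_t)\,dt + \sigma(X^i_t, \mu^n_t, \alpha^i_t)\,dW^i_t, \qquad \mu^n_t = \frac{1}{n}\sum_{j=1}^n \delta_{X^j_t},$

with cost $\frac{1}{n}\sum_{i=1}^n \mathbb{E}\big[\int_0^T f(X^i_t, \mu^n_t, \alpha^i_t)\,dt + g(X^i_T, \mu^n_T)\big]$, while in the McKean-Vlasov control problem a single representative process interacts with its own law,

$dX_t = b(X_t, \mathcal{L}(X_t), \alpha_t)\,dt + \sigma(X_t, \mathcal{L}(X_t), \alpha_t)\,dW_t,$

and one minimizes $\mathbb{E}\big[\int_0^T f(X_t, \mathcal{L}(X_t), \alpha_t)\,dt + g(X_T, \mathcal{L}(X_T))\big]$. The limit theory relates the empirical distributions of near-optimizers of the former, as $n \to \infty$, to optimizers of the latter.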




Related research

Rene Carmona 2013
The purpose of this paper is to provide a detailed probabilistic analysis of the optimal control of nonlinear stochastic dynamical systems of the McKean-Vlasov type. Motivated by the recent interest in mean field games, we highlight the connection and the differences between the two sets of problems. We prove a new version of the stochastic maximum principle and give sufficient conditions for the existence of an optimal control. We also provide examples for which our sufficient conditions for the existence of an optimal solution are satisfied. Finally, we show that our solution to the control problem provides approximate equilibria for large stochastic games with mean field interactions.
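
As a rough illustration of the class of problems treated (a generic sketch, not this paper's precise formulation), the controlled dynamics and cost of McKean-Vlasov type read

$dX_t = b(t, X_t, \mathcal{L}(X_t), \alpha_t)\,dt + \sigma(t, X_t, \mathcal{L}(X_t), \alpha_t)\,dW_t,$

$J(\alpha) = \mathbb{E}\Big[\int_0^T f(t, X_t, \mathcal{L}(X_t), \alpha_t)\,dt + g(X_T, \mathcal{L}(X_T))\Big],$

and a stochastic maximum principle in this setting characterizes optimal controls through adjoint processes whose equations pick up additional terms coming from differentiation with respect to the law $\mathcal{L}(X_t)$.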
Huyen Pham 2018
We study zero-sum stochastic differential games in which the state dynamics of the two players is governed by a generalized McKean-Vlasov (or mean-field) stochastic differential equation, where the distribution of both the state and the controls of each player appears in the drift and diffusion coefficients as well as in the running and terminal payoff functions. We prove the dynamic programming principle (DPP) in this general setting, which also covers the control case with a single player, where the DPP is proved for open-loop controls for the first time. We also show that the upper and lower value functions are viscosity solutions of corresponding upper and lower Master Bellman-Isaacs equations. Our results extend the seminal work of Fleming and Souganidis [15] to the McKean-Vlasov setting.
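
For orientation, in the single-player control case covered by the paper the value function lives on a space of probability measures, and (schematically, ignoring the precise admissibility conditions) the dynamic programming principle takes the form

$v(t, \mu) = \inf_{\alpha}\, \mathbb{E}\Big[\int_t^T f\big(s, X_s, \mathcal{L}(X_s, \alpha_s), \alpha_s\big)\,ds + g\big(X_T, \mathcal{L}(X_T)\big)\Big], \qquad \mathcal{L}(X_t) = \mu,$

$v(t, \mu) = \inf_{\alpha} \Big\{ \mathbb{E}\Big[\int_t^{\theta} f\big(s, X_s, \mathcal{L}(X_s, \alpha_s), \alpha_s\big)\,ds\Big] + v\big(\theta, \mathcal{L}(X_\theta)\big) \Big\}, \qquad t \le \theta \le T.$

In the zero-sum game itself, upper and lower value functions are defined analogously via sup-inf and inf-sup over the two players' controls, and the Master Bellman-Isaacs equations are their PDE counterparts on the space of measures.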
We study a class of nonlinear integro-differential equations on the Wasserstein space related to the optimal control of McKean-Vlasov jump-diffusions. We develop an intrinsic notion of viscosity solutions that does not rely on lifting to a Hilbert space, and we prove a comparison theorem for these solutions. We also show that the value function is the unique viscosity solution.
Feng-Yu Wang 2021
By refining a recent result of Xie and Zhang, we prove exponential ergodicity under a weighted variation norm for singular SDEs whose drift contains a locally integrable term and a coercive term. This result is then extended to singular reflecting SDEs as well as singular McKean-Vlasov SDEs with or without reflection. We also present a general result deducing the uniform ergodicity of McKean-Vlasov SDEs from that of classical SDEs. As an application, $L^1$-exponential convergence is derived for a class of non-symmetric singular granular media equations.
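
For context, one standard formulation of exponential ergodicity under a weighted variation norm (with the weight and constants left generic, not necessarily the exact norm used in the paper) is: for a weight function $V \ge 0$ there exist constants $C, \lambda > 0$ such that

$\|P_t^*\nu - \mu_\infty\|_{1+V} \le C\, e^{-\lambda t}\, \|\nu - \mu_\infty\|_{1+V}, \qquad t \ge 0,$

where $\|\eta\|_{1+V} := \sup_{|f| \le 1+V} |\eta(f)|$, $P_t^*$ denotes the (possibly distribution-dependent) semigroup acting on initial distributions $\nu$, and $\mu_\infty$ is the invariant probability measure. The results described above establish estimates of this type under singular drift conditions and in the McKean-Vlasov setting.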
We consider conditional McKean-Vlasov stochastic differential equations (SDEs), such as the ones arising in the large-system limit of mean field games and particle systems with mean field interactions when common noise is present. The conditional time-marginals of the solutions to these SDEs satisfy non-linear stochastic partial differential equations (SPDEs) of the second order, whereas the laws of the conditional time-marginals follow Fokker-Planck equations on the space of probability measures. We prove two superposition principles: The first establishes that any solution of the SPDE can be lifted to a solution of the conditional McKean-Vlasov SDE, and the second guarantees that any solution of the Fokker-Planck equation on the space of probability measures can be lifted to a solution of the SPDE. We use these results to obtain a mimicking theorem which shows that the conditional time-marginals of an Ito process can be emulated by those of a solution to a conditional McKean-Vlasov SDE with Markovian coefficients. This yields, in particular, a tool for converting open-loop controls into Markovian ones in the context of controlled McKean-Vlasov dynamics.
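
As a rough sketch of the objects involved (with generic coefficients $b$, $\sigma$, $\sigma^0$ standing in for the paper's assumptions), a conditional McKean-Vlasov SDE with idiosyncratic noise $W$ and common noise $B$ reads

$dX_t = b(X_t, \mu_t)\,dt + \sigma(X_t, \mu_t)\,dW_t + \sigma^0(X_t, \mu_t)\,dB_t, \qquad \mu_t = \mathcal{L}(X_t \mid \mathcal{F}^B_t),$

and the conditional time-marginals $(\mu_t)$ solve, weakly against smooth test functions $\varphi$, a second-order SPDE of the form

$d\langle \mu_t, \varphi \rangle = \Big\langle \mu_t,\ b\cdot\nabla\varphi + \tfrac{1}{2}\,\mathrm{tr}\big((\sigma\sigma^\top + \sigma^0(\sigma^0)^\top)\nabla^2\varphi\big) \Big\rangle\,dt + \big\langle \mu_t,\ (\nabla\varphi)^\top \sigma^0 \big\rangle\,dB_t.$

The superposition principles go in the reverse direction, lifting solutions of the Fokker-Planck equation on the space of probability measures to solutions of this SPDE, and solutions of the SPDE to solutions of the SDE itself.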