We study a class of nonlinear integro-differential equations on the Wasserstein space related to the optimal control of McKean--Vlasov jump-diffusions. We develop an intrinsic notion of viscosity solution that does not rely on a lifting to a Hilbert space and prove a comparison theorem for these solutions. We also show that the value function is the unique viscosity solution.
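For orientation, a typical equation of this family (written here for the pure diffusion part only, with schematic coefficients $b$, $\sigma$, $f$ of our own choosing and $\partial_\mu V$ denoting the Lions derivative; this is not necessarily the exact equation treated in the paper) takes the form
\begin{align*}
\partial_t V(t,\mu) + \int_{\mathbb{R}^d} \inf_{a\in A}\Big[\, b(x,\mu,a)\cdot \partial_\mu V(t,\mu)(x) + \tfrac{1}{2}\operatorname{tr}\big(\sigma\sigma^{\top}(x,\mu,a)\,\partial_x\partial_\mu V(t,\mu)(x)\big) + f(x,\mu,a) \Big]\,\mu(dx) = 0
\end{align*}
on $[0,T)\times\mathcal{P}_2(\mathbb{R}^d)$, together with a terminal condition at time $T$; in the jump-diffusion case an additional integro-differential term acting on the measure derivative appears inside the bracket.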
We study the optimal control of path-dependent McKean-Vlasov equations valued in Hilbert spaces, motivated by non-Markovian mean-field models driven by stochastic PDEs. We first establish the well-posedness of the state equation, and then we prove the dynamic programming principle (DPP) in this general framework. The crucial law-invariance property of the value function V is rigorously obtained, which means that V can be viewed as a function on the Wasserstein space of probability measures on the set of Hilbert-space-valued continuous functions. We then define a notion of pathwise measure derivative, which extends the Wasserstein derivative due to Lions [41], and prove a related functional Itô formula in the spirit of Dupire [24] and Wu and Zhang [51]. The Master Bellman equation is derived from the DPP by means of a suitable notion of viscosity solution. We provide different formulations and simplifications of this Bellman equation, notably in the special case when there is no dependence on the law of the control.
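For reference, the non-path-dependent chain rule on the Wasserstein space that such a functional Itô formula extends reads, for an Itô process $dX_t = b_t\,dt + \sigma_t\,dW_t$ with marginal laws $\mu_t = \mathcal{L}(X_t)$ and a sufficiently smooth $v$ (with $\partial_\mu$ the Lions derivative; we state the standard formula only as a point of comparison):
\begin{align*}
v(t,\mu_t) = v(0,\mu_0) + \int_0^t \partial_s v(s,\mu_s)\,ds + \int_0^t \mathbb{E}\Big[\partial_\mu v(s,\mu_s)(X_s)\cdot b_s + \tfrac{1}{2}\operatorname{tr}\big(\partial_x\partial_\mu v(s,\mu_s)(X_s)\,\sigma_s\sigma_s^{\top}\big)\Big]\,ds .
\end{align*}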
This paper rigorously connects the problem of optimal control of McKean-Vlasov dynamics with large systems of interacting controlled state processes. More precisely, the empirical distributions of near-optimal control-state pairs for the $n$-state systems admit limit points in distribution as $n$ tends to infinity (provided the objective functions are suitably coercive), and every such limit is supported on the set of optimal control-state pairs for the McKean-Vlasov problem. Conversely, any distribution on the set of optimal control-state pairs for the McKean-Vlasov problem can be realized as a limit in this manner. The arguments are based on controlled martingale problems, which lend themselves naturally to existence proofs; along the way it is shown that a large class of McKean-Vlasov control problems admits optimal Markovian controls.
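Schematically, and with generic coefficients $b$, $\sigma$ and controls $\alpha^i$ in our own notation, the $n$-state systems in question couple the particles through their empirical measure,
\begin{align*}
dX^i_t = b\big(X^i_t,\mu^n_t,\alpha^i_t\big)\,dt + \sigma\big(X^i_t,\mu^n_t,\alpha^i_t\big)\,dB^i_t, \qquad \mu^n_t = \frac{1}{n}\sum_{j=1}^n \delta_{X^j_t}, \qquad i=1,\dots,n,
\end{align*}
while in the McKean-Vlasov limit the empirical measure $\mu^n_t$ is replaced by the law of the single controlled state.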
We consider $\mathbb{R}^d$-valued diffusion processes of type
\begin{align*}
dX_t = b(X_t)\,dt + dB_t.
\end{align*}
Assuming a geometric drift condition, we establish contractions of the transition kernels in Kantorovich ($L^1$ Wasserstein) distances with explicit constants. Our results are in the spirit of Hairer and Mattingly's extension of Harris' theorem. In particular, they do not rely on a small-set condition. Instead, we combine Lyapunov functions with reflection coupling and concave distance functions. We obtain constants that are explicit in parameters which can be computed with little effort from one-sided Lipschitz conditions for the drift coefficient and the growth of a chosen Lyapunov function. Consequences include exponential convergence in weighted total variation norms, gradient bounds, bounds for ergodic averages, and Kantorovich contractions for nonlinear McKean-Vlasov diffusions in the case of sufficiently weak but not necessarily bounded nonlinearities. We also establish quantitative bounds for sub-geometric ergodicity assuming a sub-geometric drift condition.
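As a point of reference, reflection coupling runs a second copy of the diffusion driven by the Brownian increment mirrored across the line joining the two states (a standard construction, stated here in our notation):
\begin{align*}
dX_t = b(X_t)\,dt + dB_t, \qquad dY_t = b(Y_t)\,dt + \big(I - 2\,e_t e_t^{\top}\big)\,dB_t, \qquad e_t = \frac{X_t - Y_t}{|X_t - Y_t|},
\end{align*}
for $t$ up to the coupling time $T = \inf\{t \ge 0 : X_t = Y_t\}$, after which the two processes are set equal.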
In this paper, we show existence and uniqueness of solutions of infinite horizon McKean-Vlasov FBSDEs using two different methods, which lead to two different sets of assumptions. We use these results to solve infinite horizon mean-field-type control problems and mean-field games.
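A schematic form of such a system (with generic coefficients $b$, $\sigma$, $f$ of our own choosing, and with the usual terminal condition replaced by an integrability or growth requirement at infinity) is
\begin{align*}
dX_t &= b\big(X_t, \mathcal{L}(X_t,Y_t), Y_t, Z_t\big)\,dt + \sigma\big(X_t, \mathcal{L}(X_t,Y_t), Y_t, Z_t\big)\,dW_t, \qquad X_0 = \xi,\\
dY_t &= -f\big(X_t, \mathcal{L}(X_t,Y_t), Y_t, Z_t\big)\,dt + Z_t\,dW_t, \qquad t \ge 0.
\end{align*}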
Various particle filters have been proposed over the last couple of decades with the common feature that the update step is governed by a type of control law. This feature makes them an attractive alternative to traditional sequential Monte Carlo, which scales poorly with the state dimension due to weight degeneracy. This article proposes a unifying framework that allows one to systematically derive the McKean-Vlasov representations of these filters for the discrete-time and continuous-time observation cases, taking inspiration from the smooth approximation of the data considered in Crisan & Xiong (2010) and Clark & Crisan (2005). We consider three filters that have been proposed in the literature and use this framework to derive Itô representations of their limiting forms as the approximation parameter $\delta \rightarrow 0$. All filters require the solution of a Poisson equation defined on $\mathbb{R}^{d}$, for which existence and uniqueness of solutions can be a non-trivial issue. We additionally establish conditions on the signal-observation system that ensure well-posedness of the weighted Poisson equation arising in one of the filters.
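For instance, in feedback-particle-filter-type algorithms the gain function is typically $K = \nabla\phi$, where $\phi$ solves a Poisson equation weighted by the conditional density $\rho$; a schematic form (our notation, not necessarily the exact equation studied in the article) is
\begin{align*}
\nabla\cdot\big(\rho(x)\,\nabla\phi(x)\big) = -\big(h(x) - \hat h\big)\,\rho(x), \qquad \hat h = \int_{\mathbb{R}^d} h(x)\,\rho(x)\,dx,
\end{align*}
posed on $\mathbb{R}^d$ with the normalization $\int \phi\,\rho\,dx = 0$; well-posedness typically hinges on integrability and Poincaré-type conditions on $\rho$.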