We consider two-player zero-sum differential games (ZSDGs) in which the state process (dynamical system) depends on a random initial condition and on the distribution of the state process, and the objective functional involves the distribution of the state process and a random target variable. Unlike the ZSDGs studied in the existing literature, the ZSDG of this paper introduces a new technical challenge, since the corresponding (lower and upper) value functions are defined on $\mathcal{P}_2$ (the set of probability measures with finite second moments) or on $\mathcal{L}_2$ (the set of random variables with finite second moments), both of which are infinite-dimensional spaces. We show that the (lower and upper) value functions on $\mathcal{P}_2$ and $\mathcal{L}_2$ are equivalent (law invariant) and continuous, and satisfy dynamic programming principles. Using the notion of the derivative of a function of probability measures on $\mathcal{P}_2$ and its lifted version on $\mathcal{L}_2$, we show that the (lower and upper) value functions are the unique viscosity solutions of the associated (lower and upper) Hamilton-Jacobi-Isaacs equations, which are (infinite-dimensional) first-order PDEs on $\mathcal{P}_2$ and $\mathcal{L}_2$; uniqueness is obtained via a comparison principle. Under the Isaacs condition, we show that the ZSDG has a value.
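The law invariance mentioned above rests on the standard Lions lifting between $\mathcal{P}_2$ and $\mathcal{L}_2$. A minimal sketch of the two identities involved (the notation here is ours, not necessarily the paper's):

```latex
% Law invariance: the lifted value function V on \mathcal{L}_2 depends on
% a random variable \xi only through its law, and thus induces v on \mathcal{P}_2:
V(\xi) \;=\; v(\mathbb{P}_{\xi}), \qquad \xi \in \mathcal{L}_2 .
% Lions derivative: the Frechet derivative of the lift V represents the
% measure derivative \partial_\mu v of v:
D V(\xi) \;=\; \partial_\mu v(\mathbb{P}_{\xi})(\xi) \qquad \text{in } \mathcal{L}_2 .
```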
The paper studies the open-loop saddle point and the open-loop lower and upper values, as well as their relationship, for two-person zero-sum stochastic linear-quadratic (LQ, for short) differential games with deterministic coefficients. It derives a necessary condition for the finiteness of the open-loop lower and upper values and a sufficient condition for the existence of an open-loop saddle point. It turns out that under the sufficient condition, a strongly regular solution to the associated Riccati equation exists uniquely, in terms of which a closed-loop representation is further established for the open-loop saddle point. Examples are presented to show that the finiteness of the open-loop lower and upper values does not ensure the existence of an open-loop saddle point in general. For the classical deterministic LQ game, however, these two issues are equivalent and both imply the solvability of the Riccati equation, for which an explicit representation of the solution is obtained.
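As a concrete (and entirely hypothetical) illustration of the game Riccati equation behind such a closed-loop representation, the sketch below integrates the scalar Riccati ODE of a toy deterministic LQ game backward in time. All numbers (`a`, `b1`, `b2`, `q`, `r1`, `r2`, `g`) are our own choices, not the paper's; `r2` is taken large enough that the quadratic coefficient `b1**2/r1 - b2**2/r2` is positive, so the solution stays bounded on the horizon:

```python
import numpy as np

# Toy scalar zero-sum LQ game (illustrative numbers only):
#   dynamics  x' = a x + b1 u + b2 v
#   cost      J  = \int_0^T (q x^2 + r1 u^2 - r2 v^2) dt + g x(T)^2,
# where u minimizes and v maximizes.  The game Riccati ODE reads
#   -P'(t) = 2 a P + q - (b1^2/r1 - b2^2/r2) P^2,   P(T) = g.
a, b1, b2 = 0.0, 1.0, 1.0
q, r1, r2, g = 1.0, 1.0, 4.0, 0.0
T, N = 1.0, 10_000
dt = T / N

P = g
for _ in range(N):                 # explicit Euler, backward from t = T to t = 0
    dP = -(2 * a * P + q - (b1**2 / r1 - b2**2 / r2) * P**2)   # dP = P'(t)
    P -= dt * dP                   # step backward in time
P0 = P                             # P(0)

# Closed-loop (saddle-point) feedback gains at t = 0:
K_u = -b1 / r1 * P0                # u*(t) = K_u x(t)
K_v = b2 / r2 * P0                 # v*(t) = K_v x(t)
```

For these numbers the ODE reduces to `dP/ds = 1 - 0.75 P**2` in reverse time `s = T - t`, whose exact solution `P(s) = (2/sqrt(3)) * tanh(s * sqrt(3)/2)` gives `P0` close to 0.808, which the Euler scheme reproduces.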
In this paper, we consider a distributed learning problem in a subnetwork zero-sum game, where agents are competing in different subnetworks. These agents are connected through time-varying graphs where each agent has its own cost function and can receive information from its neighbors. We propose a distributed mirror descent algorithm for computing a Nash equilibrium and establish a sublinear regret bound on the sequence of iterates when the graphs are uniformly strongly connected and the cost functions are convex-concave. Moreover, we prove its convergence with suitably selected diminishing stepsizes for a strictly convex-concave cost function. We also consider a constant step-size variant of the algorithm and establish an asymptotic error bound between the cost function values of running average actions and a Nash equilibrium. In addition, we apply the algorithm to compute a mixed-strategy Nash equilibrium in subnetwork zero-sum finite-strategy games, which have merely convex-concave (to be specific, multilinear) cost functions, and obtain a final-iteration convergence result and an ergodic convergence result, respectively, under different assumptions.
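The mirror descent idea for merely convex-concave (multilinear) costs can be sketched in the simplest non-distributed setting: a two-player zero-sum matrix game, where the entropic mirror map yields multiplicative-weights updates and the running averages approach a mixed-strategy Nash equilibrium. The payoff matrix, step size, and horizon below are illustrative assumptions, not the paper's distributed algorithm:

```python
import numpy as np

def mirror_descent_matrix_game(A, T=2000, eta=0.05):
    """Entropic mirror descent on the bilinear payoff x^T A y.

    The row player minimizes, the column player maximizes; both start
    from the uniform mixed strategy.  Returns the running averages,
    which converge ergodically to a Nash equilibrium.
    """
    m, n = A.shape
    x = np.ones(m) / m
    y = np.ones(n) / n
    x_avg = np.zeros(m)
    y_avg = np.zeros(n)
    for _ in range(T):
        gx = A @ y            # gradient in x of x^T A y
        gy = A.T @ x          # gradient in y of x^T A y
        x = x * np.exp(-eta * gx); x /= x.sum()   # minimizer: descent step
        y = y * np.exp(eta * gy);  y /= y.sum()   # maximizer: ascent step
        x_avg += x
        y_avg += y
    return x_avg / T, y_avg / T

A = np.array([[0., 1., -1.],
              [-1., 0., 1.],
              [1., -1., 0.]])        # rock-paper-scissors (value 0)
x_bar, y_bar = mirror_descent_matrix_game(A)

# Duality gap of the averaged strategies (always >= 0, -> 0 at equilibrium):
gap = (A.T @ x_bar).max() - (A @ y_bar).min()
```

For this matrix the (unique) equilibrium is uniform play, and the gap of the running averages shrinks at the usual sublinear regret rate.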
In this paper we consider nonzero-sum games where multiple players control the drift of a process and their payoffs depend on its ergodic behaviour. We establish their connection with systems of ergodic BSDEs, and prove the existence of a Nash equilibrium under the generalised Isaacs conditions. We also study the case of interacting players of different types.
We study zero-sum stochastic differential games where the state dynamics of the two players are governed by a generalized McKean-Vlasov (or mean-field) stochastic differential equation, in which the distributions of both the state and the controls of each player appear in the drift and diffusion coefficients, as well as in the running and terminal payoff functions. We prove the dynamic programming principle (DPP) in this general setting, which also covers the control case with a single player, where the DPP is proved for open-loop controls for the first time. We also show that the upper and lower value functions are viscosity solutions of the corresponding upper and lower Master Bellman-Isaacs equations. Our results extend the seminal work of Fleming and Souganidis [15] to the McKean-Vlasov setting.
In this article, we propose a general framework for the study of differential inclusions in the Wasserstein space of probability measures. Based on earlier geometric insights into the structure of continuity equations, we define solutions of differential inclusions as absolutely continuous curves whose driving velocity fields are measurable selections of a multifunction taking its values in the space of vector fields. In this general setting, we prove three of the founding results of the theory of differential inclusions: Filippov's theorem, the Relaxation theorem, and the compactness of the solution sets. These contributions -- which are based on novel estimates on solutions of continuity equations -- are then applied to derive a new existence result for fully non-linear mean-field optimal control problems with closed-loop controls.