This paper is concerned with a Stackelberg stochastic differential game on a finite horizon under a feedback information pattern. A system of parabolic partial differential equations is obtained at the level of the Hamiltonian to give the verification theorem for the feedback Stackelberg equilibrium. As an example, a linear-quadratic Stackelberg stochastic differential game is investigated. Riccati equations are introduced to express the feedback Stackelberg equilibrium; analytical and numerical solutions to these Riccati equations are discussed in some special cases.
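In linear-quadratic games of this kind, the equilibrium feedback is expressed through a Riccati equation with a terminal condition, which in the scalar case can be integrated numerically backward in time. The sketch below is a minimal illustration, not the paper's system: it assumes a hypothetical scalar terminal-value Riccati ODE $-\dot P = 2aP + q - (b^2/r)P^2$, $P(T) = g$, and solves it with explicit Euler.

```python
# Hypothetical scalar Riccati ODE (illustrative only, not the paper's system):
#   -dP/dt = 2*a*P + q - (b**2 / r) * P**2,   P(T) = g
# Integrated backward in time from t = T to t = 0 with explicit Euler.

def solve_riccati(a, b, q, r, g, T, n_steps=10000):
    """Return an approximation of P(0) for the terminal-value Riccati ODE."""
    dt = T / n_steps
    P = g  # terminal condition P(T) = g
    for _ in range(n_steps):
        dPdt = -(2 * a * P + q - (b ** 2 / r) * P ** 2)
        P -= dt * dPdt  # one Euler step backward in time
    return P
```

With $a = b = 0$ the ODE reduces to $-\dot P = q$, so $P(0) = g + qT$, which the backward Euler scheme reproduces exactly; for nonzero coefficients the solution climbs toward the stabilizing root of the algebraic Riccati equation.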
We study stochastic differential games of jump diffusions, where the players have access to inside information. Our approach is based on anticipative stochastic calculus, white noise, Hida-Malliavin calculus, forward integrals and the Donsker delta functional. We obtain a characterization of Nash equilibria of such games in terms of the corresponding Hamiltonians. This is used to study applications to insider games in finance, specifically optimal insider consumption and optimal insider portfolio under model uncertainty.
This paper studies a stochastic robotic surveillance problem where a mobile robot moves randomly on a graph to capture a potential intruder that strategically attacks a location on the graph. The intruder is assumed to be omniscient: it knows the current location of the mobile agent and can learn the surveillance strategy. The goal of the mobile robot is to design a stochastic strategy so as to maximize the probability of capturing the intruder. We model the strategic interaction between the surveillance robot and the intruder as a Stackelberg game, and we study optimal and suboptimal Markov-chain-based surveillance strategies on star, complete, and line graphs. We first derive a universal upper bound on the capture probability, i.e., a performance limit for the surveillance agent. We show that this upper bound is tight on the complete graph and further provide suboptimality guarantees for a natural design. For the star and line graphs, we first characterize dominant strategies for the surveillance agent and the intruder, and then rigorously establish the optimal strategy for the surveillance agent.
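A common way to formalize such a capture probability, and an assumption of the sketch below rather than the paper's exact model, is this: the robot follows a Markov chain with transition matrix $P$; an omniscient intruder who sees the robot at node $i$ attacks node $j$ and needs $\tau$ time steps to complete the attack; the robot wins if it reaches $j$ within $\tau$ steps. The guaranteed capture probability is then the worst case over all $(i, j)$ pairs, computable by a first-hitting-probability recursion.

```python
# Minimal sketch under the assumed model: hit_k(i, j) is the probability that
# the chain, started at i, first reaches j within k steps. The recursion is
#   hit_1(i, j) = P[i][j]
#   hit_k(i, j) = P[i][j] + sum over l != j of P[i][l] * hit_{k-1}(l, j).
# The omniscient intruder picks the worst (i, j) pair for the robot.

def capture_probability(P, tau):
    n = len(P)
    hit = [row[:] for row in P]  # hitting within 1 step
    for _ in range(tau - 1):
        hit = [[P[i][j] + sum(P[i][l] * hit[l][j] for l in range(n) if l != j)
                for j in range(n)] for i in range(n)]
    return min(hit[i][j] for i in range(n) for j in range(n))
```

For a uniform random walk on the complete graph $K_3$ with attack duration $\tau = 2$, the worst case is an attack on the robot's current node (return probability $1/2$ in two steps), so the guaranteed capture probability is $0.5$; with $\tau = 1$ it is $0$, since the walk never stays in place.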
We study a class of deterministic finite-horizon two-player nonzero-sum differential games where the players are endowed with different kinds of controls. We assume that Player 1 uses piecewise-continuous controls, while Player 2 uses impulse controls. For this class of games, we seek conditions for the existence of feedback Nash equilibrium strategies. More specifically, we provide a verification theorem for identifying such equilibrium strategies, using the Hamilton-Jacobi-Bellman (HJB) equations for Player 1 and the quasi-variational inequalities (QVIs) for Player 2. Further, we show that the equilibrium number of interventions by Player 2 is bounded above. We then specialize the obtained results to a scalar two-player linear-quadratic differential game in which Player 1's objective is to drive the state variable towards a specific target value, and Player 2 has a similar objective with a different target value. We provide, for the first time, an analytical characterization of the feedback Nash equilibrium in a linear-quadratic differential game with impulse control. We illustrate our results with numerical experiments.
Stochastic differential games have been used extensively to model competition among agents in finance, for instance in P2P lending platforms from the fintech industry, in the banking system for systemic risk, and in insurance markets. The recently proposed machine learning algorithm deep fictitious play provides a novel, efficient tool for finding Markovian Nash equilibria of large $N$-player asymmetric stochastic differential games [J. Han and R. Hu, Mathematical and Scientific Machine Learning Conference, pages 221-245, PMLR, 2020]. By incorporating the idea of fictitious play, the algorithm decouples the game into $N$ sub-optimization problems and identifies each player's optimal strategy with the deep backward stochastic differential equation (BSDE) method, in parallel and repeatedly. In this paper, we prove the convergence of deep fictitious play (DFP) to the true Nash equilibrium. We also show that the strategy based on DFP forms an $\epsilon$-Nash equilibrium. We generalize the algorithm by proposing a new approach to decoupling the games, and we present numerical results for large population games showing the empirical convergence of the algorithm beyond the technical assumptions of the theorems.
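The decoupling idea behind fictitious play can be seen in a toy static analogue (this is not the deep BSDE solver of the paper, just an illustration of the iteration structure): each of the $N$ players repeatedly best-responds to the others' strategies from the previous round, so the game splits into $N$ independent sub-problems per round. In the hypothetical quadratic game below, player $i$ minimizes $(x_i - c_i)^2 + \lambda\,(x_i - m_i)^2$, where $m_i$ is the mean of the other players' strategies, giving the closed-form best response $x_i = (c_i + \lambda m_i)/(1 + \lambda)$.

```python
# Toy fictitious-play loop (illustrative analogue, not the paper's algorithm):
# each round, every player best-responds to the *previous* round's strategy
# profile, so the N sub-problems can be solved independently (in parallel).

def fictitious_play(c, lam=1.0, n_rounds=200):
    n = len(c)
    x = [0.0] * n  # initial strategy profile
    for _ in range(n_rounds):
        m = [(sum(x) - x[i]) / (n - 1) for i in range(n)]  # others' mean
        x = [(c[i] + lam * m[i]) / (1 + lam) for i in range(n)]  # best responses
    return x
```

In the symmetric case $c_i \equiv 1$ the iteration converges to the Nash equilibrium $x_i = 1$; in asymmetric cases the limit is a fixed point of the best-response map, i.e., each $x_i$ is a best response to the others, mirroring the convergence statement proved for DFP.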
The paper studies the open-loop saddle point and the open-loop lower and upper values, as well as their relationship, for two-person zero-sum stochastic linear-quadratic (LQ, for short) differential games with deterministic coefficients. It derives a necessary condition for the finiteness of the open-loop lower and upper values and a sufficient condition for the existence of an open-loop saddle point. It turns out that under the sufficient condition, a strongly regular solution to the associated Riccati equation uniquely exists, in terms of which a closed-loop representation of the open-loop saddle point is further established. Examples are presented to show that the finiteness of the open-loop lower and upper values does not, in general, ensure the existence of an open-loop saddle point. For the classical deterministic LQ game, however, these two issues are equivalent and both imply the solvability of the Riccati equation, for which an explicit representation of the solution is obtained.