We study stochastic differential games of jump diffusions, where the players have access to inside information. Our approach is based on anticipative stochastic calculus, white noise, Hida-Malliavin calculus, forward integrals and the Donsker delta functional. We obtain a characterization of Nash equilibria of such games in terms of the corresponding Hamiltonians. This is used to study applications to insider games in finance, specifically optimal insider consumption and optimal insider portfolio under model uncertainty.
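For orientation, the following is a minimal sketch of the type of Hamiltonian characterization referred to above; the symbols ($H_i$ for the Hamiltonian of player $i$, $\mathbb{G}^i=\{\mathcal{G}^i_t\}_{t\ge 0}$ for the enlarged insider filtration of player $i$, and $(\hat u_1,\hat u_2)$ for the candidate equilibrium) are chosen here for illustration and are not taken from the abstract. In results of this maximum-principle type, a Nash equilibrium is characterized by conditional first-order conditions such as
\begin{equation*}
E\big[\nabla_{u_1}H_1\big(t,X(t),\hat u_1(t),\hat u_2(t)\big)\,\big|\,\mathcal{G}^1_t\big]=0,
\qquad
E\big[\nabla_{u_2}H_2\big(t,X(t),\hat u_1(t),\hat u_2(t)\big)\,\big|\,\mathcal{G}^2_t\big]=0,
\end{equation*}
i.e. each player's Hamiltonian is made stationary given that player's own (inside) information flow.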
We investigate a two-player zero-sum differential game with asymmetric information on the payoff and without the Isaacs condition. The dynamics is an ordinary differential equation parametrised by two controls chosen by the players. Each player has private information about the payoff of the game, while the opponent knows only the probability distribution of that information. We show that a suitable definition of random strategies allows us to prove the existence of a value in mixed strategies. Moreover, the value function can be characterised as the unique viscosity solution, in a suitable dual sense, of a Hamilton-Jacobi-Isaacs equation. Notably, we do not assume the Isaacs condition that is usually imposed in differential games.
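For readers unfamiliar with it, the Isaacs condition that is dropped here can be stated, in generic notation not taken from the paper, as the requirement that the lower and upper Hamiltonians of the game coincide,
\begin{equation*}
\sup_{u\in U}\,\inf_{v\in V}\,\big\langle f(x,u,v),p\big\rangle
\;=\;
\inf_{v\in V}\,\sup_{u\in U}\,\big\langle f(x,u,v),p\big\rangle
\qquad\text{for all }(x,p),
\end{equation*}
where $f$ denotes the controlled dynamics and $U,V$ the players' control sets; when this equality fails, a value in pure strategies need not exist, which is what motivates the use of mixed (random) strategies.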
We study a class of deterministic finite-horizon two-player nonzero-sum differential games where the players are endowed with different kinds of controls. We assume that Player 1 uses piecewise-continuous controls, while Player 2 uses impulse controls. For this class of games, we derive conditions for the existence of feedback Nash equilibrium strategies for the players. More specifically, we provide a verification theorem for identifying such equilibrium strategies, using the Hamilton-Jacobi-Bellman (HJB) equations for Player 1 and the quasi-variational inequalities (QVIs) for Player 2. Further, we show that the equilibrium number of interventions by Player 2 is bounded above. We then specialize the obtained results to a scalar two-player linear-quadratic differential game in which Player 1's objective is to drive the state variable towards a specific target value, and Player 2 has a similar objective with a different target value. We provide, for the first time, an analytical characterization of the feedback Nash equilibrium in a linear-quadratic differential game with impulse control. We illustrate our results using numerical experiments.
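Schematically, and with generic notation that is not taken from the paper, the verification system referred to above couples an HJB equation for Player 1 between Player 2's interventions with a quasi-variational inequality for Player 2. Given Player 1's equilibrium feedback $\hat u_1$, Player 2's value function $V_2$ solves, in a cost-minimization convention,
\begin{equation*}
\min\Big\{-\partial_t V_2-\big\langle f(t,x,\hat u_1(t,x)),D V_2\big\rangle-\ell_2(t,x,\hat u_1(t,x)),\;\;\mathcal{M}V_2-V_2\Big\}=0,
\qquad
\mathcal{M}V_2(t,x)=\inf_{\delta}\big\{V_2(t,x+\delta)+c(\delta)\big\},
\end{equation*}
where $f$ is the (deterministic) drift, $\ell_2$ Player 2's running cost and $c$ the intervention cost; Player 2 continues on the region $\{V_2<\mathcal{M}V_2\}$ and intervenes where $V_2=\mathcal{M}V_2$.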
The objective of this paper is to analyze the existence of equilibria for a class of deterministic mean field games of controls. The interaction between players is due to both a congestion term and a price function which depends on the distributions of the optimal strategies. Moreover, final-state and mixed state-control constraints are considered, the dynamics being nonlinear and affine with respect to the control. The existence of equilibria is obtained by Kakutani's theorem, applied to a fixed-point formulation of the problem. Finally, uniqueness results are shown under monotonicity assumptions.
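As a hedged sketch of the fixed-point formulation (the symbols below are illustrative and not taken from the paper): for a candidate time-dependent distribution $\mu=(\mu_t)_{t\in[0,T]}$ of state-control pairs, each representative player solves a constrained control problem whose dynamics is affine in the control,
\begin{equation*}
\dot x(t)=f_0(t,x(t))+f_1(t,x(t))\,u(t),
\end{equation*}
with a cost containing a congestion term and a price term evaluated at $\mu_t$, subject to the final-state and mixed state-control constraints. Writing $\Theta(\mu)$ for the set of distributions of optimal state-control pairs of this problem, an equilibrium is a fixed point $\mu^*\in\Theta(\mu^*)$, and Kakutani's theorem applies once $\Theta$ is shown to be convex- and compact-valued with closed graph.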
In this paper, we study the following nonlinear backward stochastic integral partial differential equation with jumps
\begin{equation*}
\left\{
\begin{split}
-dV(t,x)=&\ \inf_{u\in U}\bigg\{H\Big(t,x,u,DV(t,x),D\Phi(t,x),D^2V(t,x),\int_E\big(\mathcal{I}V(t,e,x,u)+\Psi(t,x+g(t,e,x,u))\big)\,l(t,e)\,\nu(de)\Big)\\
&+\int_{E}\big[\mathcal{I}V(t,e,x,u)-\big(g(t,e,x,u),DV(t,x)\big)\big]\,\nu(de)+\int_{E}\big[\mathcal{I}\Psi(t,e,x,u)\big]\,\nu(de)\bigg\}\,dt\\
&-\Phi(t,x)\,dW(t)-\int_{E}\Psi(t,e,x)\,\tilde\mu(de,dt),\\
V(T,x)=&\ h(x),
\end{split}
\right.
\end{equation*}
where $\tilde\mu$ is a Poisson random martingale measure, $W$ is a Brownian motion, and $\mathcal{I}$ is a non-local operator to be specified later. The function $H$ is a given random mapping, which arises from a corresponding non-Markovian optimal control problem. This equation appears as the stochastic Hamilton-Jacobi-Bellman equation characterizing the value function of the optimal control problem with a recursive utility cost functional. The solution to the equation is a predictable triplet of random fields $(V,\Phi,\Psi)$. We show that, under some regularity assumptions, the value function is a solution to the stochastic HJB equation and, conversely, that a classical solution to this equation coincides with the value function and yields the optimal control. Under some additional assumptions on the coefficients, an existence and uniqueness result in the sense of Sobolev spaces is obtained by recasting the backward stochastic integral partial differential equation with jumps as a backward stochastic evolution equation in Hilbert spaces with Poisson jumps.
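For context, and in a generic formulation whose notation is not the paper's, a recursive utility cost functional means that the cost associated with an admissible control $u$ is defined through a backward stochastic differential equation with jumps,
\begin{equation*}
Y(t)=h\big(x(T)\big)+\int_t^T f\big(s,x(s),u(s),Y(s),Z(s),K(s,\cdot)\big)\,ds-\int_t^T Z(s)\,dW(s)-\int_t^T\!\!\int_E K(s,e)\,\tilde\mu(de,ds),
\end{equation*}
with the cost given by $J(u)=Y(0)$; the value function obtained by minimizing $J$ over admissible controls is the component $V$ of the triplet $(V,\Phi,\Psi)$ above.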