We study a two-player nonzero-sum stochastic differential game where one player controls the state variable via additive impulses while the other player can stop the game at any time. The main goal of this work is to characterize Nash equilibria through a verification theorem, which identifies a new system of quasi-variational inequalities whose solution gives the equilibrium payoffs together with the corresponding strategies. Moreover, we apply the verification theorem to a game with a one-dimensional state variable, evolving as a scaled Brownian motion, with linear payoffs and costs for both players. Two types of Nash equilibrium are fully characterized, i.e. semi-explicit expressions for the equilibrium strategies and the associated payoffs are provided. Both equilibria are of threshold type: in one equilibrium the players' interventions are not simultaneous, while in the other the first player induces her competitor to stop the game. Finally, we provide some numerical results describing the qualitative properties of both types of equilibrium.
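A threshold-type impulse/stopping interaction of this flavour can be simulated in a few lines. The sketch below is purely illustrative and not the paper's model: the thresholds, impulse size, and intervention cost are hypothetical parameters, the state is a scaled Brownian motion, the impulse player intervenes at a lower threshold, and the stopper ends the game at an upper threshold.

```python
import numpy as np

# Illustrative threshold-type impulse/stopping simulation (hypothetical
# parameters, not the paper's equilibrium): player 1 applies an additive
# impulse whenever the state falls to a lower threshold; player 2 stops
# the game when the state reaches an upper threshold.
rng = np.random.default_rng(0)

sigma = 1.0                 # scale of the Brownian motion (assumed)
dt = 1e-3                   # Euler time step
s_low, s_high = -1.0, 2.0   # hypothetical intervention / stopping thresholds
impulse = 1.5               # hypothetical impulse size for player 1
cost_per_impulse = 0.3      # hypothetical fixed cost per intervention

x, t, n_impulses = 0.0, 0.0, 0
while x < s_high and t < 50.0:
    x += sigma * np.sqrt(dt) * rng.standard_normal()  # Brownian increment
    if x <= s_low:          # player 1 intervenes at the lower threshold
        x += impulse
        n_impulses += 1
    t += dt

stopped = x >= s_high       # player 2 stops at the upper threshold
# A linear payoff net of linear intervention costs, as in the abstract's setting:
payoff_player1 = x - cost_per_impulse * n_impulses
```

Running many such paths for varying thresholds gives the kind of qualitative picture the numerical results in the paper describe.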
In 2002, Benjamin Jourdain and Claude Martini discovered that, for a class of payoff functions, the pricing problem for American options can be reduced to the pricing of European options with an appropriately associated payoff, all within a Black-Scholes framework. This discovery was investigated in great detail by Soren Christensen, Jan Kallsen and Matthias Lenga in a recent work from 2020. In the present work we prove that this phenomenon can be observed in a wider context, and even holds true in a setup of non-linear stochastic processes. We analyse this problem from both probabilistic and analytic viewpoints. In the classical situation, Jourdain and Martini used this method to approximate prices of American put options. The broader applicability now potentially covers non-linear frameworks such as model uncertainty and controller-and-stopper games.
In this paper we consider nonzero-sum games where multiple players control the drift of a process, and their payoffs depend on its ergodic behaviour. We establish their connection with systems of ergodic BSDEs, and prove the existence of a Nash equilibrium under the generalised Isaacs conditions. We also study the case of interacting players of different types.
We study zero-sum stochastic differential games where the state dynamics of the two players are governed by a generalized McKean-Vlasov (or mean-field) stochastic differential equation, in which the distribution of both the state and the controls of each player appears in the drift and diffusion coefficients, as well as in the running and terminal payoff functions. We prove the dynamic programming principle (DPP) in this general setting, which also includes the control case with only one player, for which this is the first time the DPP is proved for open-loop controls. We also show that the upper and lower value functions are viscosity solutions to the corresponding upper and lower Master Bellman-Isaacs equations. Our results extend the seminal work of Fleming and Souganidis [15] to the McKean-Vlasov setting.
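A McKean-Vlasov dynamic of the kind described above can be illustrated by a standard interacting-particle approximation, in which the law of the state is replaced by the empirical measure of many simulated copies. The sketch below is a hedged toy example, not the paper's game: the mean-reverting drift, the coefficients, and the particle count are all assumed for illustration, and only the mean of the law enters the coefficients.

```python
import numpy as np

# Hedged illustration (assumed toy dynamics, not the paper's setting):
# particle approximation of the scalar McKean-Vlasov SDE
#   dX_t = (m_t - X_t) dt + sigma dW_t,   m_t = E[X_t],
# where the law of the state enters the drift through its mean.
rng = np.random.default_rng(1)

n_particles = 2000   # size of the interacting-particle system
sigma = 0.5          # diffusion coefficient (assumed)
dt = 1e-2            # Euler time step
n_steps = 200

x = rng.standard_normal(n_particles)          # initial particle cloud
for _ in range(n_steps):
    m = x.mean()                              # empirical mean approximates E[X_t]
    x = x + (m - x) * dt \
          + sigma * np.sqrt(dt) * rng.standard_normal(n_particles)
```

As the number of particles grows, the empirical measure converges to the law of the limiting McKean-Vlasov equation (propagation of chaos), which is what makes such simulations a useful numerical companion to mean-field control and game problems.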
In this Note, assuming that the generator is uniformly Lipschitz in the unknown variables, we relate the solution of a one-dimensional backward stochastic differential equation to the value process of a stochastic differential game. Under a domination condition, a filtration-consistent evaluation is also related to a stochastic differential game. This relation stems from a min-max representation of uniformly Lipschitz functions in terms of affine functions. The extension to reflected backward stochastic differential equations is also included.
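To illustrate the kind of min-max representation involved (stated here in a simple scalar form, not the Note's exact result): any function $f$ that is $K$-Lipschitz on $\mathbb{R}$ can be written as an inf-sup of affine functions of its argument,

```latex
f(x) \;=\; \inf_{y \in \mathbb{R}} \; \sup_{|a| \le K} \bigl[\, f(y) + a\,(x - y) \,\bigr].
```

Indeed, the inner supremum equals $f(y) + K|x-y|$, which dominates $f(x)$ by the Lipschitz property, and the infimum over $y$ is attained at $y = x$, where it equals $f(x)$. Applying such a representation to the generator is what turns a single BSDE into the value of a sup-inf (game) problem over affine generators.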
In this paper we deal with the problem of existence of a smooth solution of the Hamilton-Jacobi-Bellman-Isaacs (HJBI for short) system of equations associated with nonzero-sum stochastic differential games. We consider the problem in unbounded domains, both in the case of continuous generators and in that of discontinuous ones. In each case we show the existence of a smooth solution of the system. As a consequence, we show that the game has smooth Nash payoffs, which are given by means of the solution of the HJBI system and the stochastic process which governs the dynamics of the controlled system.