
Two-Person Zero-Sum Stochastic Linear-Quadratic Differential Games

Posted by: Jingrui Sun
Publication date: 2020
Language: English
Author: Jingrui Sun





The paper studies the open-loop saddle point and the open-loop lower and upper values, as well as their relationship for two-person zero-sum stochastic linear-quadratic (LQ, for short) differential games with deterministic coefficients. It derives a necessary condition for the finiteness of the open-loop lower and upper values and a sufficient condition for the existence of an open-loop saddle point. It turns out that under the sufficient condition, a strongly regular solution to the associated Riccati equation uniquely exists, in terms of which a closed-loop representation is further established for the open-loop saddle point. Examples are presented to show that the finiteness of the open-loop lower and upper values does not ensure the existence of an open-loop saddle point in general. But for the classical deterministic LQ game, these two issues are equivalent and both imply the solvability of the Riccati equation, for which an explicit representation of the solution is obtained.
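For orientation, the following is a minimal sketch of a standard two-person zero-sum stochastic LQ setting; the coefficient names, the omission of cross terms, and the sign conventions are illustrative and need not match the paper's exact formulation.
\[
\begin{aligned}
dX(s) &= \bigl[A(s)X(s)+B_1(s)u_1(s)+B_2(s)u_2(s)\bigr]\,ds
       +\bigl[C(s)X(s)+D_1(s)u_1(s)+D_2(s)u_2(s)\bigr]\,dW(s),\qquad X(t)=x,\\
J(t,x;u_1,u_2) &= \mathbb{E}\Bigl[\langle GX(T),X(T)\rangle
       +\int_t^T\bigl(\langle QX,X\rangle+\langle R_1u_1,u_1\rangle-\langle R_2u_2,u_2\rangle\bigr)\,ds\Bigr],
\end{aligned}
\]
where player 1 (control $u_1$) minimizes $J$ and player 2 (control $u_2$) maximizes it. The open-loop lower and upper values are
\[
V^-(t,x)=\sup_{u_2}\inf_{u_1}J(t,x;u_1,u_2),\qquad
V^+(t,x)=\inf_{u_1}\sup_{u_2}J(t,x;u_1,u_2),
\]
and a pair $(u_1^*,u_2^*)$ is an open-loop saddle point if
\[
J(t,x;u_1^*,u_2)\;\le\;J(t,x;u_1^*,u_2^*)\;\le\;J(t,x;u_1,u_2^*)
\qquad\text{for all admissible }u_1,\,u_2.
\]
Writing $B=(B_1,B_2)$, $D=(D_1,D_2)$, $R=\operatorname{diag}(R_1,-R_2)$, the associated Riccati equation has the generic form
\[
\dot P+PA+A^\top P+C^\top PC+Q
-\bigl(PB+C^\top PD\bigr)\bigl(R+D^\top PD\bigr)^{-1}\bigl(B^\top P+D^\top PC\bigr)=0,\qquad P(T)=G,
\]
with strong regularity referring, roughly, to uniform positive (resp. negative) definiteness of the block of $R+D^\top PD$ corresponding to the minimizing (resp. maximizing) player.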


Read also

Jun Moon, Tamer Basar (2019)
We consider two-player zero-sum differential games (ZSDGs), where the state process (dynamical system) depends on the random initial condition and the state process's distribution, and the objective functional includes the state process's distribution and the random target variable. Unlike ZSDGs studied in the existing literature, the ZSDG of this paper introduces a new technical challenge, since the corresponding (lower and upper) value functions are defined on $\mathcal{P}_2$ (the set of probability measures with finite second moments) or $\mathcal{L}_2$ (the set of random variables with finite second moments), both of which are infinite-dimensional spaces. We show that the (lower and upper) value functions on $\mathcal{P}_2$ and $\mathcal{L}_2$ are equivalent (law invariant) and continuous, satisfying dynamic programming principles. We use the notion of the derivative of a function of probability measures on $\mathcal{P}_2$ and its lifted version on $\mathcal{L}_2$ to show that the (lower and upper) value functions are unique viscosity solutions to the associated (lower and upper) Hamilton-Jacobi-Isaacs equations, which are (infinite-dimensional) first-order PDEs on $\mathcal{P}_2$ and $\mathcal{L}_2$, where the uniqueness is obtained via the comparison principle. Under the Isaacs condition, we show that the ZSDG has a value.
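For context, the identification between the two formulations above is the standard lifting (generic notation, not necessarily the paper's): a function $V$ on $\mathcal{P}_2$ is identified with its lift $\tilde V$ on $\mathcal{L}_2$ via
\[
\tilde V(\xi)=V\bigl(\mathcal{L}(\xi)\bigr),\qquad \xi\in\mathcal{L}_2,
\]
where $\mathcal{L}(\xi)$ denotes the law of $\xi$. Law invariance means that $\tilde V(\xi)$ depends on $\xi$ only through $\mathcal{L}(\xi)$, and the derivative of $V$ at $\mu=\mathcal{L}(\xi)$ is the map $\partial_\mu V(\mu)(\cdot)$ characterized by
\[
D\tilde V(\xi)=\partial_\mu V\bigl(\mathcal{L}(\xi)\bigr)(\xi)\quad\text{a.s.},
\]
with $D\tilde V$ the Fréchet derivative of the lift in $\mathcal{L}_2$.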
We study linear-quadratic stochastic differential games on directed chains inspired by the directed chain stochastic differential equations introduced by Detering, Fouque, and Ichiba. We solve explicitly for Nash equilibria with a finite number of players and we study more general finite-player games with a mixture of both directed chain interaction and mean field interaction. We investigate and compare the corresponding games in the limit when the number of players tends to infinity. The limit is characterized by Catalan functions, and the dynamics under equilibrium is an infinite-dimensional Gaussian process described by a Catalan Markov chain, with or without the presence of mean field interaction.
The study of linear-quadratic stochastic differential games on directed networks was initiated in Feng, Fouque & Ichiba \cite{fengFouqueIchiba2020linearquadratic}. In that work, the game on a directed chain with finitely or infinitely many players was defined, as well as the game on a deterministic directed tree, and their Nash equilibria were computed. The current work continues the analysis by first developing a random directed chain structure in which the interaction between every two neighbors is random. We solve explicitly for an open-loop Nash equilibrium for the system, and we find that the dynamics under equilibrium is an infinite-dimensional Gaussian process described by a Catalan Markov chain introduced in \cite{fengFouqueIchiba2020linearquadratic}. The discussion of stochastic differential games is then extended to a random two-sided directed chain and a random directed tree structure.
In this paper we consider nonzero-sum games where multiple players control the drift of a process, and their payoffs depend on its ergodic behaviour. We establish their connection with systems of ergodic BSDEs, and prove the existence of a Nash equilibrium under the generalised Isaacs conditions. We also study the case of interacting players of different types.
Huyen Pham (2018)
We study zero-sum stochastic differential games where the state dynamics of the two players are governed by a generalized McKean-Vlasov (or mean-field) stochastic differential equation in which the distribution of both the state and the controls of each player appears in the drift and diffusion coefficients, as well as in the running and terminal payoff functions. We prove the dynamic programming principle (DPP) in this general setting, which also includes the control case with only one player, where it is the first time that the DPP is proved for open-loop controls. We also show that the upper and lower value functions are viscosity solutions of the corresponding upper and lower Master Bellman-Isaacs equations. Our results extend the seminal work of Fleming and Souganidis [15] to the McKean-Vlasov setting.
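In the two mean-field entries above, as in the classical theory, the existence of a value hinges on an Isaacs-type condition. In its classical finite-dimensional form (the McKean-Vlasov setting requires the analogous condition on the corresponding master Hamiltonian), it asks that the lower and upper Hamiltonians coincide,
\[
\inf_{u_1}\sup_{u_2}H(t,x,p,u_1,u_2)\;=\;\sup_{u_2}\inf_{u_1}H(t,x,p,u_1,u_2),
\]
in which case the lower and upper Hamilton-Jacobi-Isaacs equations coincide and, by uniqueness of viscosity solutions, the lower and upper value functions agree, so the game has a value.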