
Discounted semi-Markov games with incomplete information on one side

 Added by Zhong-Wei Liao
 Publication date 2021
Language: English





This work considers two-player zero-sum semi-Markov games with incomplete information on one side and perfect observation. At the beginning of play, the system selects a game type according to a given probability distribution and reveals it to Player 1 only. After each stage, the actions chosen are observed by both players before the game proceeds to the next stage. First, we establish the existence of the value function under the expected discount criterion and derive the optimality equation. Second, we prove the existence of an optimal policy for Player 1 and give an iterative algorithm for it via the optimality equation of the value function. Moreover, for the optimal policy of the uninformed Player 2, we define auxiliary dual games and construct a new optimality equation for the value function of the dual games, which yields the existence of an optimal policy for Player 2 in the dual game. Finally, the existence of an optimal policy for Player 2 in the original game, together with an iterative algorithm, is obtained from the results of the dual game.
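To give a rough sense of the optimality equation and the value iteration it supports: at each state, the right-hand side of the equation is the value of a zero-sum matrix game, which can be solved by linear programming. The sketch below is a minimal complete-information Shapley-style iteration in Python; all data and names are hypothetical, and the paper's model additionally carries Player 1's belief over the hidden game type and the semi-Markov sojourn-time discounting, both omitted here.

```python
# A minimal value-iteration sketch for a two-player zero-sum discounted
# stochastic game (Shapley operator). All data here are hypothetical; the
# paper's semi-Markov model with one-sided information adds a belief-state
# argument and sojourn-time-dependent discounting, omitted for brevity.
import numpy as np
from scipy.optimize import linprog

def matrix_game_value(A):
    """Value of the zero-sum matrix game A (row player maximizes), via LP."""
    m, n = A.shape
    # Variables: x (row player's mixed strategy, m entries) and v (game value).
    # Maximize v  <=>  minimize -v, subject to A^T x >= v, sum(x) = 1, x >= 0.
    c = np.zeros(m + 1); c[-1] = -1.0
    A_ub = np.hstack([-A.T, np.ones((n, 1))])   # v - (A^T x)_j <= 0 for all j
    b_ub = np.zeros(n)
    A_eq = np.hstack([np.ones((1, m)), np.zeros((1, 1))])
    b_eq = np.array([1.0])
    bounds = [(0, None)] * m + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[-1]

def value_iteration(r, P, beta, tol=1e-8):
    """r[s]: payoff matrix at state s; P[s][i][j]: distribution over next states."""
    S = len(r)
    v = np.zeros(S)
    while True:
        v_new = np.array([
            matrix_game_value(r[s] + beta * P[s] @ v) for s in range(S)
        ])
        if np.max(np.abs(v_new - v)) < tol:
            return v_new
        v = v_new

# A toy 2-state game with 2 actions per player (hypothetical numbers).
r = [np.array([[1.0, 0.0], [0.0, 2.0]]), np.array([[0.0, 1.0], [1.0, 0.0]])]
P = [np.full((2, 2, 2), 0.5), np.full((2, 2, 2), 0.5)]
print(value_iteration(r, P, beta=0.9))
```

Because the Shapley operator is a beta-contraction, the iteration converges geometrically to the unique fixed point, mirroring the role the optimality equation plays in the abstract above.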



Related research

We study the optimal use of information in Markov games with incomplete information on one side and two states. We provide a finite-stage algorithm for calculating the limit value as the gap between stages goes to 0, and an optimal strategy for the informed player in the limiting game in continuous time. This limiting strategy induces an ε-optimal strategy for the informed player, provided the gap between stages is small. Our results demonstrate when the informed player should use his information, and how.
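A concrete handle on "when the informed player should use his information" comes from the classical concavification device of Aumann and Maschler: with two states the prior is a point p in [0, 1], and the informed player should reveal information (split his belief) exactly where the concave envelope cav u lies strictly above the one-shot value u. Below is a small sketch of that construction on a grid, with a made-up u; this is the textbook device, not the paper's finite-stage algorithm.

```python
import numpy as np

def concavify(p, u):
    """Upper concave envelope of the points (p[i], u[i]); p sorted ascending.
    Returns cav u evaluated at the grid points p."""
    hull = []  # indices of grid points on the upper hull
    for i in range(len(p)):
        while len(hull) >= 2:
            i0, i1 = hull[-2], hull[-1]
            # Drop i1 if it does not lie strictly above the chord from i0 to i.
            cross = (p[i1]-p[i0])*(u[i]-u[i0]) - (u[i1]-u[i0])*(p[i]-p[i0])
            if cross >= 0:
                hull.pop()
            else:
                break
        hull.append(i)
    # Interpolate linearly between hull vertices.
    return np.interp(p, p[hull], u[hull])

p = np.linspace(0.0, 1.0, 201)
u = np.minimum(p, 1.0 - p) * np.sin(6 * p)   # a hypothetical non-concave u
cav_u = concavify(p, u)
reveal = cav_u > u + 1e-12   # beliefs at which the informed player should split
```

Where `reveal` is False, cav u = u and the informed player gains nothing from using his information; where it is True, he should randomize so as to split the prior between the endpoints of the supporting chord.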
We study a two-player, zero-sum, stochastic game with incomplete information on one side in which the players are allowed to play more and more frequently. The informed player observes the realization of a Markov chain on which the payoffs depend, while the non-informed player only observes his opponent's actions. We show the existence of a limit value as the time span between two consecutive stages vanishes; this value is characterized through an auxiliary optimization problem and as the solution of a Hamilton-Jacobi equation.
This paper deals with control of partially observable discrete-time stochastic systems. It introduces and studies the class of Markov Decision Processes with Incomplete information and with semi-uniform Feller transition probabilities. The important feature of this class of models is that the classic reduction of such a model with incomplete observation to the completely observable Markov Decision Process with belief states preserves semi-uniform Feller continuity of transition probabilities. Under mild assumptions on cost functions, optimal policies exist, optimality equations hold, and value iterations converge to optimal values for this class of models. In particular, for Partially Observable Markov Decision Processes, the results of this paper imply new sufficient conditions, and generalize several known ones, on transition and observation probabilities for the existence of optimal policies, the validity of optimality equations, and the convergence of value iterations.
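The classic reduction this paragraph refers to replaces the unobserved state by a posterior belief updated by Bayes' rule after each action-observation pair, so the belief itself becomes the state of a completely observable MDP. A minimal sketch of that update, with illustrative array shapes and names rather than this paper's notation:

```python
import numpy as np

def belief_update(b, a, o, T, Z):
    """One step of the Bayes filter that turns a POMDP into a belief MDP.
    b: current belief over states; T[a][s][s2]: transition probabilities;
    Z[a][s2][o]: probability of observing o in new state s2 after action a."""
    b_pred = b @ T[a]               # predict the next-state distribution
    b_post = b_pred * Z[a][:, o]    # reweight by the observation likelihood
    return b_post / b_post.sum()    # normalize (assumes P(o | b, a) > 0)
```

The semi-uniform Feller condition studied in the paper is precisely what guarantees that this belief-to-belief transition kernel inherits the continuity needed for optimality equations and convergent value iterations.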
We study discrete-time discounted constrained Markov decision processes (CMDPs) on Borel spaces with unbounded reward functions. In our approach the transition probability functions are weakly or set-wise continuous. The reward functions are upper semicontinuous in state-action pairs or semicontinuous in actions. Our aim is to study models with unbounded reward functions, which are often encountered in applications, e.g., in consumption/investment problems. We provide some general assumptions under which the optimization problems in CMDPs are solvable in the class of stationary randomized policies. Then, we indicate that if the initial distribution and transition probabilities are non-atomic, then using a general purification result of Feinberg and Piunovskiy, stationary optimal policies can be deterministic. Our main results are illustrated by five examples.
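For intuition in the finite case, a discounted CMDP of this kind can be written as a linear program over discounted occupation measures, whose solution yields a stationary randomized optimal policy. The sketch below assumes finite state and action sets, a feasible budget, and hypothetical data; the paper itself works on general Borel spaces with unbounded rewards.

```python
import numpy as np
from scipy.optimize import linprog

def solve_cmdp(P, r, c, d, alpha, beta):
    """Finite discounted CMDP via the occupation-measure LP.
    P[s][a][s2]: transitions; r, c: reward and cost (S x A); d: cost budget;
    alpha: initial state distribution; beta: discount factor in (0, 1)."""
    S, A = r.shape
    # Variables: mu(s, a) >= 0, the expected discounted state-action frequencies.
    # Flow constraints: sum_a mu(s2,a) - beta * sum_{s,a} P[s,a,s2] mu(s,a) = alpha(s2).
    A_eq = np.zeros((S, S * A))
    for s2 in range(S):
        for s in range(S):
            for a in range(A):
                A_eq[s2, s * A + a] = (s == s2) - beta * P[s, a, s2]
    b_eq = alpha
    # Budget constraint: sum_{s,a} c(s,a) mu(s,a) <= d (assumed feasible).
    A_ub = c.reshape(1, -1)
    b_ub = np.array([d])
    res = linprog(-r.reshape(-1), A_ub=A_ub, b_ub=b_ub,
                  A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
    mu = res.x.reshape(S, A)
    # A stationary randomized optimal policy: pi(a|s) proportional to mu(s, a).
    pi = mu / np.maximum(mu.sum(axis=1, keepdims=True), 1e-12)
    return pi, mu
```

The randomization in pi is exactly where the constraint bites; the purification result of Feinberg and Piunovskiy mentioned above says that with non-atomic initial and transition distributions this randomization can be removed.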
Mean field games are concerned with the limit of large-population stochastic differential games where the agents interact through their empirical distribution. In the classical setting, the number of players is large but fixed throughout the game. However, in various applications, such as population dynamics or economic growth, the number of players can vary across time, which may lead to different Nash equilibria. For this reason, we introduce a branching mechanism in the population of agents and obtain a variation on the mean field game problem. As a first step, we study a simple model using a PDE approach to illustrate the main differences with the classical setting. We prove existence of a solution and show that it provides an approximate Nash equilibrium for large population games. We also present a numerical example for a linear-quadratic model. Then we study the problem in a general setting by a probabilistic approach. It is based upon the relaxed formulation of stochastic control problems, which allows us to obtain a general existence result.