A Dynkin game is a zero-sum stochastic stopping game between two players in which either player can stop the game at any time for an observable payoff. Typically, the payoff process of the max-player is assumed to be smaller than the payoff process of the min-player, while the payoff process for simultaneous stopping lies in between the two. In this paper, we study general Dynkin games whose payoff processes need not satisfy any ordering assumption. In both discrete- and continuous-time settings, we provide necessary and sufficient conditions for the existence of pure-strategy Nash equilibria and $\epsilon$-optimal stopping times in all possible subgames.
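For orientation, in the classical formulation (the notation here is illustrative and not taken from the paper), the max-player chooses a stopping time $\tau$, the min-player a stopping time $\sigma$, and the expected payoff is
\[
R(\tau,\sigma) = \mathbb{E}\big[ L_\tau \mathbf{1}_{\{\tau < \sigma\}} + U_\sigma \mathbf{1}_{\{\sigma < \tau\}} + M_\tau \mathbf{1}_{\{\tau = \sigma\}} \big],
\]
where the standard ordering assumption reads $L \le M \le U$; the paper dispenses with this ordering.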
Mean-field games with absorption are a class of games, introduced in Campi and Fischer [7], that can be viewed as natural limits of symmetric stochastic differential games with a large number of players who interact through a mean field and leave the game as soon as their private states hit a given boundary. In this paper, we push the study of such games further, extending their scope along two main directions. First, a direct dependence on past absorptions is introduced in the drift of the players' state dynamics. Second, the boundedness of coefficients and costs is considerably relaxed, allowing for drifts and costs with linear growth. The mean-field interaction among the players therefore takes place in two ways: via the empirical sub-probability measure of the surviving players and through a process representing the fraction of past absorptions over time. Moreover, relaxing the boundedness of the coefficients allows for more realistic dynamics of the players' private states. We prove existence of solutions of the mean-field game in strict as well as relaxed feedback form. Finally, we show that such solutions induce approximate Nash equilibria for the $N$-player game, with an error vanishing in the mean-field limit as $N \to \infty$.
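As a schematic illustration (the notation is chosen here, not taken from the paper), the two channels of interaction in the $N$-player game can be written as
\[
\mu^N_t = \frac{1}{N}\sum_{i=1}^N \delta_{X^i_t}\,\mathbf{1}_{\{t < \tau_i\}}, \qquad \rho^N_t = \frac{1}{N}\sum_{i=1}^N \mathbf{1}_{\{\tau_i \le t\}},
\]
where $\tau_i$ denotes the absorption time of player $i$: $\mu^N_t$ is the empirical sub-probability measure of the surviving players and $\rho^N_t$ the fraction of past absorptions.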
Mean-payoff games on timed automata are played on the infinite weighted graph of configurations of priced timed automata between two players, Player Min and Player Max, who move a token along the states of the graph to form an infinite run. The goal of Player Min is to minimize the limit average weight of the run, while the goal of Player Max is the opposite. Brenguier, Cassez, and Raskin recently studied a variation of these games and showed that mean-payoff games are undecidable for timed automata with five or more clocks. We refine this result by proving the undecidability of mean-payoff games with three clocks. On the positive side, we show the decidability of mean-payoff games on one-clock timed automata with binary price-rates. A key contribution of this paper is the application of dynamic-programming-based proof techniques in the context of average-reward optimization on an uncountable state and action space.
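For reference, and in generic notation not taken from the paper, a mean-payoff objective for a run with successive transition weights $w_1, w_2, \dots$ is
\[
\liminf_{n \to \infty} \frac{1}{n} \sum_{k=1}^{n} w_k,
\]
which Player Min seeks to minimize and Player Max to maximize; conventions differ on $\liminf$ versus $\limsup$ and on whether one averages per transition or per unit of elapsed time. In a priced timed automaton, delaying for a duration $d$ in a location with price-rate $r$ contributes a weight of $r \cdot d$.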
In this paper, we consider the optimal stopping problem on semi-Markov processes (SMPs) with finite horizon, and aim to establish the existence and computation of optimal stopping times. To this end, we first extend the main results on finite-horizon semi-Markov decision processes (SMDPs) to the case with additional terminal costs, introduce an explicit construction of SMDPs, and prove the equivalence between the optimal stopping problems on SMPs and on SMDPs. Then, using this equivalence and the results on SMDPs developed here, we not only show the existence of an optimal stopping time for SMPs, but also provide an algorithm for computing it. Moreover, we show that the optimal and $\epsilon$-optimal stopping times can be characterized as hitting times of certain special sets, respectively.
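As a generic illustration of this type of characterization (this is the standard discrete-stage recursion, not the paper's SMDP construction), a finite-horizon optimal stopping problem with stopping cost $g$ and one-step running cost $c$ satisfies
\[
V_N(x) = g(x), \qquad V_n(x) = \min\Big\{ g(x),\; c(x) + \mathbb{E}\big[ V_{n+1}(X_{n+1}) \mid X_n = x \big] \Big\},
\]
and an optimal stopping time is the first entrance time into the stopping sets $S_n = \{x : V_n(x) = g(x)\}$, which is the hitting-time characterization alluded to above.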
We revisit the classical singular control problem of minimizing running and controlling costs. The problem arises in inventory control, as well as in healthcare management and mathematical finance. Existing studies have shown the optimality of a barrier strategy when the underlying process is a Brownian motion or a Lévy process with one-sided jumps. Under the assumption that the running cost function is convex, we show the optimality of a barrier strategy for a general class of Lévy processes. Numerical results are also given.
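In schematic form (the notation and cost structure here are illustrative assumptions, not the paper's exact formulation), one chooses a nondecreasing control $U$ and minimizes an expected discounted cost of the type
\[
\mathbb{E}\Big[ \int_0^\infty e^{-qt} f\big(X_t - U_t\big)\,dt + C \int_0^\infty e^{-qt}\,dU_t \Big],
\]
with $f$ a convex running cost and $C$ a unit controlling cost; a barrier strategy at level $b$ exerts the minimal amount of control needed to keep the controlled process at or below $b$.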
Forcing finite-state mean field games by a relevant form of common noise is a subtle issue, which has been addressed only recently. One possible way, among others, is to subject the simplex-valued dynamics of an equilibrium to a so-called Wright-Fisher noise, very much in the spirit of stochastic models in population genetics. A key feature is that such a random forcing preserves the structure of the simplex, which is nothing but, in this setting, the space of probability measures over the state space of the game. The purpose of this article is to elucidate the finite-player version and, accordingly, to prove that $N$-player equilibria indeed converge towards the solution of such a Wright-Fisher mean field game. While part of the analysis is made easier by the fact that the corresponding master equation has already been proved to be uniquely solvable in the presence of the common noise, it becomes more subtle than in the standard setting because the mean-field interaction between the players now occurs through a weighted empirical measure. In other words, each player carries its own weight, which may hence differ from $1/N$ and which, most of all, evolves with the common noise.
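In schematic notation (not taken from the paper), the weighted empirical measure through which the players interact takes the form
\[
\mu^N_t = \sum_{i=1}^N w^i_t\, \delta_{X^i_t}, \qquad w^i_t \ge 0, \quad \sum_{i=1}^N w^i_t = 1,
\]
where the weights $w^i_t$ evolve with the common noise; the standard setting is recovered when $w^i_t \equiv 1/N$.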