
Simple Stochastic Games with Almost-Sure Energy-Parity Objectives are in NP and coNP

Added by Patrick Totzke
Publication date: 2021
Language: English





We study stochastic games with energy-parity objectives, which combine quantitative rewards with a qualitative $\omega$-regular condition: the maximizer aims to avoid running out of energy while simultaneously satisfying a parity condition. We show that the corresponding almost-sure problem, i.e., checking whether there exists a maximizer strategy that achieves the energy-parity objective with probability $1$ when starting at a given energy level $k$, is decidable and in NP $\cap$ coNP. The same holds for checking whether such a $k$ exists and whether a given $k$ is minimal.
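To fix notation (this is the standard formulation of the objective; the precise symbols here are ours, not quoted from the paper): a play traverses states carrying priorities and edges carrying integer rewards $r_1 r_2 r_3 \ldots$; it satisfies the energy condition for initial credit $k$ if $k + \sum_{i=1}^{n} r_i \geq 0$ for every prefix length $n$, and the parity condition if the least priority seen infinitely often is even. The energy-parity objective is the conjunction of the two, and the almost-sure problem asks whether the maximizer has a strategy under which this conjunction holds with probability $1$.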



Related research

We study turn-based stochastic zero-sum games with lexicographic preferences over reachability and safety objectives. Stochastic games are standard models in control, verification, and synthesis of stochastic reactive systems that exhibit both randomness as well as angelic and demonic non-determinism. The lexicographic order allows considering multiple objectives with a strict preference order over their satisfaction. To the best of our knowledge, stochastic games with lexicographic objectives have not been studied before. We establish determinacy of such games and present strategy and computational complexity results. For strategy complexity, we show that lexicographically optimal strategies exist that are deterministic and need memory only to remember the set of already satisfied and violated objectives. For a constant number of objectives, we show that the relevant decision problem is in NP $\cap$ coNP, matching the currently known bound for single objectives; in general the decision problem is PSPACE-hard and can be solved in NEXPTIME $\cap$ coNEXPTIME. We present an algorithm that computes the lexicographically optimal strategies via a reduction to the computation of optimal strategies in a sequence of single-objective games. We have implemented our algorithm and report experimental results on various case studies.
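The reduction mentioned in the last sentences can be sketched roughly as follows; solve_single_objective and restrict_to_optimal_actions are hypothetical placeholders for a standard single-objective solver and for the pruning of sub-optimal maximizer actions, not functions defined in the paper.

def lexicographic_values(game, objectives, solve_single_objective, restrict_to_optimal_actions):
    # Solve the objectives in decreasing order of importance.  After each round,
    # keep only those maximizer actions that attain the optimal value, so the
    # next objective is optimised only among strategies that are already
    # lexicographically optimal for the more important ones.
    values = []
    restricted = game
    for objective in objectives:
        vals = solve_single_objective(restricted, objective)   # value of each state
        values.append(vals)
        restricted = restrict_to_optimal_actions(restricted, objective, vals)
    return values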
We generalise the hyperplane separation technique (Chatterjee and Velner, 2013) from multi-dimensional mean-payoff to energy games, and obtain an algorithm for solving the latter whose running time is exponential only in the dimension, but not in the number of vertices of the game graph. This answers the open question of whether energy games with arbitrary initial credit can be solved in pseudo-polynomial time for fixed dimensions 3 or larger (Chaloupka, 2013). It also improves the complexity of solving multi-dimensional energy games with given initial credit from non-elementary (Brázdil, Jančar, and Kučera, 2010) to 2EXPTIME, thus establishing their 2EXPTIME-completeness.
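For context (standard definitions; the notation is ours): in a $d$-dimensional energy game the edges carry reward vectors $r_i \in \mathbb{Z}^d$, and a play with initial credit vector $c \in \mathbb{N}^d$ is winning for the maximizer if $c + \sum_{i=1}^{n} r_i \geq 0$ holds component-wise for every prefix length $n$. The arbitrary-initial-credit problem asks whether some such $c$ exists for a given starting vertex; the given-initial-credit problem fixes $c$ as part of the input.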
Hugo Gimbert, 2009
Simple stochastic games are two-player zero-sum stochastic games with turn-based moves, perfect information, and reachability winning conditions. We present two new algorithms computing the values of simple stochastic games. Both of them rely on the existence of optimal permutation strategies, a class of positional strategies derived from permutations of the random vertices. The permutation-enumeration algorithm performs an exhaustive search among these strategies, while the permutation-improvement algorithm is based on successive improvements, à la Hoffman-Karp. Our algorithms improve previously known algorithms in several aspects. First, they run in polynomial time when the number of random vertices is fixed, so the problem of solving simple stochastic games is fixed-parameter tractable when the parameter is the number of random vertices. Furthermore, our algorithms do not require the input game to be transformed into a stopping game. Finally, the permutation-enumeration algorithm does not use linear programming, while the permutation-improvement algorithm may run in polynomial time.
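A rough sketch of the permutation-enumeration idea, where value_of_permutation is a hypothetical placeholder for evaluating the positional strategy induced by an ordering of the random vertices; the selection criterion below (largest value sum) is a simplification for illustration, whereas the actual algorithm searches for a self-consistent ordering.

from itertools import permutations

def best_permutation(random_vertices, value_of_permutation):
    # Each ordering of the random vertices induces a positional "permutation
    # strategy"; enumerate all of them and keep the one with the best values.
    # With a fixed number of random vertices the number of orderings is a
    # constant, which is where the fixed-parameter tractability comes from.
    best_perm, best_vals = None, None
    for perm in permutations(random_vertices):
        vals = value_of_permutation(perm)   # dict: vertex -> value under this ordering
        if best_vals is None or sum(vals.values()) > sum(best_vals.values()):
            best_perm, best_vals = perm, vals
    return best_perm, best_vals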
The window mechanism was introduced by Chatterjee et al. to reinforce mean-payoff and total-payoff objectives with time bounds in two-player turn-based games on graphs. It has since proved useful in a variety of settings, including parity objectives in games and both mean-payoff and parity objectives in Markov decision processes. We study window parity objectives in timed automata and timed games: given a bound on the window size, a path satisfies such an objective if, in all states along the path, we see a sufficiently small window in which the smallest priority is even. We show that checking that all time-divergent paths of a timed automaton satisfy such a window parity objective can be done in polynomial space, and that the corresponding timed games can be solved in exponential time. This matches the complexity class of timed parity games, while adding the ability to reason about time bounds. We also consider multi-dimensional objectives and show that the complexity class does not increase. To the best of our knowledge, this is the first study of the window mechanism in a real-time setting.
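Spelled out in the untimed formulation (standard definition; notation ours), for a window bound $\lambda$: a path whose states carry priorities $p_0 p_1 p_2 \ldots$ satisfies the window parity objective if for every position $i$ there is some $j$ with $i \leq j \leq i + \lambda$ such that $\min\{p_i, \ldots, p_j\}$ is even. In the timed setting of the paper, the bound constrains the time elapsed within the window rather than the number of steps.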
This article extends the idea of solving parity games by strategy iteration to non-deterministic strategies: in a non-deterministic strategy a player restricts himself to some non-empty subset of the possible actions at a given node, instead of committing to exactly one action. We show that a strategy-improvement algorithm by Björklund, Sandberg, and Vorobyov can easily be adapted to the more general setting of non-deterministic strategies. Further, we show that applying the heuristic of all profitable switches leads to choosing a locally optimal successor strategy in the setting of non-deterministic strategies, thereby obtaining an easy proof of an algorithm by Schewe. In contrast to the algorithm by Björklund et al., we present our algorithm directly for parity games, which allows us to compare it to the algorithm by Jurdziński and Vöge: we show that the valuations used in both algorithms coincide on parity game arenas in which one player can surrender. Thus, our algorithm can also be seen as a generalization of the one by Jurdziński and Vöge to non-deterministic strategies. Finally, using non-deterministic strategies allows us to show that the number of improvement steps is bounded from above by $O(1.724^n)$. For strategy-improvement algorithms, this bound was previously only known to be attainable by using randomization.
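A minimal sketch of what a locally optimal non-deterministic successor strategy could look like, keeping at each node every successor whose valuation is maximal; values and better are hypothetical placeholders for the valuations computed by the underlying strategy-improvement algorithm and for its strict ordering on valuations, not names from the paper.

def locally_optimal_strategy(successors, values, better):
    # For each node, keep ALL successors whose valuation is maximal with respect
    # to the strict ordering `better`, instead of committing to a single one.
    strategy = {}
    for node, succs in successors.items():
        best = []
        for s in succs:
            if not best or better(values[s], values[best[0]]):
                best = [s]                 # strictly better: restart the best set
            elif not better(values[best[0]], values[s]):
                best.append(s)             # equally good: keep it as well
        strategy[node] = set(best)
    return strategy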
