
On the Complexity of Computing Markov Perfect Equilibrium in General-Sum Stochastic Games

Posted by: Yuhao Li
Publication date: 2021
Research field: Informatics Engineering
Paper language: English





Similar to the role of Markov decision processes in reinforcement learning, Stochastic Games (SGs) lay the foundation for the study of multi-agent reinforcement learning (MARL) and sequential agent interactions. In this paper, we derive that computing an approximate Markov Perfect Equilibrium (MPE) in a finite-state discounted Stochastic Game within exponential precision is PPAD-complete. We adopt a function with a polynomially bounded description in the strategy space to convert the MPE computation to a fixed-point problem, even though the stochastic game may demand a number of pure strategies that is exponential in the number of states for each agent. The completeness result follows from the reduction of the fixed-point problem to End of the Line. Our results indicate that finding an MPE in SGs is highly unlikely to be NP-hard unless NP = co-NP. Our work offers confidence for MARL research to study MPE computation on general-sum SGs and to develop fruitful algorithms, as is currently done on zero-sum SGs.
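
The object of study can be made concrete. In a finite discounted SG, a stationary Markov strategy profile is an ε-approximate MPE exactly when no agent can gain more than ε from a one-shot deviation in any state (the one-shot deviation principle for discounted games). The Python sketch below checks this condition for a small two-agent game with explicit transition and reward tensors; the names and shapes are illustrative assumptions, and it is not the paper's fixed-point construction or its reduction, only the standard definition the result builds on.

    import numpy as np

    def stage_values(P, R, pi1, pi2, gamma):
        # Discounted values of both agents under a stationary joint policy.
        # P: transitions, shape (S, A1, A2, S); R: rewards, shape (2, S, A1, A2);
        # pi1: agent-1 policy, shape (S, A1); pi2: agent-2 policy, shape (S, A2).
        S = P.shape[0]
        joint = np.einsum('sa,sb->sab', pi1, pi2)        # pi1(a|s) * pi2(b|s)
        P_pi = np.einsum('sab,sabt->st', joint, P)       # induced Markov chain
        V = []
        for i in range(2):
            r_pi = np.einsum('sab,sab->s', joint, R[i])  # expected stage reward
            V.append(np.linalg.solve(np.eye(S) - gamma * P_pi, r_pi))
        return np.stack(V)                               # shape (2, S)

    def mpe_gap(P, R, pi1, pi2, gamma):
        # Largest one-shot deviation gain over all states and both agents.
        # The profile (pi1, pi2) is an eps-approximate MPE when this gap is <= eps.
        V = stage_values(P, R, pi1, pi2, gamma)
        gap = 0.0
        for i, (pi_i, pi_j) in enumerate([(pi1, pi2), (pi2, pi1)]):
            # Q_i(s, a_i): play a_i once against the opponent's policy, then follow the profile.
            cont = R[i] + gamma * np.einsum('sabt,t->sab', P, V[i])
            Q = np.einsum('sb,sab->sa', pi_j, cont) if i == 0 else np.einsum('sa,sab->sb', pi_j, cont)
            on_policy = np.einsum('sa,sa->s', pi_i, Q)
            gap = max(gap, float(np.max(Q.max(axis=1) - on_policy)))
        return gap
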


Read also

This paper considers two-player zero-sum finite-horizon Markov games with simultaneous moves. The study focuses on the challenging settings where the value function or the model is parameterized by general function classes. Provably efficient algorithms for both decoupled and coordinated settings are developed. In the decoupled setting, where the agent controls a single player and plays against an arbitrary opponent, we propose a new model-free algorithm. The sample complexity is governed by the Minimax Eluder dimension -- a new dimension of the function class in Markov games. As a special case, this method improves the state-of-the-art algorithm by a $\sqrt{d}$ factor in the regret when the reward function and transition kernel are parameterized with $d$-dimensional linear features. In the coordinated setting, where both players are controlled by the agent, we propose a model-based algorithm and a model-free algorithm. In the model-based algorithm, we prove that the sample complexity can be bounded by a generalization of Witness rank to Markov games. The model-free algorithm enjoys a $\sqrt{K}$-regret upper bound, where $K$ is the number of episodes. Our algorithms are based on new techniques of alternate optimism.
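
For reference, the planning counterpart of this problem (a finite-horizon, simultaneous-move, zero-sum Markov game with known transitions and rewards) can be solved exactly by minimax value iteration, where each state's stage game is a matrix game solved by a small linear program. The Python sketch below, with assumed tensor shapes, is this tabular baseline only, not the sample-efficient, function-approximation algorithms of the paper.

    import numpy as np
    from scipy.optimize import linprog

    def matrix_game_value(A):
        # Value and maximizer strategy of the zero-sum matrix game max_x min_y x^T A y.
        m, n = A.shape
        c = np.zeros(m + 1); c[-1] = -1.0                # variables (x_1..x_m, v); maximize v
        A_ub = np.hstack([-A.T, np.ones((n, 1))])        # v <= (A^T x)_j for every column j
        b_ub = np.zeros(n)
        A_eq = np.zeros((1, m + 1)); A_eq[0, :m] = 1.0   # x is a probability distribution
        b_eq = np.ones(1)
        bounds = [(0, None)] * m + [(None, None)]
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
        return res.x[-1], res.x[:m]

    def minimax_value_iteration(P, R, H):
        # Exact values of a finite-horizon zero-sum Markov game with simultaneous moves.
        # P: transitions, shape (S, A, B, S); R: max-player reward, shape (S, A, B); H: horizon.
        S = P.shape[0]
        V = np.zeros(S)
        values = [V]
        for _ in range(H):
            V_new = np.zeros(S)
            for s in range(S):
                Q = R[s] + np.einsum('abt,t->ab', P[s], V)   # stage game at state s
                V_new[s], _ = matrix_game_value(Q)
            V = V_new
            values.append(V)
        return values[::-1]                                  # values[h][s]: h steps elapsed
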
We study multi-agent reinforcement learning (MARL) in infinite-horizon discounted zero-sum Markov games. We focus on the practical but challenging setting of decentralized MARL, where agents make decisions without coordination by a centralized controller, based only on their own payoffs and the local actions they execute. The agents need not observe the opponent's actions or payoffs, may even be oblivious to the presence of the opponent, and need not be aware of the zero-sum structure of the underlying game, a setting also referred to as radically uncoupled in the literature on learning in games. In this paper, we develop for the first time a radically uncoupled Q-learning dynamics that is both rational and convergent: the learning dynamics converges to the best response to the opponent's strategy when the opponent follows an asymptotically stationary strategy, and the value function estimates converge to the payoffs at a Nash equilibrium when both agents adopt the dynamics. The key challenge in this decentralized setting is the non-stationarity of the learning environment from an agent's perspective, since both her own payoffs and the system evolution depend on the actions of the other agent, and each agent adapts her policy simultaneously and independently. To address this issue, we develop a two-timescale learning dynamics where each agent updates her local Q-function and value function estimates concurrently, with the latter happening at a slower timescale.
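
A structural skeleton of such a two-timescale, payoff-based update is sketched below in Python. The stepsize schedules, the softmax smoothing, and the way V tracks the local Q-values are illustrative assumptions rather than the authors' exact dynamics or convergence conditions; the point is only that the agent updates its local Q-function at a fast rate and its value estimate at a slower one, using nothing but its own state, action, and reward observations.

    import numpy as np

    class TwoTimescaleAgent:
        # Payoff-based learner: local Q(s, a) updated fast, local V(s) updated slowly.
        def __init__(self, n_states, n_actions, gamma=0.95, temperature=0.1):
            self.Q = np.zeros((n_states, n_actions))
            self.V = np.zeros(n_states)
            self.gamma = gamma
            self.tau = temperature
            self.visits = np.zeros(n_states)

        def act(self, s, rng):
            # Smoothed (softmax) best response to the agent's own local Q-values.
            z = self.Q[s] / self.tau
            p = np.exp(z - z.max()); p /= p.sum()
            return rng.choice(len(p), p=p)

        def update(self, s, a, r, s_next):
            self.visits[s] += 1
            k = self.visits[s]
            alpha = 1.0 / k                               # fast timescale: Q-update
            beta = 1.0 / (1.0 + k * np.log(k + 1.0))      # slower timescale: value update
            self.Q[s, a] += alpha * (r + self.gamma * self.V[s_next] - self.Q[s, a])
            # The value estimate tracks the softmax value of the local Q-values, but more slowly.
            z = self.Q[s] / self.tau
            soft_value = self.tau * (np.log(np.sum(np.exp(z - z.max()))) + z.max())
            self.V[s] += beta * (soft_value - self.V[s])
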
We present fictitious play dynamics for stochastic games and analyze its convergence properties in zero-sum stochastic games. Our dynamics involves players forming beliefs about the opponent's strategy and about their own continuation payoff (Q-function), and playing a greedy best response using the estimated continuation payoffs. Players update their beliefs from observations of opponent actions. A key property of the learning dynamics is that the update of the beliefs on Q-functions occurs at a slower timescale than the update of the beliefs on strategies. We show that, in both the model-based and model-free cases (the latter without knowledge of player payoff functions and state transition probabilities), the beliefs on strategies converge to a stationary mixed Nash equilibrium of the zero-sum stochastic game.
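
The belief structure described above can be sketched as follows (Python, illustrative stepsizes): one player keeps an empirical count of the opponent's actions at each state (the strategy belief, updated at the fast timescale) and a joint-action Q-table (the continuation-payoff belief, updated more slowly), and best-responds greedily to the two beliefs combined. This is a schematic of the dynamics' shape, not the analyzed algorithm.

    import numpy as np

    class FictitiousPlayBeliefs:
        # Beliefs held by one player in a two-player zero-sum stochastic game.
        def __init__(self, n_states, my_actions, opp_actions, gamma=0.95):
            self.counts = np.ones((n_states, opp_actions))          # opponent action counts
            self.Q = np.zeros((n_states, my_actions, opp_actions))  # continuation-payoff belief
            self.gamma = gamma
            self.t = np.zeros(n_states)

        def best_response(self, s):
            belief = self.counts[s] / self.counts[s].sum()          # empirical opponent mix
            return int(np.argmax(self.Q[s] @ belief))               # greedy against the belief

        def observe(self, s, my_a, opp_a, r, s_next):
            self.counts[s, opp_a] += 1                              # fast: strategy belief
            self.t[s] += 1
            beta = 1.0 / (1.0 + self.t[s] * np.log(self.t[s] + 1.0))  # slower: Q belief
            belief_next = self.counts[s_next] / self.counts[s_next].sum()
            v_next = np.max(self.Q[s_next] @ belief_next)
            self.Q[s, my_a, opp_a] += beta * (r + self.gamma * v_next - self.Q[s, my_a, opp_a])
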
Hugo Gimbert, 2013
We prove that optimal strategies exist in every perfect-information stochastic game with finitely many states and actions and a tail winning condition.
The fast-growing market of autonomous vehicles, unmanned aerial vehicles, and fleets in general necessitates the design of smart and automatic navigation systems that account for the stochastic latency along different paths in the traffic network. The longstanding shortest path problem in a deterministic network, whose counterpart in a congestion game setting is the Wardrop equilibrium, has been studied extensively, but it is well known that defining a notion of an optimal path is challenging in a traffic network with stochastic arc delays. In this work, we propose three classes of risk-averse equilibria for an atomic stochastic congestion game in its general form, where the arc delay distributions are load dependent and not necessarily independent of each other. The three classes are the risk-averse equilibrium (RAE), the mean-variance equilibrium (MVE), and the conditional value-at-risk at level $\alpha$ equilibrium (CVaR$_\alpha$E), whose notions of risk-averse best response are based on maximizing the probability of taking the shortest path, minimizing a linear combination of the mean and variance of path delay, and minimizing the expected delay at a specified risky quantile of the delay distribution, respectively. We prove that for any finite stochastic atomic congestion game, the risk-averse, mean-variance, and CVaR$_\alpha$ equilibria exist. We show that for risk-averse travelers, the Braess paradox may not occur to the extent presented originally, since players do not necessarily travel along the path that is shortest in expectation but also take the uncertainty of travel time into consideration. We show through examples that the price of anarchy can improve when players are risk-averse and travel according to one of the three classes of risk-averse equilibria rather than the Wardrop equilibrium.
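
The three risk-averse best-response criteria can be illustrated on sampled path delays, as in the Python sketch below. The empirical CVaR convention (average of the worst (1 - alpha) fraction of sampled delays) and the probability-of-shortest-path proxy for the RAE criterion are assumptions made for the sketch; the paper's formal, load-dependent definitions are richer.

    import numpy as np

    def mean_variance_objective(delays, lam=1.0):
        # Mean-variance score of sampled path delays: mean + lam * variance.
        return float(np.mean(delays) + lam * np.var(delays))

    def cvar(delays, alpha=0.9):
        # Average of the worst (1 - alpha) fraction of sampled delays (empirical convention).
        d = np.sort(np.asarray(delays, dtype=float))
        k = max(1, int(np.ceil((1.0 - alpha) * len(d))))
        return float(d[-k:].mean())

    def risk_averse_best_response(path_delay_samples, alpha=0.9, lam=1.0):
        # Index of the best path under each of the three risk-averse criteria.
        # path_delay_samples: list of equal-length 1-D arrays, one per candidate path.
        samples = np.vstack(path_delay_samples)                       # shape (paths, draws)
        p_shortest = np.mean(samples == samples.min(axis=0), axis=1)  # sample proxy for RAE
        return {
            "RAE":  int(np.argmax(p_shortest)),
            "MVE":  int(np.argmin([mean_variance_objective(d, lam) for d in samples])),
            "CVaR": int(np.argmin([cvar(d, alpha) for d in samples])),
        }
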
