Deep Q-Learning for Nash Equilibria: Nash-DQN

Posted by Sebastian Jaimungal
Publication date: 2019
Research field: Informatics Engineering
Paper language: English

Model-free learning for multi-agent stochastic games is an active area of research. Existing reinforcement learning algorithms, however, are often restricted to zero-sum games and are applicable only in small state-action spaces or other simplified settings. Here, we develop a new data-efficient Deep Q-learning methodology for model-free learning of Nash equilibria in general-sum stochastic games. The algorithm uses a local linear-quadratic expansion of the stochastic game, which leads to analytically solvable optimal actions. The expansion is parametrized by deep neural networks to give it sufficient flexibility to learn the environment without the need to experience all state-action pairs. We study symmetry properties of the algorithm stemming from label-invariant stochastic games and, as a proof of concept, apply our algorithm to learning optimal trading strategies in competitive electronic markets.
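As a rough illustration of the local linear-quadratic idea, here is a minimal PyTorch sketch (not the authors' code; the two-agent setup, network sizes, and the exact quadratic parametrisation below are our assumptions, and the published parametrisation separates an agent's own action deviation from its opponents'):

import torch
import torch.nn as nn

N_AGENTS, ACT_DIM, STATE_DIM = 2, 1, 4
A = N_AGENTS * ACT_DIM  # dimension of the joint action

class NashDQNHead(nn.Module):
    """Maps a state to (V, mu, P): per-agent values, a joint Nash action,
    and per-agent positive semidefinite matrices for the quadratic term."""
    def __init__(self, hidden=64):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(STATE_DIM, hidden), nn.ReLU(),
                                   nn.Linear(hidden, hidden), nn.ReLU())
        self.value = nn.Linear(hidden, N_AGENTS)         # V_i(s)
        self.mu = nn.Linear(hidden, A)                   # joint Nash action mu(s)
        self.chol = nn.Linear(hidden, N_AGENTS * A * A)  # factors of P_i(s)

    def forward(self, s):
        h = self.trunk(s)
        L = self.chol(h).view(-1, N_AGENTS, A, A).tril() # lower-triangular factor
        P = L @ L.transpose(-1, -2)                      # PSD by construction
        return self.value(h), self.mu(h), P

def q_values(V, mu, P, a):
    """Q_i(s, a) = V_i(s) - 0.5 * (a - mu(s))' P_i(s) (a - mu(s))."""
    d = (a - mu).unsqueeze(1).unsqueeze(-1)              # (batch, 1, A, 1)
    quad = (d.transpose(-1, -2) @ P @ d).squeeze(-1).squeeze(-1)
    return V - 0.5 * quad                                # (batch, N_AGENTS)

# Each Q_i is a concave quadratic in the deviation from mu(s), so the greedy
# joint action in the Q-learning update is mu(s) itself, with no inner solve:
s = torch.randn(8, STATE_DIM)
V, mu, P = NashDQNHead()(s)
assert torch.allclose(q_values(V, mu, P, mu), V)  # at the Nash action, Q_i = V_i

This analytic argmax is what removes the need to sweep the joint state-action space when fitting the Q-function from data.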




Read also

Generative adversarial networks (GANs) represent a zero-sum game between two machine players, a generator and a discriminator, designed to learn the distribution of data. While GANs have achieved state-of-the-art performance in several benchmark learning tasks, GAN minimax optimization still poses great theoretical and empirical challenges. GANs trained using first-order optimization methods commonly fail to converge to a stable solution where the players cannot improve their objective, i.e., the Nash equilibrium of the underlying game. Such issues raise the question of the existence of Nash equilibrium solutions in the GAN zero-sum game. In this work, we show through several theoretical and numerical results that indeed GAN zero-sum games may not have any local Nash equilibria. To characterize an equilibrium notion applicable to GANs, we consider the equilibrium of a new zero-sum game with an objective function given by a proximal operator applied to the original objective, a solution we call the proximal equilibrium. Unlike the Nash equilibrium, the proximal equilibrium captures the sequential nature of GANs, in which the generator moves first followed by the discriminator. We prove that the optimal generative model in Wasserstein GAN problems provides a proximal equilibrium. Inspired by these results, we propose a new approach, which we call proximal training, for solving GAN problems. We discuss several numerical experiments demonstrating the existence of proximal equilibrium solutions in GAN minimax problems.
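For intuition, the proximal objective described above can be sketched as follows (our notation, which may differ from the paper's; lambda > 0 is a proximity penalty and d a distance on the discriminator's parameters):

\[
  V_{\mathrm{prox}}(G, D) \;=\; \max_{\tilde{D}} \Big[\, V(G, \tilde{D}) \,-\, \tfrac{\lambda}{2}\, d(\tilde{D}, D)^{2} \Big]
\]

A pair (G*, D*) is then a proximal equilibrium when it is a Nash equilibrium of the zero-sum game with payoff V_prox in place of V: the inner maximisation grants the discriminator a local best response, which encodes the generator-moves-first order the authors emphasise.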
We reconsider the training objective of Generative Adversarial Networks (GANs) from the mixed Nash Equilibria (NE) perspective. Inspired by the classical prox methods, we develop a novel algorithmic framework for GANs via an infinite-dimensional two-player game and prove rigorous convergence rates to the mixed NE, resolving the longstanding problem that no provably convergent algorithm exists for general GANs. We then propose a principled procedure to reduce our novel prox methods to simple sampling routines, leading to practically efficient algorithms. Finally, we provide experimental evidence that our approach outperforms methods that seek pure strategy equilibria, such as SGD, Adam, and RMSProp, both in speed and quality.
Patricia Bouyer, 2015
We study pure-strategy Nash equilibria in multi-player concurrent deterministic games, for a variety of preference relations. We provide a novel construction, called the suspect game, which transforms a multi-player concurrent game into a two-player turn-based game which turns Nash equilibria into winning strategies (for some objective that depends on the preference relations of the players in the original game). We use that transformation to design algorithms for computing Nash equilibria in finite games, which in most cases have optimal worst-case complexity, for large classes of preference relations. This includes the purely qualitative framework, where each player has a single omega-regular objective that she wants to satisfy, but also the larger class of semi-quantitative objectives, where each player has several omega-regular objectives equipped with a preorder (for instance, a player may want to satisfy all her objectives, or to maximise the number of objectives that she achieves).
We show that the BIMATRIX game does not have a fully polynomial-time approximation scheme, unless PPAD is in P. In other words, no algorithm with time polynomial in n and 1/epsilon can compute an epsilon-approximate Nash equilibrium of an n-by-n bimatrix game, unless PPAD is in P. Instrumental to our proof, we introduce a new discrete fixed-point problem on a high-dimensional cube with a constant side-length, such as on an n-dimensional cube with side-length 7, and show that they are PPAD-complete. Furthermore, we prove, unless PPAD is in RP, that the smoothed complexity of the Lemke-Howson algorithm or any algorithm for computing a Nash equilibrium of a bimatrix game is polynomial in n and 1/sigma under perturbations with magnitude sigma. Our result answers a major open question in the smoothed analysis of algorithms and the approximation of Nash equilibria.
We present the concept of a Generalized Feedback Nash Equilibrium (GFNE) in dynamic games, extending the Feedback Nash Equilibrium concept to games in which players are subject to state and input constraints. We formalize necessary and sufficient conditions for (local) GFNE solutions at the trajectory level, which enable the development of efficient numerical methods for their computation. Specifically, we propose a Newton-style method for finding game trajectories which satisfy the necessary conditions, which can then be checked against the sufficiency conditions. We show that the evaluation of the necessary conditions in general requires computing a series of nested, implicitly-defined derivatives, which quickly becomes intractable. To this end, we introduce an approximation to the necessary conditions which is amenable to efficient evaluation, and in turn, computation of solutions. We term the solutions to the approximate necessary conditions Generalized Feedback Quasi Nash Equilibria (GFQNE), and we introduce numerical methods for their computation. In particular, we develop a Sequential Linear-Quadratic Game approach, in which a locally approximate LQ game is solved at each iteration. The development of this method relies on the ability to compute a GFNE to inequality- and equality-constrained LQ games, and therefore specific methods for the solution of these special cases are developed in detail. We demonstrate the effectiveness of the proposed solution approach on a dynamic game arising in an autonomous driving application.
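To make the linear-quadratic subproblem concrete, here is a minimal sketch (ours, not the paper's solver) of the Nash point of a one-step, unconstrained, two-player LQ game; the paper's method additionally handles multi-step dynamics, the feedback information structure, and state and input constraints:

import numpy as np

def one_step_lq_nash(A, B1, B2, Q1, Q2, R1, R2, x0):
    """Player i chooses u_i to minimise 0.5*x'Q_i x + 0.5*u_i'R_i u_i,
    with x = A x0 + B1 u1 + B2 u2. Setting each player's gradient in its
    own input to zero and stacking gives one square linear system."""
    K = np.block([[R1 + B1.T @ Q1 @ B1, B1.T @ Q1 @ B2],
                  [B2.T @ Q2 @ B1,      R2 + B2.T @ Q2 @ B2]])
    rhs = -np.concatenate([B1.T @ Q1 @ A @ x0, B2.T @ Q2 @ A @ x0])
    u = np.linalg.solve(K, rhs)   # unique Nash point when K is invertible
    m1 = B1.shape[1]
    return u[:m1], u[m1:]

# Tiny usage example: 2-d state, scalar inputs for both players.
rng = np.random.default_rng(0)
A = rng.standard_normal((2, 2))
B1, B2 = rng.standard_normal((2, 1)), rng.standard_normal((2, 1))
u1, u2 = one_step_lq_nash(A, B1, B2, np.eye(2), 2 * np.eye(2),
                          np.eye(1), np.eye(1), x0=np.ones(2))

Stacking all players' stationarity conditions into one linear system is, in spirit, the same move the paper's Newton-style method applies at scale to the full set of necessary conditions of the constrained dynamic game.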
