
Continuous Behavioural Function Equilibria and Approximation Schemes in Bayesian Games with Non-Finite Type and Action Spaces

Published by Shaoyan Guo
Publication date: 2017
Research field:
Paper language: English





Meirowitz [17] showed existence of continuous behavioural function equilibria for Bayesian games with non-finite type and action spaces. A key condition in the proof of the existence result is equi-continuity of the behavioural functions which, according to Meirowitz [17, page 215], is likely to fail or be difficult to verify. In this paper, we advance this line of research by presenting verifiable conditions for the required equi-continuity, namely growth conditions on the expected utility functions of each player at equilibria. In the case when the growth is of second order, we demonstrate that the condition is guaranteed by strong concavity of the utility function. Moreover, drawing on recent research on polynomial decision rules and optimal discretization approaches in stochastic and robust optimization, we propose approximation schemes for the Bayesian equilibrium problem: first, by restricting the behavioural functions to polynomial functions of a certain order over the space of types, we show that solving a Bayesian polynomial behavioural function equilibrium reduces to solving a finite-dimensional stochastic equilibrium problem; second, we apply the optimal quantization method of Pflug and Pichler [18] to develop an effective discretization scheme for solving the latter. Error bounds are derived for the respective approximation schemes under moderate conditions, and both academic examples and numerical results are presented to illustrate the Bayesian equilibrium problem and the approximation schemes.
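To make the two approximation steps concrete, the following is a minimal numerical sketch, not the paper's implementation: a single player's behavioural function is restricted to a polynomial in the type, and the continuous type distribution (assumed here to be Uniform[0,1]) is replaced by a Lloyd-type quantization that stands in for the optimal quantization of Pflug and Pichler [18]. The utility function, polynomial degree, sample size and number of quantization points are all hypothetical placeholders.

```python
# Minimal sketch of the two approximation steps (hypothetical setup, not the
# paper's implementation): (i) a polynomial behavioural function over types,
# (ii) a Lloyd-type quantization of a continuous type distribution.
import numpy as np

def lloyd_quantize(samples, m, iters=50):
    """Quantize a 1-D sample cloud into m points with Lloyd's algorithm.
    Returns the quantization points and the probability of each Voronoi cell."""
    points = np.quantile(samples, (np.arange(m) + 0.5) / m)  # initial grid
    for _ in range(iters):
        idx = np.argmin(np.abs(samples[:, None] - points[None, :]), axis=1)
        for j in range(m):
            cell = samples[idx == j]
            if cell.size:                     # move point to its cell's mean
                points[j] = cell.mean()
    idx = np.argmin(np.abs(samples[:, None] - points[None, :]), axis=1)
    probs = np.bincount(idx, minlength=m) / samples.size
    return points, probs

def behavioural_poly(coeffs, t):
    """Polynomial behavioural function a(t) = sum_k coeffs[k] * t**k."""
    return np.polyval(coeffs[::-1], t)

def expected_utility(coeffs, points, probs, utility):
    """Expected utility of the polynomial rule under the quantized type law."""
    actions = behavioural_poly(coeffs, points)
    return np.sum(probs * utility(points, actions))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    samples = rng.uniform(0.0, 1.0, size=10_000)      # types ~ Uniform[0,1]
    points, probs = lloyd_quantize(samples, m=10)

    # A made-up strongly concave utility in the action (quadratic tracking
    # loss), standing in for a player's payoff given the opponents' behaviour.
    utility = lambda t, a: -(a - t) ** 2

    coeffs = np.array([0.1, 0.8, 0.0])                # a(t) = 0.1 + 0.8 t
    print(expected_utility(coeffs, points, probs, utility))
```

Once the type distribution is quantized, the polynomial coefficients of each player become finite-dimensional decision variables, which is the finite-dimensional stochastic equilibrium problem referred to above.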




Read also

Motivated by the success of reinforcement learning (RL) for discrete-time tasks such as AlphaGo and Atari games, there has been a recent surge of interest in using RL for continuous-time control of physical systems (cf. many challenging tasks in OpenAI Gym and DeepMind Control Suite). Since discretization of time is susceptible to error, it is methodologically more desirable to handle the system dynamics directly in continuous time. However, very few techniques exist for continuous-time RL and they lack flexibility in value function approximation. In this paper, we propose a novel framework for model-based continuous-time value function approximation in reproducing kernel Hilbert spaces. The resulting framework is so flexible that it can accommodate any kind of kernel-based approach, such as Gaussian processes and kernel adaptive filters, and it allows us to handle uncertainties and nonstationarity without prior knowledge about the environment or what basis functions to employ. We demonstrate the validity of the presented framework through experiments.
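As a rough illustration of representing a value function in a reproducing kernel Hilbert space (a generic kernel ridge regression sketch, not the framework proposed in that paper), the snippet below fits V from sampled state-target pairs; the states, targets, bandwidth and regularization are arbitrary placeholders.

```python
# Generic kernel-based value-function fit (illustrative only): V lives in the
# RKHS of a Gaussian kernel and is fitted by kernel ridge regression.
import numpy as np

def gaussian_kernel(X, Y, bandwidth=0.5):
    """Gram matrix k(x, y) = exp(-||x - y||^2 / (2 * bandwidth^2))."""
    sq = np.sum((X[:, None, :] - Y[None, :, :]) ** 2, axis=-1)
    return np.exp(-sq / (2.0 * bandwidth ** 2))

def fit_value_function(states, targets, reg=1e-3):
    """Solve (K + reg*I) alpha = targets; then V(x) = sum_i alpha_i k(x_i, x)."""
    K = gaussian_kernel(states, states)
    alpha = np.linalg.solve(K + reg * np.eye(len(states)), targets)
    return lambda x: gaussian_kernel(np.atleast_2d(x), states) @ alpha

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    states = rng.uniform(-1.0, 1.0, size=(200, 2))               # sampled states
    targets = np.cos(states[:, 0]) + 0.1 * rng.normal(size=200)  # mock value targets
    V = fit_value_function(states, targets)
    print(V(np.array([0.2, -0.3]))[0])
```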
We study quitting games and define the concept of absorption paths, which is an alternative definition to strategy profiles that accommodates both discrete-time and continuous-time aspects, and is parameterized by the total probability of absorption in past play rather than by time. We then define the concept of sequentially 0-perfect absorption paths, which are shown to be limits of $\epsilon$-equilibrium strategy profiles as $\epsilon$ goes to 0. We finally identify a class of quitting games that possess sequentially 0-perfect absorption paths.
We prove that every repeated game with countably many players, finite action sets, and tail-measurable payoffs admits an $\epsilon$-equilibrium, for every $\epsilon > 0$.
We study a class of deterministic finite-horizon two-player nonzero-sum differential games where players are endowed with different kinds of controls. We assume that Player 1 uses piecewise-continuous controls, while Player 2 uses impulse controls. For this class of games, we seek to derive conditions for the existence of feedback Nash equilibrium strategies for the players. More specifically, we provide a verification theorem for identifying such equilibrium strategies, using the Hamilton-Jacobi-Bellman (HJB) equations for Player 1 and the quasi-variational inequalities (QVIs) for Player 2. Further, we show that the equilibrium number of interventions by Player 2 is upper bounded. Furthermore, we specialize the obtained results to a scalar two-player linear-quadratic differential game. In this game, Player 1's objective is to drive the state variable towards a specific target value, and Player 2 has a similar objective with a different target value. We provide, for the first time, an analytical characterization of the feedback Nash equilibrium in a linear-quadratic differential game with impulse control. We illustrate our results using numerical experiments.
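For orientation, the HJB equation invoked for Player 1 takes, in a generic finite-horizon control problem, the familiar form below; the dynamics $g$, running payoff $f_1$ and terminal payoff $\phi_1$ are generic placeholders rather than that paper's specification.

```latex
% Generic finite-horizon HJB equation for Player 1 (placeholder data, not the
% paper's model): value function V_1, dynamics g, running payoff f_1,
% terminal payoff \phi_1.
\[
  -\partial_t V_1(t,x)
    = \max_{u}\Big\{ f_1(t,x,u) + \nabla_x V_1(t,x)^{\top} g(t,x,u) \Big\},
  \qquad V_1(T,x) = \phi_1(x).
\]
```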
We study a wide class of non-convex non-concave min-max games that generalizes over standard bilinear zero-sum games. In this class, players control the inputs of a smooth function whose output is then fed into a bilinear zero-sum game. This class of games is motivated by the indirect nature of the competition in Generative Adversarial Networks, where players control the parameters of a neural network while the actual competition happens between the distributions that the generator and discriminator capture. We establish theoretically that, depending on the specific instance of the problem, gradient-descent-ascent dynamics can exhibit a variety of behaviors antithetical to convergence to the game-theoretically meaningful min-max solution. Specifically, different forms of recurrent behavior (including periodicity and Poincaré recurrence) are possible, as well as convergence to spurious (non-min-max) equilibria for a positive measure of initial conditions. At the technical level, our analysis combines tools from optimization theory, game theory and dynamical systems.
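The non-convergent behaviour of gradient-descent-ascent can already be seen on the plain bilinear game f(x, y) = xy; the short sketch below (not that paper's experiments; step size and starting point are arbitrary) runs simultaneous GDA and prints the distance to the min-max solution (0, 0), which grows instead of shrinking.

```python
# Simultaneous gradient descent-ascent on the bilinear game f(x, y) = x * y.
# The iterates spiral away from the min-max solution (0, 0), illustrating the
# kind of non-convergent dynamics discussed above.
def gda_bilinear(x=1.0, y=1.0, step=0.1, iters=200):
    trajectory = [(x, y)]
    for _ in range(iters):
        grad_x, grad_y = y, x                           # df/dx = y, df/dy = x
        x, y = x - step * grad_x, y + step * grad_y     # descent in x, ascent in y
        trajectory.append((x, y))
    return trajectory

if __name__ == "__main__":
    for x, y in gda_bilinear()[::50]:
        print(f"x={x:+.3f}  y={y:+.3f}  dist={(x * x + y * y) ** 0.5:.3f}")
```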