
Learning Noisy Hedonic Games

Submitted by: Prashant Trivedi
Publication date: 2021
Research field: Informatics Engineering
Paper language: English





We consider the learning task of predicting the formation of a core stable coalition structure in hedonic games based on agents' noisy preferences. We consider two cases: complete information (the noisy preferences of all agents are entirely known) and partial information (noisy preferences over only some coalitions are known). We introduce a noise model that probabilistically scales the valuations of coalitions. The performance metric is the probability that our prediction, conditioned on all or a few noisy preferences of the agents, is correct. The nature of our results is that this prediction probability is relatively low, including being zero, and it is rarely one. In the complete-information two-agent model, in which each agent `retains' or `inflates' the values of its coalitions, we derive expressions for the prediction probabilities in terms of the noise probability. We identify the interval of the noise probability such that the prediction probability is at least a user-given threshold. It turns out that, for some noisy games, this interval does not exist for a threshold as low as 0.1481, demonstrating that the prediction probabilities are generally low even in this model. In the partial-information setup, we consider $n$-agent games with $l$-support noise values, where such noisy preferences are available only for some coalitions. We obtain bounds on the probability that a predicted partition is $\epsilon$-PAC stable in the noise-free game, in the cases when the realized noisy game does or does not have an $\epsilon$-PAC stable outcome.
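As a rough illustration of the complete-information setting, the sketch below simulates a retain-or-inflate noise model on a small two-agent hedonic game and estimates, by Monte Carlo, how often a core stable partition predicted from the noisy game is also core stable in the noise-free game. The specific coalition valuations, the inflation factor, and the brute-force core check are illustrative assumptions, not the paper's construction.

```python
import itertools
import random

AGENTS = (0, 1)  # two-agent game, as in the complete-information model above

# Assumed noise-free coalition valuations: the value each agent assigns to the
# coalitions that contain it (its singleton and the grand coalition).
BASE_VALUES = {
    0: {frozenset({0}): 2.0, frozenset({0, 1}): 3.0},
    1: {frozenset({1}): 2.5, frozenset({0, 1}): 2.0},
}

def noisy_values(p, inflate=1.5):
    """Retain each valuation with prob 1-p, inflate it by a factor with prob p."""
    return {
        i: {S: v * (inflate if random.random() < p else 1.0) for S, v in vals.items()}
        for i, vals in BASE_VALUES.items()
    }

def partitions(agents):
    """All partitions of a two-agent set: either apart or together."""
    a, b = agents
    yield [frozenset({a}), frozenset({b})]
    yield [frozenset({a, b})]

def is_core_stable(partition, values):
    """A partition is core stable if no coalition strictly improves all its members."""
    own = {i: next(S for S in partition if i in S) for i in AGENTS}
    for r in range(1, len(AGENTS) + 1):
        for T in map(frozenset, itertools.combinations(AGENTS, r)):
            if all(values[i][T] > values[i][own[i]] for i in T):
                return False  # coalition T blocks the partition
    return True

def prediction_probability(p, trials=10_000):
    """Estimate P(partition predicted from a noisy realization is core stable noise-free)."""
    hits = total = 0
    for _ in range(trials):
        noisy = noisy_values(p)
        stable_noisy = [pi for pi in partitions(AGENTS) if is_core_stable(pi, noisy)]
        if not stable_noisy:
            continue  # defensive: no prediction possible in this realization
        total += 1
        prediction = stable_noisy[0]   # predict the first core stable partition found
        hits += is_core_stable(prediction, BASE_VALUES)
    return hits / total if total else float("nan")

if __name__ == "__main__":
    for p in (0.1, 0.3, 0.5):
        print(f"noise prob {p}: estimated prediction probability "
              f"{prediction_probability(p):.3f}")
```
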


Read also

Dong Hao, Zhihai Rong, Tao Zhou (2014)
Repeated game theory has been one of the most prevalent tools for understanding long-run relationships, which are cornerstones in building human society. Recent works have revealed a new set of zero-determinant (ZD) strategies, which are an important advance in repeated games. A ZD strategy player can exert unilateral control over the two players' payoffs. In particular, he can deterministically set the opponent's payoff, or enforce an unfair linear relationship between the players' payoffs, thereby always seizing an advantageous share of the payoffs. One limitation of the original ZD strategies, however, is that they do not capture the notion of robustness when the game is subjected to stochastic errors. In this paper, we propose a general model of ZD strategies for noisy repeated games, and find that ZD strategies have high robustness against errors. We further derive the pinning strategy under noise, by which the ZD strategy player coercively sets the opponent's expected payoff to his desired level, although his payoff control ability declines as the noise strength increases. Due to the uncertainty caused by noise, the ZD strategy player cannot guarantee his payoff to be higher than the opponent's, which implies strong extortions do not exist even under low noise. Nevertheless, we show that the ZD strategy player can still establish a novel kind of extortion, named weak extortion, where any increase of his own payoff always exceeds that of the opponent by a fixed percentage, and the conditions under which weak extortions can be realized become more stringent as the noise grows stronger.
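The pinning idea can be checked numerically in the noiseless iterated prisoner's dilemma. The sketch below builds the Markov chain over the four outcomes induced by two memory-one strategies and verifies that the classical Press-Dyson equalizer strategy (2/3, 0, 2/3, 1/3) pins the co-player's long-run payoff to 2 regardless of the co-player's strategy; the payoff matrix, the flip-probability noise, and the strategies tested are assumptions for illustration, not the model of this paper.

```python
import numpy as np

R, S, T, P = 3.0, 0.0, 5.0, 1.0          # assumed prisoner's dilemma payoffs
Y_PAYOFF = np.array([R, T, S, P])        # Y's payoff in states (CC, CD, DC, DD), X's move first

# Press-Dyson "equalizer" memory-one strategy for X: probability of cooperating
# after outcomes (CC, CD, DC, DD); it pins Y's payoff to 2 in the noiseless game.
EQUALIZER = np.array([2 / 3, 0.0, 2 / 3, 1 / 3])

def flip(prob_coop, eps):
    """Implementation noise: an intended action is flipped with probability eps."""
    return prob_coop * (1 - eps) + (1 - prob_coop) * eps

def long_run_payoff_Y(p, q, eps=0.0):
    """Y's average payoff when X plays memory-one strategy p and Y plays q."""
    q_aligned = q[[0, 2, 1, 3]]          # re-index q from Y's to X's perspective
    px, py = flip(p, eps), flip(q_aligned, eps)
    # Row = current outcome, columns = next outcome (CC, CD, DC, DD).
    M = np.column_stack([px * py, px * (1 - py), (1 - px) * py, (1 - px) * (1 - py)])
    dist = np.full(4, 0.25)
    for _ in range(10_000):              # stationary distribution by power iteration
        dist = dist @ M
    return float(dist @ Y_PAYOFF)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    for _ in range(3):
        q = rng.uniform(size=4)          # an arbitrary opponent strategy
        print("noiseless:", round(long_run_payoff_Y(EQUALIZER, q), 3),
              " eps=0.05:", round(long_run_payoff_Y(EQUALIZER, q, 0.05), 3))
```

Under eps = 0 the printed payoff stays at 2 for every random opponent, while any positive flip probability moves it away from the target, consistent with the abstract's point that payoff control degrades with noise.
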
Yuanyuan Shi, Baosen Zhang (2019)
This paper examines the convergence of no-regret learning in Cournot games with continuous actions. Cournot games are an essential model for many socio-economic systems, where players compete by strategically setting their output quantity. We assume that players do not have full information of the game and thus cannot pre-compute a Nash equilibrium. Two types of feedback are considered: one is bandit feedback and the other is gradient feedback. To study the convergence of the induced sequence of play, we introduce the notion of convergence in measure, and show that the players' actual sequence of actions converges to the unique Nash equilibrium. In addition, our results naturally extend the no-regret learning algorithms' time-average regret bounds to final-iteration convergence rates. Together, our work presents significantly sharper convergence results for learning in games without strong assumptions on game properties (e.g., monotonicity) and shows how exploiting the game's information feedback can influence the convergence rates.
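As a toy illustration of the gradient-feedback case, the sketch below runs projected gradient ascent on each firm's profit in a linear-demand Cournot duopoly; the demand and cost parameters and the step-size schedule are assumptions, and the code is not the paper's algorithm, only a check that simultaneous gradient play approaches the unique Nash equilibrium (a - c) / (3b).

```python
import numpy as np

a, b, c = 10.0, 1.0, 2.0           # assumed inverse demand P(Q) = a - b*Q, unit cost c
Q_MAX = a / b                      # action set [0, Q_MAX]

def profit_gradient(q_i, q_other):
    """Derivative of q_i * (a - b*(q_i + q_other)) - c*q_i with respect to q_i."""
    return a - 2 * b * q_i - b * q_other - c

def run(T=5000):
    q = np.array([0.5, 8.0])       # arbitrary initial quantities
    for t in range(1, T + 1):
        step = 1.0 / np.sqrt(t)    # diminishing step size, as in no-regret gradient play
        grads = np.array([profit_gradient(q[0], q[1]),
                          profit_gradient(q[1], q[0])])
        q = np.clip(q + step * grads, 0.0, Q_MAX)   # projected simultaneous update
    return q

if __name__ == "__main__":
    nash = (a - c) / (3 * b)
    print("final play:", run(), " Nash quantity per firm:", nash)
```
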
It is known that there are uncoupled learning heuristics leading to Nash equilibrium in all finite games. Why should players use such learning heuristics, and where could they come from? We show that there is no uncoupled learning heuristic leading to Nash equilibrium in all finite games that a player has an incentive to adopt, that would be evolutionarily stable, or that could learn itself. Rather, a player has an incentive to strategically teach such a learning opponent in order to secure at least the Stackelberg leader payoff. The impossibility result remains intact when restricted to the classes of generic games, two-player games, potential games, games with strategic complements, or 2x2 games, in which learning is known to be nice. More generally, it also applies to uncoupled learning heuristics leading to correlated equilibria, rationalizable outcomes, iterated admissible outcomes, or minimal curb sets. A possibility result restricted to strategically trivial games fails if some generic games outside this class are considered as well.
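The strategic-teaching point can be illustrated in a 2x2 game with a commitment advantage. In the hypothetical sketch below, a teacher repeatedly plays its Stackelberg leader action against a fictitious-play learner; the learner eventually best-responds to it, so the teacher's average payoff approaches the Stackelberg leader payoff of 3 rather than the Nash payoff of 2. The payoff matrix and the learner's prior are assumptions chosen to make this gap visible.

```python
import numpy as np

# Row = teacher, Col = fictitious-play learner. Payoffs indexed [row action, col action].
ROW_PAYOFF = np.array([[2.0, 4.0],
                       [1.0, 3.0]])   # action U (index 0) is strictly dominant for the row player
COL_PAYOFF = np.array([[1.0, 0.0],
                       [0.0, 2.0]])
# Unique Nash equilibrium is (U, L) with row payoff 2; committing to D makes the
# column player switch to R and gives the row player the Stackelberg payoff 3.

def teach(T=2000):
    counts = np.array([5.0, 1.0])    # learner's prior: it initially expects the teacher to play U
    total = 0.0
    for _ in range(T):
        teacher_action = 1           # strategic teaching: always commit to D
        belief = counts / counts.sum()
        learner_action = int(np.argmax(belief @ COL_PAYOFF))   # best reply to the belief
        total += ROW_PAYOFF[teacher_action, learner_action]
        counts[teacher_action] += 1
    return total / T

if __name__ == "__main__":
    print("teacher's average payoff:", round(teach(), 3), "(Nash: 2.0, Stackelberg: 3.0)")
```
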
We consider a game-theoretic model of information retrieval with strategic authors. We examine two different utility schemes: authors who aim at maximizing exposure and authors who want to maximize active selection of their content (i.e., the number of clicks). We introduce the study of author learning dynamics in such contexts. We prove that under the probability ranking principle (PRP), which forms the basis of the current state-of-the-art ranking methods, any better-response learning dynamics converges to a pure Nash equilibrium. We also show that other ranking methods induce a strategic environment under which such convergence may not occur.
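A toy congestion-style version of this setting can be simulated directly: authors each pick one topic, a PRP-like ranker shows one of the documents matching the query topic (chosen uniformly among them), and authors repeatedly better-respond to maximize expected exposure. The topics, query distribution, and utility below are assumptions used only to illustrate that better-response dynamics settle at a pure Nash equilibrium, not the paper's exact model.

```python
import random

TOPICS = ["sports", "tech", "finance"]
QUERY_PROB = {"sports": 0.5, "tech": 0.3, "finance": 0.2}   # assumed query distribution
NUM_AUTHORS = 4

def exposure(author, strategies):
    """Expected exposure: for each query, one matching document is shown,
    chosen uniformly among the authors who picked that topic."""
    topic = strategies[author]
    matching = sum(1 for t in strategies if t == topic)
    return QUERY_PROB[topic] / matching

def better_response_dynamics(seed=0, max_rounds=100):
    rng = random.Random(seed)
    strategies = [rng.choice(TOPICS) for _ in range(NUM_AUTHORS)]
    for _ in range(max_rounds):
        improved = False
        for author in range(NUM_AUTHORS):
            current = exposure(author, strategies)
            for topic in TOPICS:
                trial = strategies[:author] + [topic] + strategies[author + 1:]
                if exposure(author, trial) > current + 1e-12:
                    strategies = trial           # take any strictly improving deviation
                    improved = True
                    break
        if not improved:
            return strategies                    # no profitable deviation: pure Nash equilibrium
    return strategies

if __name__ == "__main__":
    print("equilibrium topic choices:", better_response_dynamics())
```
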
We study multi-agent reinforcement learning (MARL) in infinite-horizon discounted zero-sum Markov games. We focus on the practical but challenging setting of decentralized MARL, where agents make decisions without coordination by a centralized controller, based only on their own payoffs and the local actions they execute. The agents need not observe the opponent's actions or payoffs, and may even be oblivious to the presence of the opponent or unaware of the zero-sum structure of the underlying game, a setting also referred to as radically uncoupled in the literature on learning in games. In this paper, we develop for the first time a radically uncoupled Q-learning dynamics that is both rational and convergent: the learning dynamics converges to the best response to the opponent's strategy when the opponent follows an asymptotically stationary strategy, and the value function estimates converge to the payoffs at a Nash equilibrium when both agents adopt the dynamics. The key challenge in this decentralized setting is the non-stationarity of the learning environment from an agent's perspective, since both her own payoffs and the system evolution depend on the actions of other agents, and each agent adapts her policy simultaneously and independently. To address this issue, we develop a two-timescale learning dynamics where each agent updates her local Q-function and value function estimates concurrently, with the latter happening at a slower timescale.
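A stateless caricature of the two-timescale, payoff-only information structure can be written for matching pennies: each agent observes only its own realized payoff, updates a local Q-estimate per action at a fast rate, tracks a value estimate at a slower rate, and plays a softmax (smoothed best-response) policy. The game, step sizes, and temperature below are assumptions, and this is not the paper's algorithm for Markov games, only a sketch of the decentralized update pattern described above.

```python
import numpy as np

# Matching pennies: agent 0's payoff; agent 1 receives the negative (zero-sum).
PAYOFF = np.array([[+1.0, -1.0],
                   [-1.0, +1.0]])

def softmax(q, temperature=0.1):
    z = (q - q.max()) / temperature
    e = np.exp(z)
    return e / e.sum()

def run(T=200_000, seed=0):
    rng = np.random.default_rng(seed)
    Q = [np.zeros(2), np.zeros(2)]   # per-agent local Q-estimates over own actions
    V = [0.0, 0.0]                   # per-agent value estimates (slower timescale)
    for t in range(1, T + 1):
        alpha = 1.0 / t ** 0.6       # fast step size for the Q-updates
        beta = 1.0 / t ** 0.9        # slow step size for the value estimates
        pi = [softmax(Q[0]), softmax(Q[1])]
        a = [rng.choice(2, p=pi[0]), rng.choice(2, p=pi[1])]
        r = [PAYOFF[a[0], a[1]], -PAYOFF[a[0], a[1]]]   # each agent sees only her own payoff
        for i in range(2):
            Q[i][a[i]] += alpha * (r[i] - Q[i][a[i]])   # payoff-based local Q-update
            V[i] += beta * (pi[i] @ Q[i] - V[i])        # slower value tracking
    return pi, V

if __name__ == "__main__":
    policies, values = run()
    print("policies:", [np.round(p, 2) for p in policies], "values:", np.round(values, 3))
```
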