We consider a repeated network game in which agents have quadratic utilities that depend on an information externality (an unknown underlying state) as well as on payoff externalities (the actions of all other agents in the network). Agents play Bayesian Nash equilibrium strategies with respect to their beliefs about the state of the world and the actions of all other nodes in the network. These beliefs are refined over subsequent stages based on the observed actions of neighboring peers. This paper introduces the Quadratic Network Game (QNG) filter, which agents can run locally to update their beliefs, select corresponding optimal actions, and eventually learn a sufficient statistic of the network's state. The QNG filter is demonstrated on a Cournot market competition game and on a coordination game that implements navigation of an autonomous team.
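
The following Python sketch is only illustrative and is not the QNG filter itself: it assumes a common Gaussian prior on an unknown demand intercept, conditionally independent Gaussian private signals, and a heuristic linear action rule (the complete-information Cournot best response evaluated at each agent's posterior mean). The actual QNG filter additionally tracks beliefs about the actions of the other agents and refines them from observed neighbor actions.

import numpy as np

# Minimal sketch (not the QNG filter): one stage of Gaussian belief updating
# and a linear action rule in a Cournot-style game with an unknown demand
# intercept theta. Assumed inverse demand P = theta - sum(q), common marginal
# cost c, and each firm plays the complete-information best response evaluated
# at its posterior mean, as a heuristic stand-in for the BNE strategy.

rng = np.random.default_rng(0)

n = 5                        # number of firms / agents
theta = 10.0                 # unknown demand intercept (the "state")
c = 1.0                      # common marginal cost
prior_mean, prior_var = 8.0, 4.0
noise_var = 1.0              # variance of the private signals

# Each agent observes a private Gaussian signal of theta.
signals = theta + rng.normal(0.0, np.sqrt(noise_var), size=n)

# Conjugate normal update: posterior mean and variance given one signal.
post_var = 1.0 / (1.0 / prior_var + 1.0 / noise_var)
post_mean = post_var * (prior_mean / prior_var + signals / noise_var)

# Heuristic linear action: symmetric Cournot quantity with theta replaced by
# the agent's posterior mean, q_i = (E_i[theta] - c) / (n + 1).
actions = np.maximum(post_mean - c, 0.0) / (n + 1)

print("posterior means:", np.round(post_mean, 2))
print("quantities:     ", np.round(actions, 3))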
We study a model of information aggregation and social learning recently proposed by Jadbabaie, Sandroni, and Tahbaz-Salehi, in which individual agents try to learn a correct state of the world by iteratively updating their beliefs using private observations and the beliefs of their neighbors. An individual agent's private signal may not be informative enough on its own to reveal the unknown state. As a result, agents share their beliefs with others in their social neighborhood to learn from each other. At every time step each agent receives a private signal and computes a Bayesian posterior as an intermediate belief. The intermediate belief is then averaged with the beliefs of neighbors to form the individual's belief at the next time step. We find a set of minimal sufficient conditions under which the agents learn the unknown state and reach consensus on their beliefs, without any assumption on the private signal structure. The key enabler is a result showing that, under this update, agents eventually forecast the indefinite future correctly.
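
The following Python sketch illustrates the belief update described above on a binary state space; the network, weights, and signal likelihoods are illustrative choices and are not taken from the paper. Each agent forms a Bayesian posterior from its own private signal (the intermediate belief) and then takes a convex combination of that posterior with its neighbors' previous beliefs.

import numpy as np

# Sketch of the update mu_{i,t+1} = a_ii * BU(mu_{i,t}; s_{i,t+1})
#                                   + sum_j a_ij * mu_{j,t}
# on a two-state world, with an assumed row-stochastic weight matrix A.

rng = np.random.default_rng(1)

true_state = 1               # index of the correct state in {0, 1}
n = 4                        # number of agents

# Row-stochastic weights: A[i, i] is the self-weight, A[i, j] the weight on
# neighbor j's belief (zero if j is not a neighbor). Illustrative values.
A = np.array([
    [0.6, 0.4, 0.0, 0.0],
    [0.2, 0.5, 0.3, 0.0],
    [0.0, 0.3, 0.4, 0.3],
    [0.0, 0.0, 0.5, 0.5],
])

# P(signal = 1 | state = 1) per agent; agent 3 has an uninformative signal
# (0.5) and can only learn through its neighbors.
p_signal = np.array([0.7, 0.6, 0.8, 0.5])

beliefs = np.full((n, 2), 0.5)        # uniform priors over the two states

for t in range(200):
    # Draw each agent's private signal under the true state.
    signals = rng.random(n) < (p_signal if true_state == 1 else 1 - p_signal)
    # Likelihood of the observed signal under each candidate state.
    lik = np.column_stack([np.where(signals, 1 - p_signal, p_signal),
                           np.where(signals, p_signal, 1 - p_signal)])
    # Intermediate belief: Bayesian posterior given only the own signal.
    bayes = beliefs * lik
    bayes /= bayes.sum(axis=1, keepdims=True)
    # Convex combination with neighbors' previous beliefs.
    beliefs = np.diag(A)[:, None] * bayes + (A - np.diag(np.diag(A))) @ beliefs

print(np.round(beliefs, 3))   # every row should concentrate on the true state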