Suppose that an $m$-simplex is partitioned into $n$ convex regions having disjoint interiors and distinct labels, and we may learn the label of any point by querying it. The learning objective is to know, for any point in the simplex, a label that occurs within distance $\epsilon$ of that point. We present two algorithms for this task: Constant-Dimension Generalised Binary Search (CD-GBS), which for constant $m$ uses $\mathrm{poly}\left(n, \log \frac{1}{\epsilon}\right)$ queries, and Constant-Region Generalised Binary Search (CR-GBS), which uses CD-GBS as a subroutine and for constant $n$ uses $\mathrm{poly}\left(m, \log \frac{1}{\epsilon}\right)$ queries. We show via Kakutani's fixed-point theorem that these algorithms provide bounds on the best-response query complexity of computing approximate well-supported equilibria of bimatrix games in which one of the players has a constant number of pure strategies. We also partially extend our results to games with multiple players, establishing further query complexity bounds for computing approximate well-supported equilibria in this setting.
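As a hedged, much-simplified illustration of the query model (not the CD-GBS/CR-GBS algorithms themselves), the sketch below locates a single label boundary on the segment $[0,1]$ to within $\epsilon$ using $O\left(\log \frac{1}{\epsilon}\right)$ label queries.

```python
# A minimal sketch, assuming a 1-D analogue: binary-search the boundary
# between two labelled intervals of [0, 1], given a label oracle `query`.
# CD-GBS/CR-GBS operate on an m-simplex; this only illustrates how
# O(log(1/epsilon)) label queries suffice in the simplest case.

def locate_boundary(query, epsilon):
    """Assumes query(0.0) != query(1.0) and a single boundary point."""
    lo, hi = 0.0, 1.0
    left_label = query(lo)
    queries = 1
    while hi - lo > epsilon:
        mid = (lo + hi) / 2.0
        if query(mid) == left_label:
            lo = mid          # boundary lies to the right of mid
        else:
            hi = mid          # boundary lies to the left of (or at) mid
        queries += 1
    return (lo + hi) / 2.0, queries


if __name__ == "__main__":
    true_boundary = 0.3721
    oracle = lambda x: "A" if x < true_boundary else "B"
    estimate, used = locate_boundary(oracle, 1e-6)
    print(f"boundary ~ {estimate:.7f} found with {used} label queries")
```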
Federated learning is a distributed learning paradigm in which multiple agents, each with access only to local data, jointly learn a global model. There has recently been an explosion of research aiming not only to improve the accuracy of federated learning, but also to provide guarantees on social-good properties such as total error. One branch of this research takes a game-theoretic approach; in particular, prior work has viewed federated learning as a hedonic game, where error-minimizing players arrange themselves into federating coalitions. That work proves the existence of stable coalition partitions, but leaves open a wide range of questions, including how far from optimal these stable solutions are. In this work, we motivate and define a notion of optimality given by the average error rates among federating agents (players). We first provide, and prove the correctness of, an efficient algorithm to calculate an optimal (error-minimizing) arrangement of players. Next, we analyze the relationship between the stability and optimality of an arrangement. We show that for some regions of parameter space, all stable arrangements are optimal (Price of Anarchy equal to 1). However, this is not true in all settings: there exist stable arrangements with higher cost than optimal (Price of Anarchy greater than 1). Finally, we give the first constant-factor bound on the performance gap between stability and optimality, proving that the total error of the worst stable solution can be no higher than 9 times the total error of an optimal solution (Price of Anarchy bound of 9).
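As a hedged illustration of the objective only (not the paper's efficient algorithm), the sketch below exhaustively searches coalition partitions for one minimizing total error; `coalition_error` is a hypothetical placeholder for whatever per-player error model the game uses.

```python
# A brute-force sketch, assuming a hypothetical per-player error model
# `coalition_error(coalition, player)`; the paper's algorithm is efficient,
# while this exhaustive search merely makes the "optimal arrangement"
# objective concrete for a handful of players.
from itertools import combinations

def partitions(players):
    """Yield every partition of `players` into federating coalitions."""
    if not players:
        yield []
        return
    first, rest = players[0], players[1:]
    for k in range(len(rest) + 1):
        for group in combinations(rest, k):
            remaining = [p for p in rest if p not in group]
            for sub in partitions(remaining):
                yield [[first, *group]] + sub

def optimal_partition(players, coalition_error):
    """Return the partition minimizing total (equivalently, average) error."""
    best, best_cost = None, float("inf")
    for part in partitions(players):
        cost = sum(coalition_error(c, p) for c in part for p in c)
        if cost < best_cost:
            best, best_cost = part, cost
    return best, best_cost

if __name__ == "__main__":
    sizes = {"a": 50, "b": 60, "c": 500}          # local sample sizes (toy)
    def err(coalition, p):                        # illustrative error model only
        total = sum(sizes[q] for q in coalition)
        return 1.0 / total + 0.001 * (total - sizes[p]) / total
    print(optimal_partition(list(sizes), err))
```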
Distributed adaptive filtering has been considered an effective approach for data processing and estimation over distributed networks. Most existing distributed adaptive filtering algorithms focus on designing different information diffusion rules, without regard to the natural evolutionary characteristics of a distributed network. In this paper, we study adaptive networks from a game-theoretic perspective and formulate the distributed adaptive filtering problem as a graphical evolutionary game. In the proposed formulation, the nodes in the network are regarded as players, and the local combination of estimates received from different neighbors is regarded as the selection among different strategies. We show that this graphical evolutionary game framework is very general and unifies existing adaptive network algorithms. As examples within this framework, we further propose two error-aware adaptive filtering algorithms. Moreover, we use graphical evolutionary game theory to analyze the information diffusion process over adaptive networks and the evolutionarily stable strategy of the system. Finally, simulation results verify the effectiveness of our analysis and of the proposed methods.
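A minimal sketch of an error-aware, adapt-then-combine diffusion LMS step is given below; the inverse-error combination weights are an illustrative choice and are not the specific rules derived from the paper's evolutionary-game framework.

```python
# A minimal sketch, assuming an adapt-then-combine diffusion LMS structure in
# which each node weights neighbours' intermediate estimates inversely to
# their recent squared errors; this is only one illustrative "error-aware"
# combination rule.
import numpy as np

def diffusion_lms_step(W, data, mu, neighbours, err_ema, beta=0.9):
    """One round over all N nodes.

    W          : current estimates, shape (N, M)
    data[k]    : (u_k, d_k) regressor/measurement pair at node k
    neighbours : dict node -> iterable of neighbour indices (including k)
    err_ema    : per-node moving average of squared errors, shape (N,)
    """
    N, _ = W.shape
    psi = np.empty_like(W)
    for k in range(N):                                # adaptation (local LMS)
        u, d = data[k]
        e = d - u @ W[k]
        err_ema[k] = beta * err_ema[k] + (1 - beta) * e ** 2
        psi[k] = W[k] + mu * e * u
    W_new = np.empty_like(W)
    for k in range(N):                                # error-aware combination
        nbrs = list(neighbours[k])
        raw = np.array([1.0 / (err_ema[j] + 1e-12) for j in nbrs])
        weights = raw / raw.sum()
        W_new[k] = sum(a * psi[j] for a, j in zip(weights, nbrs))
    return W_new, err_ema

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w_true = np.array([1.0, -0.5, 0.25])
    N, M = 4, 3
    W, err = np.zeros((N, M)), np.ones(N)
    nbrs = {0: [0, 1], 1: [0, 1, 2], 2: [1, 2, 3], 3: [2, 3]}
    for _ in range(2000):
        data = [(u, u @ w_true + 0.1 * rng.standard_normal())
                for u in rng.standard_normal((N, M))]
        W, err = diffusion_lms_step(W, data, 0.05, nbrs, err)
    print(W.round(2))                                 # each row approaches w_true
```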
We consider a network environment in which the best shot game is played. In this game every node has only two possible actions (0 and 1), and a node's best response is 1 if and only if all her neighbors play 0. A natural application of the model is one in which action 1 is the purchase of a good that is locally a public good, in the sense that it is also available to neighbors. This game typically exhibits a great multiplicity of equilibria. Imagine a social planner whose goal is to find an optimal equilibrium, i.e., one in which the number of nodes playing 1 is minimal. Finding such an equilibrium is a very hard task for any non-trivial network architecture. We propose an implementable mechanism that, in the limit of infinite time, reaches an optimal equilibrium, even when this equilibrium, and even the network structure, are unknown to the social planner.
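For concreteness, here is a hedged sketch of plain asynchronous best-response dynamics for the best shot game on a graph; it reaches some equilibrium (a maximal independent set of 1-players) but, unlike the mechanism proposed in the paper, makes no attempt to reach the optimal one.

```python
# A minimal sketch of asynchronous best-response dynamics for the best shot
# game: a node's best response is 1 iff no neighbour plays 1, so any fixed
# point is a maximal independent set of 1-players. With random revisions this
# reaches *some* equilibrium; it does not solve the planner's (hard) problem
# of reaching the equilibrium with the fewest 1-players.
import random

def best_response_dynamics(adj, seed=0):
    """adj: dict node -> set of neighbours. Returns an equilibrium profile."""
    rng = random.Random(seed)
    action = {v: 0 for v in adj}
    while True:
        unstable = [v for v in adj
                    if action[v] != int(all(action[u] == 0 for u in adj[v]))]
        if not unstable:
            return action
        v = rng.choice(unstable)        # revise one deviating node at a time
        action[v] = 1 - action[v]

if __name__ == "__main__":
    path = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
    print(best_response_dynamics(path))   # e.g. {0: 1, 1: 0, 2: 1, 3: 0}
```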
Equilibrium computation in markets usually considers settings where player valuation functions are known. We consider the setting where player valuations are unknown; using a PAC learning-theoretic framework, we analyze several classes of common valuation functions and provide algorithms that output PAC equilibrium allocations directly, rather than estimates obtained by first learning the valuation functions. Since there exist trivial PAC market outcomes with an unbounded worst-case efficiency loss, we lower-bound the efficiency of our algorithms. While the efficiency loss under general distributions is rather high, we show that in some cases (e.g., unit-demand valuations) it is possible to find a PAC market equilibrium with significantly better utility.
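As background for the unit-demand case (and not the paper's PAC procedure, which works without knowing the valuations), the sketch below computes the full-information, welfare-maximizing unit-demand allocation as a maximum-weight matching.

```python
# A minimal sketch of the full-information benchmark for unit-demand buyers:
# with known valuations V[i, j], a welfare-maximizing allocation (each buyer
# gets at most one good) is a maximum-weight matching between buyers and goods.
import numpy as np
from scipy.optimize import linear_sum_assignment

def unit_demand_allocation(V):
    """Return (buyer, good) pairs and the total welfare they achieve."""
    rows, cols = linear_sum_assignment(V, maximize=True)
    return list(zip(rows.tolist(), cols.tolist())), float(V[rows, cols].sum())

if __name__ == "__main__":
    V = np.array([[3.0, 1.0, 0.0],
                  [2.0, 2.0, 1.0],
                  [0.0, 1.0, 4.0]])
    print(unit_demand_allocation(V))
```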
While game theory has been transformative for decision-making, the assumptions made can be overly restrictive in certain instances. In this work, we focus on some of the assumptions underlying rationality, such as mutual consistency and best response, and consider ways to relax them using concepts from level-$k$ reasoning and quantal response equilibrium (QRE), respectively. Specifically, we provide an information-theoretic two-parameter model that can relax both mutual consistency and best response, yet recovers approximations of level-$k$, QRE, or typical Nash equilibrium behaviour in the limiting cases. The proposed approach is based on a recursive form of the variational free energy principle, representing self-referential games as (pseudo) sequential decisions. Bounds on players' processing abilities are captured as information costs, where future chains of reasoning are discounted, implying a hierarchy of players in which lower-level players have fewer processing resources.
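As one of the limiting cases mentioned above, a standard logit QRE can be computed by fixed-point iteration; the hedged sketch below does this for a bimatrix game and is not the paper's two-parameter free-energy model.

```python
# A minimal sketch of a logit quantal response equilibrium (QRE) computed by
# damped fixed-point iteration on a bimatrix game; this is the standard QRE
# that the paper's model recovers in a limiting case, not the recursive
# free-energy formulation itself.
import numpy as np

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

def logit_qre(A, B, lam=2.0, iters=5000, damping=0.5):
    """A, B: row/column player payoff matrices. Larger lam means closer to
    best response (Nash-like); lam -> 0 gives uniformly random play."""
    m, n = A.shape
    p, q = np.full(m, 1.0 / m), np.full(n, 1.0 / n)
    for _ in range(iters):
        p_new = softmax(lam * (A @ q))       # row player's quantal response
        q_new = softmax(lam * (B.T @ p))     # column player's quantal response
        p = damping * p + (1 - damping) * p_new
        q = damping * q + (1 - damping) * q_new
    return p, q

if __name__ == "__main__":
    A = np.array([[1.0, -1.0], [-1.0, 1.0]])   # matching pennies
    print(logit_qre(A, -A, lam=4.0))           # both strategies near (0.5, 0.5)
```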