
Reasoning about Social Choice and Games in Monadic Fixed-Point Logic

Added by EPTCS
Publication date: 2019
Research language: English





Whether it be in normal form games, or in fair allocations, or in voter preferences in voting systems, a certain pattern of reasoning is common. From a particular profile, an agent or a group of agents may have an incentive to shift to a new one. This induces a natural graph structure that we call the improvement graph on the strategy space of these systems. We suggest that the monadic fixed-point logic with counting, an extension of monadic first-order logic on graphs with fixed-point and counting quantifiers, is a natural specification language on improvement graphs, and thus for a class of properties that can be interpreted across these domains. The logic has an efficient model checking algorithm (in the size of the improvement graph).
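To make the improvement-graph construction concrete, here is a minimal Python sketch for finite normal-form games. It is an illustration rather than the paper's implementation: the function name improvement_graph, the payoff callback, and the 2x2 coordination example are hypothetical, and the graph records only strictly improving unilateral deviations, so its sinks are exactly the pure Nash equilibria.

```python
from itertools import product

def improvement_graph(num_strategies, payoff):
    """Single-agent improvement graph of a finite normal-form game.

    num_strategies: list with each player's number of strategies.
    payoff(player, profile): that player's payoff at a strategy profile (tuple).
    Returns a dict mapping each profile to the profiles reachable by one
    player's strictly improving unilateral deviation.
    """
    players = range(len(num_strategies))
    profiles = list(product(*[range(n) for n in num_strategies]))
    edges = {p: [] for p in profiles}
    for p in profiles:
        for i in players:
            for s in range(num_strategies[i]):
                if s == p[i]:
                    continue
                q = p[:i] + (s,) + p[i + 1:]
                if payoff(i, q) > payoff(i, p):   # strict improvement for player i
                    edges[p].append(q)
    return edges

# Example: a 2x2 coordination game. The sinks of the improvement graph
# (profiles with no outgoing edge) are exactly its pure Nash equilibria.
coordination = lambda i, prof: 1 if prof[0] == prof[1] else 0
graph = improvement_graph([2, 2], coordination)
print([p for p, succ in graph.items() if not succ])   # [(0, 0), (1, 1)]
```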



Related research

One of the natural objectives of the field of social networks is to predict agents' behaviour. To better understand the spread of various products through a social network, arXiv:1105.2434 introduced a threshold model in which nodes influenced by their neighbours can adopt one out of several alternatives. To analyze the consequences of such product adoption, we associate here with each such social network a natural strategic game between the agents. In these games the payoff of each player weakly increases when more players choose his strategy, which is exactly the opposite of congestion games. The possibility of not choosing any product results in two special types of (pure) Nash equilibria. We show that such games may have no Nash equilibrium and that determining the existence of a Nash equilibrium, also of a special type, is NP-complete. This implies the same result for a more general class of games, namely polymatrix games. The situation changes when the underlying graph of the social network is a DAG, a simple cycle, or, more generally, has no source nodes. For these three classes we determine the complexity of the existence of (a special type of) Nash equilibria. We also clarify for these categories of games the status and the complexity of the finite best response property (FBRP) and the finite improvement property (FIP). Further, we introduce a new property, the uniform FIP, which is satisfied when the underlying graph is a simple cycle, but determining whether it holds is co-NP-hard in the general case and also when the underlying graph has no source nodes. The latter complexity results also hold for the property of being a weakly acyclic game. A preliminary version of this paper appeared as [19].
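A rough Python sketch of better-response dynamics in a simplified adoption game of the above flavour is shown below. The payoff rule (a node's payoff for a product is the number of adopting neighbours, provided it meets the node's threshold, and abstaining yields zero) and the identifiers better_response_dynamics, neighbors, threshold, and initial are illustrative assumptions, not the exact model of arXiv:1105.2434; convergence of the loop certifies a pure Nash equilibrium, while non-convergence is consistent with the non-existence and hardness results stated above.

```python
def better_response_dynamics(neighbors, products, threshold, initial=None, max_steps=100):
    """Better-response dynamics in a simplified social-network adoption game.

    neighbors: dict node -> list of neighbors; products: list of product labels.
    A node's payoff for adopting product t is the number of its neighbors that
    also adopted t, provided that number meets the node's threshold; otherwise 0.
    Not adopting anything (None) always yields payoff 0.
    Returns the final profile and whether it is a pure Nash equilibrium.
    """
    choice = dict(initial) if initial else {v: None for v in neighbors}

    def payoff(v, t):
        if t is None:
            return 0
        support = sum(1 for u in neighbors[v] if choice[u] == t)
        return support if support >= threshold[v] else 0

    for _ in range(max_steps):
        improved = False
        for v in neighbors:
            best = max(products + [None], key=lambda t: payoff(v, t))
            if payoff(v, best) > payoff(v, choice[v]):
                choice[v] = best
                improved = True
        if not improved:
            return choice, True      # no profitable unilateral deviation remains
    return choice, False             # dynamics did not converge within max_steps

# Example: a triangle where a node adopts only if at least one neighbor agrees;
# seeding node 0 with product "A" lets adoption spread to a non-trivial equilibrium.
nbrs = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
print(better_response_dynamics(nbrs, ["A", "B"], {v: 1 for v in nbrs},
                               initial={0: "A", 1: None, 2: None}))
```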
In large-scale collective decision making, social choice is a normative study of how one ought to design a protocol for reaching consensus. However, in instances where the underlying decision space is too large or complex for ordinal voting, standard voting methods of social choice may be impractical. How then can we design a mechanism - preferably decentralized, simple, scalable, and not requiring any special knowledge of the decision space - to reach consensus? We propose sequential deliberation as a natural solution to this problem. In this iterative method, successive pairs of agents bargain over the decision space using the previous decision as a disagreement alternative. We describe the general method and analyze the quality of its outcome when the space of preferences defines a median graph. We show that sequential deliberation finds a 1.208-approximation to the optimal social cost on such graphs, coming very close to this value with only a small constant number of agents sampled from the population. We also show lower bounds on simpler classes of mechanisms to justify our design choices. We further show that sequential deliberation is ex-post Pareto efficient and has truthful reporting as an equilibrium of the induced extensive-form game. We finally show that for general metric spaces, the second moment of the distribution of social cost of the outcomes produced by sequential deliberation is also bounded.
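The following Python sketch illustrates sequential deliberation on the real line, a simple special case of a median graph. The function sequential_deliberation, the number of rounds, and the choice of the initial disagreement point are hypothetical assumptions; the bargaining step here takes the median of the two sampled agents' bliss points and the current disagreement alternative, an outcome both bargainers weakly prefer to disagreeing under the distance costs assumed in this sketch.

```python
import random

def sequential_deliberation(peaks, rounds=50, seed=0):
    """Sketch of sequential deliberation on the real line (a simple median graph).

    peaks: each agent's most-preferred point; agent i's cost for outcome x is
    |x - peaks[i]|. In every round two sampled agents bargain, with the previous
    outcome serving as the disagreement alternative; here the bargaining outcome
    is taken to be the median of the two bliss points and the disagreement point.
    """
    rng = random.Random(seed)
    outcome = rng.choice(peaks)               # arbitrary initial disagreement alternative
    for _ in range(rounds):
        a, b = rng.sample(peaks, 2)           # bliss points of the two sampled bargainers
        outcome = sorted([a, b, outcome])[1]  # median of the three points
    return outcome

peaks = [0.0, 1.0, 2.0, 10.0]
x = sequential_deliberation(peaks)
print(x, sum(abs(x - p) for p in peaks))      # outcome and its social cost
```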
We consider agents in a social network competing to be selected as partners in collaborative, mutually beneficial activities. We study this through a model in which an agent i can initiate a limited number k_i>0 of games and selects the ideal partners from its one-hop neighborhood. On the flip side it can accept as many games offered from its neighbors. Each game signifies a productive joint economic activity, and players attempt to maximize their individual utilities. Unsurprisingly, more trustworthy agents are more desirable as partners. Trustworthiness is measured by the game theoretic concept of Limited-Trust, which quantifies the maximum cost an agent is willing to incur in order to improve the net utility of all agents. Agents learn about their neighbors trustworthiness through interactions and their behaviors evolve in response. Empirical trials performed on realistic social networks show that when given the option, many agents become highly trustworthy; most or all become highly trustworthy when knowledge of their neighbors trustworthiness is based on past interactions rather than known a priori. This trustworthiness is not the result of altruism, instead agents are intrinsically motivated to become trustworthy partners by competition. Two insights are presented: first, trustworthy behavior drives an increase in the utility of all agents, where maintaining a relatively modest level of trustworthiness may easily improve net utility by as much as 14.5%. If only one agent exhibits modest trust among self-centered ones, it can increase its average utility by up to 25% in certain cases! Second, and counter-intuitively, when partnership opportunities are abundant agents become less trustworthy.
One way of evaluating social choice (voting) rules is through a utilitarian distortion framework. In this model, we assume that agents submit full rankings over the alternatives, and these rankings are generated from underlying, but unknown, quantitative costs. The distortion of a social choice rule is then the ratio of the total social cost of the chosen alternative to the optimal social cost of any alternative; since the true costs are unknown, we consider the worst-case distortion over all possible underlying costs. Analogously, we can consider the worst-case fairness ratio of a social choice rule by comparing a useful notion of fairness (based on approximate majorization) for the chosen alternative to that of the optimal alternative. With an additional metric assumption -- that the costs equal the agent-alternative distances in some metric space -- it is known that the Copeland rule achieves both a distortion and a fairness ratio of at most 5. For other rules, only bounds on the distortion are known, e.g., the popular Single Transferable Vote (STV) rule has distortion $O(\log m)$, where $m$ is the number of alternatives. We prove that the distinct notions of distortion and fairness ratio are in fact closely linked -- within an additive factor of 2 for any voting rule -- and thus STV also achieves an $O(\log m)$ fairness ratio. We further extend the notions of distortion and fairness ratio to social choice rules choosing a set of alternatives. By relating the distortion of single-winner rules to multiple-winner rules, we establish that Recursive Copeland achieves a distortion of 5 and a fairness ratio of at most 7 for choosing a set of alternatives.
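As a small illustration of the distortion definition only (not the worst-case framework itself, which maximizes over all cost profiles consistent with the reported rankings), the hypothetical helper below computes the ratio of the chosen alternative's social cost to the optimum for one known cost matrix.

```python
def distortion(costs, chosen):
    """Ratio of the chosen alternative's social cost to the optimal social cost.

    costs[i][a] is agent i's cost for alternative a (assumed known here; in the
    worst-case framework one would take a supremum over all cost profiles
    consistent with the agents' reported rankings).
    """
    num_alternatives = len(costs[0])
    social = [sum(row[a] for row in costs) for a in range(num_alternatives)]
    return social[chosen] / min(social)

# Example: 3 agents, 2 alternatives; choosing alternative 1 (social cost 4)
# instead of the optimal alternative 0 (social cost 3) gives distortion 4/3.
print(distortion([[1, 2], [1, 2], [1, 0]], chosen=1))  # 1.333...
```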
Without monetary payments, the Gibbard-Satterthwaite theorem proves that under mild requirements all truthful social choice mechanisms must be dictatorships. When payments are allowed, the Vickrey-Clarke-Groves (VCG) mechanism implements the value-maximizing choice, and it has many other good properties: it is strategy-proof, onto, deterministic, individually rational, and does not make positive transfers to the agents. By Roberts' theorem, with three or more alternatives, the weighted VCG mechanisms are essentially unique for domains with quasi-linear utilities. The goal of this paper is to characterize domains of non-quasi-linear utilities where reasonable mechanisms (with VCG-like properties) exist. Our main result is a tight characterization of the maximal non-quasi-linear utility domain, which we call the largest parallel domain. We extend Roberts' theorem to parallel domains, and we use the generalized theorem to prove two impossibility results. First, any reasonable mechanism must be dictatorial when the utility domain is quasi-linear together with any single non-parallel type. Second, for richer utility domains that still differ very slightly from quasi-linearity, every strategy-proof, onto and deterministic mechanism must be a dictatorship.
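For reference, here is a compact Python sketch of the classical VCG mechanism with Clarke pivot payments in the quasi-linear setting discussed above. The function name vcg and the toy single-item-auction example are illustrative assumptions, and the sketch does not capture the paper's non-quasi-linear domains.

```python
def vcg(valuations):
    """VCG mechanism with Clarke pivot payments, assuming quasi-linear utilities.

    valuations[i][a] is agent i's reported value for alternative a.
    Returns the value-maximizing alternative and each agent's payment, equal to
    the externality its presence imposes on the other agents.
    """
    n, m = len(valuations), len(valuations[0])

    def others_welfare(alternative, excluded):
        # Total reported value of everyone except `excluded` for `alternative`.
        return sum(valuations[i][alternative] for i in range(n) if i != excluded)

    # Choose the alternative maximizing total reported value.
    chosen = max(range(m), key=lambda a: sum(v[a] for v in valuations))

    payments = []
    for i in range(n):
        best_without_i = max(others_welfare(a, i) for a in range(m))  # others' optimum if i were absent
        payments.append(best_without_i - others_welfare(chosen, i))   # Clarke pivot payment
    return chosen, payments

# Example: a single-item auction encoded as "alternative a = agent a wins";
# the highest bidder wins and pays the second-highest bid, as in a Vickrey auction.
print(vcg([[10, 0, 0], [0, 7, 0], [0, 0, 3]]))  # (0, [7, 0, 0])
```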