
Social Choice with Non Quasi-linear Utilities

Published by: Hongyao Ma
Publication date: 2018
Research field: Informatics Engineering
Paper language: English





Without monetary payments, the Gibbard-Satterthwaite theorem shows that, under mild requirements, all truthful social choice mechanisms must be dictatorships. When payments are allowed, the Vickrey-Clarke-Groves (VCG) mechanism implements the value-maximizing choice and has many other good properties: it is strategy-proof, onto, deterministic, individually rational, and makes no positive transfers to the agents. By Roberts' theorem, with three or more alternatives, the weighted VCG mechanisms are essentially unique for domains with quasi-linear utilities. The goal of this paper is to characterize domains of non-quasi-linear utilities in which reasonable mechanisms (with VCG-like properties) exist. Our main result is a tight characterization of the maximal non-quasi-linear utility domain, which we call the largest parallel domain. We extend Roberts' theorem to parallel domains and use the generalized theorem to prove two impossibility results. First, any reasonable mechanism must be dictatorial when the utility domain consists of the quasi-linear domain together with any single non-parallel type. Second, for richer utility domains that still differ only slightly from quasi-linearity, every strategy-proof, onto, and deterministic mechanism must be a dictatorship.
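As background, the VCG mechanism referenced above selects the alternative maximizing the sum of reported values and charges each agent the externality she imposes on the others (the Clarke pivot rule). The sketch below illustrates this under standard quasi-linear utilities (utility = value minus payment); the function and variable names are illustrative and not taken from the paper.

```python
def vcg(alternatives, values):
    """VCG with Clarke pivot payments.

    alternatives: list of alternative labels.
    values: values[i][a] = agent i's reported value for alternative a.
    Returns the chosen alternative and each agent's payment.
    """
    agents = range(len(values))

    def welfare(a, exclude=None):
        # Total reported value of alternative a, optionally ignoring one agent.
        return sum(values[i][a] for i in agents if i != exclude)

    # Choose the alternative that maximizes the sum of reported values.
    chosen = max(alternatives, key=welfare)

    payments = []
    for i in agents:
        # Agent i's externality: the others' best achievable welfare without i,
        # minus the others' welfare at the chosen alternative.
        best_without_i = max(welfare(a, exclude=i) for a in alternatives)
        payments.append(best_without_i - welfare(chosen, exclude=i))
    return chosen, payments

# Example: two agents, two alternatives.
alts = ["x", "y"]
vals = [{"x": 3.0, "y": 1.0}, {"x": 0.0, "y": 2.5}]
print(vcg(alts, vals))  # chooses "y"; agent 0 pays 0.0, agent 1 pays 2.0
```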




Read also

Recently, Cole and Gkatzelis gave the first constant-factor approximation algorithm for the problem of allocating indivisible items to agents, under additive valuations, so as to maximize the Nash Social Welfare. We give constant-factor algorithms for a substantial generalization of their problem -- to the case of separable, piecewise-linear concave utility functions. We give two such algorithms, the first using market equilibria and the second using the theory of stable polynomials. In algorithmic game theory (AGT), there is a paucity of methods for the design of mechanisms for the allocation of indivisible goods, and the result of Cole and Gkatzelis was a major step towards filling this gap. Our result can be seen as another step in this direction.
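For reference, the Nash Social Welfare objective maximized in this line of work is the geometric mean of the agents' utilities over allocations of the indivisible items; this standard definition is included here for context and is not quoted from the abstract.

```latex
% Nash Social Welfare of an allocation x = (x_1, ..., x_n) of the indivisible
% items among n agents, where u_i(x_i) is agent i's utility for her bundle:
\mathrm{NSW}(x) \;=\; \Bigl(\prod_{i=1}^{n} u_i(x_i)\Bigr)^{1/n},
\qquad
\text{and the objective is } \max_{x} \mathrm{NSW}(x)
\text{ over all partitions of the items.}
```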
Mechanisms with money are commonly designed under the assumption that agents are quasi-linear, meaning they have linear disutility for spending money. We study the implications when agents with non-linear (specifically, convex) disutility for payments participate in mechanisms designed for quasi-linear agents. We first show that any mechanism that is truthful for quasi-linear buyers has a simple best response function for buyers with non-linear disutility from payments, in which each bidder simply scales down her value for each potential outcome by a fixed factor, equal to her target return on investment (ROI). We call such a strategy ROI-optimal. We prove the existence of a Nash equilibrium in which agents use ROI-optimal strategies for a general class of allocation problems. Motivated by online marketplaces, we then focus on simultaneous second-price auctions for additive bidders and show that all ROI-optimal equilibria in this setting achieve constant-factor approximations to suitable welfare and revenue benchmarks.
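A minimal sketch of the ROI-optimal strategy described in the abstract for an additive bidder in simultaneous second-price auctions: each value is scaled down by the bidder's target ROI (interpreted here as dividing the value by the ROI). The function name and the exact form of the scaling are illustrative assumptions, not the paper's formal construction.

```python
def roi_optimal_bids(values, target_roi):
    """Sketch of an ROI-optimal strategy for an additive bidder.

    values: dict mapping item -> the bidder's value for that item.
    target_roi: the bidder's target return on investment; a value above 1
        means she insists on receiving strictly more value than she pays.
    Returns one bid per item: the value scaled down by the target ROI.
    """
    return {item: v / target_roi for item, v in values.items()}

# Example: values 10 and 4 with a target ROI of 2 yield bids 5 and 2.
print(roi_optimal_bids({"a": 10.0, "b": 4.0}, target_roi=2.0))
```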
In large-scale collective decision making, social choice is a normative study of how one ought to design a protocol for reaching consensus. However, in instances where the underlying decision space is too large or complex for ordinal voting, standard voting methods of social choice may be impractical. How then can we design a mechanism - preferably decentralized, simple, scalable, and not requiring any special knowledge of the decision space - to reach consensus? We propose sequential deliberation as a natural solution to this problem. In this iterative method, successive pairs of agents bargain over the decision space using the previous decision as a disagreement alternative. We describe the general method and analyze the quality of its outcome when the space of preferences defines a median graph. We show that sequential deliberation finds a 1.208-approximation to the optimal social cost on such graphs, coming very close to this value with only a small constant number of agents sampled from the population. We also show lower bounds on simpler classes of mechanisms to justify our design choices. We further show that sequential deliberation is ex-post Pareto efficient and has truthful reporting as an equilibrium of the induced extensive-form game. Finally, we show that for general metric spaces, the second moment of the distribution of the social cost of the outcomes produced by sequential deliberation is also bounded.
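To make the iterative protocol concrete, here is an illustrative sketch of sequential deliberation on the real line (a simple special case of a median graph), where each agent's cost is the distance to her favorite point. Modeling each pairwise bargaining step as taking the median of the two agents' favorite points and the current disagreement alternative is an assumption made for this sketch, not necessarily the paper's bargaining solution.

```python
import random

def median3(a, b, c):
    # Median of three numbers: the middle value after sorting.
    return sorted((a, b, c))[1]

def sequential_deliberation(peaks, rounds=10, seed=0):
    """Illustrative sketch of sequential deliberation on the real line.

    peaks: each agent's favorite point; costs are distances on the line.
    Each round, a random pair of agents bargains, with the previous outcome
    serving as the disagreement alternative.
    """
    rng = random.Random(seed)
    outcome = rng.choice(peaks)  # arbitrary starting alternative
    for _ in range(rounds):
        u, v = rng.sample(peaks, 2)
        outcome = median3(u, v, outcome)
    return outcome

print(sequential_deliberation([0.0, 2.0, 3.0, 10.0], rounds=20))
```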
We study the equilibrium computation problem in the Fisher market model with constrained piecewise-linear concave (PLC) utilities. This general class captures many well-studied special cases, including markets with PLC utilities, markets with satiation, and matching markets. For the special case of PLC utilities, although the problem is PPAD-hard, Devanur and Kannan (FOCS 2008) gave a polynomial-time algorithm when the number of items is constant. Our main result is a fixed-parameter approximation scheme for computing an approximate equilibrium, where the parameters are the number of agents and the approximation accuracy. This answers an open question by Devanur and Kannan for PLC utilities, and gives a simpler and faster algorithm for matching markets than the one by Alaei, Jalaly, and Tardos (EC 2017). The main technical idea is to work with the stronger concept of thrifty equilibria and to approximate the input utility functions by `robust' utilities that have favorable marginal properties. With some restrictions, the results also extend to the Arrow--Debreu exchange market model.
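For context, the classical Eisenberg-Gale convex program below captures Fisher market equilibria in the simplest special case of linear utilities; it is standard background rather than the algorithm of this abstract, and the notation is ours.

```latex
% Eisenberg--Gale convex program for a Fisher market with linear utilities:
% agent i has budget B_i and values u_{ij} per unit of item j; x_{ij} is the
% amount of item j allocated to agent i (item supplies normalized to 1).
\max \;\sum_i B_i \log u_i
\quad\text{s.t.}\quad
u_i = \sum_j u_{ij}\, x_{ij} \;\;\forall i,
\qquad
\sum_i x_{ij} \le 1 \;\;\forall j,
\qquad
x_{ij} \ge 0 .
```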
One way of evaluating social choice (voting) rules is through a utilitarian distortion framework. In this model, we assume that agents submit full rankings over the alternatives, and these rankings are generated from underlying, but unknown, quantitative costs. The distortion of a social choice rule is then the ratio of the total social cost of the chosen alternative to the optimal social cost of any alternative; since the true costs are unknown, we consider the worst-case distortion over all possible underlying costs. Analogously, we can consider the worst-case fairness ratio of a social choice rule by comparing a useful notion of fairness (based on approximate majorization) for the chosen alternative to that of the optimal alternative. With an additional metric assumption -- that the costs equal the agent-alternative distances in some metric space -- it is known that the Copeland rule achieves both a distortion and fairness ratio of at most 5. For other rules, only bounds on the distortion are known, e.g., the popular Single Transferable Vote (STV) rule has distortion $O(\log m)$, where $m$ is the number of alternatives. We prove that the distinct notions of distortion and fairness ratio are in fact closely linked -- within an additive factor of 2 for any voting rule -- and thus STV also achieves an $O(\log m)$ fairness ratio. We further extend the notions of distortion and fairness ratio to social choice rules choosing a set of alternatives. By relating the distortion of single-winner rules to multiple-winner rules, we establish that Recursive Copeland achieves a distortion of 5 and a fairness ratio of at most 7 for choosing a set of alternatives.
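Written out, the worst-case distortion informally defined in this abstract can be expressed as follows, where sigma is the profile of submitted rankings, c ranges over cost profiles consistent with sigma, and SC denotes total social cost; the notation is ours, following the abstract's description.

```latex
% Worst-case distortion of a social choice rule f:
% SC(a, c) = \sum_i c_i(a) is the total social cost of alternative a.
\mathrm{dist}(f)
  \;=\;
  \sup_{\sigma}\;
  \sup_{c \,\text{ consistent with }\, \sigma}\;
  \frac{\mathrm{SC}\!\left(f(\sigma),\, c\right)}
       {\min_{a} \mathrm{SC}(a,\, c)} .
```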