
Concentration of Distortion: The Value of Extra Voters in Randomized Social Choice

Posted by Brandon Fain
Publication date: 2020
Research field: Informatics Engineering
Paper language: English





We study higher statistical moments of Distortion for randomized social choice in a metric implicit utilitarian model. The Distortion of a social choice mechanism is the expected approximation factor with respect to the optimal utilitarian social cost (OPT). The $k^{th}$ moment of Distortion is the expected approximation factor with respect to the $k^{th}$ power of OPT. We consider mechanisms that elicit alternatives by randomly sampling voters for their favorite alternative. We design two families of mechanisms that provide constant (with respect to the number of voters and alternatives) $k^{th}$ moment of Distortion using just $k$ samples if all voters can then participate in a vote among the proposed alternatives, or $2k-1$ samples if only the sampled voters can participate. We also show that these numbers of samples are tight. Such mechanisms deviate from a constant approximation to OPT with probability that drops exponentially in the number of samples, independent of the total number of voters and alternatives. We conclude with simulations on real-world Participatory Budgeting data to qualitatively complement our theoretical insights.
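To make the sampling mechanisms concrete, here is a minimal simulation sketch (not the paper's exact construction) in which voters and alternatives live on the real line, k sampled voters each propose their favorite alternative, and all voters then hold a plurality vote over the proposals; the 1-D metric, the plurality runoff, and all names are illustrative assumptions.

```python
import random

# Illustrative sketch: sample k voters for their favorite alternative,
# then let all voters vote (plurality) among the proposed alternatives.
# The 1-D metric and plurality runoff are simplifying assumptions.

def social_cost(alt, voters):
    return sum(abs(v - alt) for v in voters)

def sampled_mechanism(voters, alternatives, k, rng):
    # Round 1: k sampled voters each name their favorite alternative.
    proposals = {min(alternatives, key=lambda a: abs(a - v))
                 for v in rng.sample(voters, k)}
    # Round 2: every voter votes for their closest proposed alternative.
    votes = {}
    for v in voters:
        fav = min(proposals, key=lambda a: abs(a - v))
        votes[fav] = votes.get(fav, 0) + 1
    return max(votes, key=votes.get)

rng = random.Random(0)
voters = [rng.gauss(0, 1) for _ in range(1000)]
alternatives = [rng.uniform(-3, 3) for _ in range(20)]
opt = min(social_cost(a, voters) for a in alternatives)
winner = sampled_mechanism(voters, alternatives, k=5, rng=rng)
print("empirical approximation factor:", social_cost(winner, voters) / opt)
```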




Read also

One way of evaluating social choice (voting) rules is through a utilitarian distortion framework. In this model, we assume that agents submit full rankings over the alternatives, and these rankings are generated from underlying, but unknown, quantitative costs. The \emph{distortion} of a social choice rule is then the ratio of the total social cost of the chosen alternative to the optimal social cost of any alternative; since the true costs are unknown, we consider the worst-case distortion over all possible underlying costs. Analogously, we can consider the worst-case \emph{fairness ratio} of a social choice rule by comparing a useful notion of fairness (based on approximate majorization) for the chosen alternative to that of the optimal alternative. With an additional metric assumption -- that the costs equal the agent-alternative distances in some metric space -- it is known that the Copeland rule achieves both a distortion and fairness ratio of at most 5. For other rules, only bounds on the distortion are known, e.g., the popular Single Transferable Vote (STV) rule has distortion $O(\log m)$, where $m$ is the number of alternatives. We prove that the distinct notions of distortion and fairness ratio are in fact closely linked -- within an additive factor of 2 for any voting rule -- and thus STV also achieves an $O(\log m)$ fairness ratio. We further extend the notions of distortion and fairness ratio to social choice rules choosing a \emph{set} of alternatives. By relating the distortion of single-winner rules to multiple-winner rules, we establish that Recursive Copeland achieves a distortion of 5 and a fairness ratio of at most 7 for choosing a set of alternatives.
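To make the distortion ratio concrete, the following small sketch computes it on a toy metric instance, using plurality as an arbitrary example rule; the instance, the rule, and the names are purely illustrative.

```python
# Illustrative sketch of the distortion ratio: compare the social cost
# of a rule's winner to the optimal social cost, given the hidden
# agent-to-alternative distances. Instance and rule are arbitrary.

def social_cost(alt, dist):
    return sum(d[alt] for d in dist)

# dist[i][a] = distance from agent i to alternative a
dist = [
    {"x": 1.0, "y": 2.0},
    {"x": 1.0, "y": 2.0},
    {"x": 3.0, "y": 0.0},
]

# Plurality sees only each agent's top choice.
tops = [min(d, key=d.get) for d in dist]
winner = max(set(tops), key=tops.count)

opt_cost = min(social_cost(a, dist) for a in dist[0])
print("distortion on this instance:", social_cost(winner, dist) / opt_cost)
```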
We study social choice rules under the utilitarian distortion framework, with an additional metric assumption on the agents' costs over the alternatives. In this approach, these costs are given by an underlying metric on the set of all agents plus alternatives. Social choice rules have access to only the ordinal preferences of agents but not the latent cardinal costs that induce them. Distortion is then defined as the ratio between the social cost (typically the sum of agent costs) of the alternative chosen by the mechanism at hand, and that of the optimal alternative chosen by an omniscient algorithm. The worst-case distortion of a social choice rule is, therefore, a measure of how close it always gets to the optimal alternative without any knowledge of the underlying costs. Under this model, it has been conjectured that Ranked Pairs, the well-known weighted-tournament rule, achieves a distortion of at most 3 [Anshelevich et al. 2015]. We disprove this conjecture by constructing a sequence of instances which shows that the worst-case distortion of Ranked Pairs is at least 5. Our lower bound on the worst-case distortion of Ranked Pairs matches a previously known upper bound for the Copeland rule, proving that in the worst case, the simpler Copeland rule is at least as good as Ranked Pairs. And as long as we are limited to (weighted or unweighted) tournament rules, we demonstrate that randomization cannot help achieve an expected worst-case distortion of less than 3. Using the concept of approximate majorization within the distortion framework, we prove that Copeland and Randomized Dictatorship achieve low constant factor fairness ratios (5 and 3 respectively), which is a considerable generalization of similar results for the sum of costs and single largest cost objectives. In addition to all of the above, we outline several interesting directions for further research in this space.
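Since both abstracts above lean on the Copeland rule, here is a standard textbook implementation of it from ordinal rankings, included only for illustration; ties in Copeland score are broken arbitrarily.

```python
from itertools import combinations

# Standard Copeland rule: an alternative scores a point for each pairwise
# majority contest it wins (half a point for a tie). Rankings are lists
# from most- to least-preferred.

def copeland_winner(rankings, alternatives):
    score = {a: 0 for a in alternatives}
    for a, b in combinations(alternatives, 2):
        a_wins = sum(r.index(a) < r.index(b) for r in rankings)
        b_wins = len(rankings) - a_wins
        if a_wins > b_wins:
            score[a] += 1
        elif b_wins > a_wins:
            score[b] += 1
        else:                      # pairwise tie: half a point each
            score[a] += 0.5
            score[b] += 0.5
    return max(score, key=score.get)

rankings = [["a", "b", "c"], ["b", "c", "a"], ["a", "c", "b"]]
print(copeland_winner(rankings, ["a", "b", "c"]))  # prints "a"
```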
In large scale collective decision making, social choice is a normative study of how one ought to design a protocol for reaching consensus. However, in instances where the underlying decision space is too large or complex for ordinal voting, standard voting methods of social choice may be impractical. How then can we design a mechanism - preferably decentralized, simple, scalable, and not requiring any special knowledge of the decision space - to reach consensus? We propose sequential deliberation as a natural solution to this problem. In this iterative method, successive pairs of agents bargain over the decision space using the previous decision as a disagreement alternative. We describe the general method and analyze the quality of its outcome when the space of preferences defines a median graph. We show that sequential deliberation finds a 1.208-approximation to the optimal social cost on such graphs, coming very close to this value with only a small constant number of agents sampled from the population. We also show lower bounds on simpler classes of mechanisms to justify our design choices. We further show that sequential deliberation is ex-post Pareto efficient and has truthful reporting as an equilibrium of the induced extensive form game. We finally show that for general metric spaces, the second moment of the distribution of social cost of the outcomes produced by sequential deliberation is also bounded.
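Below is a minimal sketch of sequential deliberation on the real line, under the assumption that each bargaining round resolves to the median of the two sampled agents' ideal points and the current disagreement alternative (a common way to model bargaining outcomes on median graphs); the protocol details, names, and parameters here are illustrative rather than the paper's exact mechanism.

```python
import random
import statistics

# Sketch of sequential deliberation on the line: in each round, two
# randomly sampled agents "bargain", and the outcome is taken to be the
# median of their ideal points and the current disagreement alternative.

def sequential_deliberation(peaks, rounds, rng):
    outcome = rng.choice(peaks)            # arbitrary initial alternative
    for _ in range(rounds):
        u, v = rng.sample(peaks, 2)        # two agents bargain this round
        outcome = statistics.median([u, v, outcome])
    return outcome

rng = random.Random(1)
peaks = [rng.gauss(0, 1) for _ in range(500)]
outcome = sequential_deliberation(peaks, rounds=10, rng=rng)

cost = lambda x: sum(abs(p - x) for p in peaks)
opt = cost(statistics.median(peaks))       # the 1-D social cost minimizer
print("approximation after 10 rounds:", cost(outcome) / opt)
```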
Utsav Sadana, Erick Delage (2020)
Conditional Value at Risk (CVaR) is widely used to account for the preferences of a risk-averse agent in extreme loss scenarios. To study the effectiveness of randomization in interdiction games with an interdictor that is both risk- and ambiguity-averse, we introduce a distributionally robust network interdiction game where the interdictor randomizes over the feasible interdiction plans in order to minimize the worst-case CVaR of the flow with respect to both the unknown distribution of the capacity of the arcs and his mixed strategy over interdicted arcs. The flow player, on the contrary, maximizes the total flow in the network. By using the budgeted uncertainty set, we control the degree of conservatism in the model and reformulate the interdictor's non-linear problem as a bi-convex optimization problem. For solving this problem to any given optimality level, we devise a spatial branch and bound algorithm that uses the McCormick inequalities and reduced reformulation linearization technique (RRLT) to obtain a convex relaxation of the problem. We also develop a column generation algorithm to identify the optimal support of the convex relaxation, which is then used in the coordinate descent algorithm to determine the upper bounds. The efficiency and convergence of the spatial branch and bound algorithm are established in the numerical experiments. Further, our numerical experiments show that randomized strategies can have significantly better in-sample and out-of-sample performance than optimal deterministic ones.
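As a generic illustration of the risk measure referenced above (and nothing specific to the interdiction model), the following sketch estimates CVaR from samples as the average loss in the worst (1 - alpha) tail; the loss distribution is made up.

```python
import numpy as np

# Empirical CVaR at level alpha: the mean of all sampled losses at or
# above the empirical Value at Risk (the alpha-quantile of losses).

def cvar(losses, alpha=0.95):
    losses = np.asarray(losses)
    var = np.quantile(losses, alpha)       # empirical Value at Risk
    return losses[losses >= var].mean()    # average loss in the tail

rng = np.random.default_rng(0)
losses = rng.normal(loc=100.0, scale=15.0, size=10_000)
print("VaR_0.95  ~", np.quantile(losses, 0.95))
print("CVaR_0.95 ~", cvar(losses, 0.95))
```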
Without monetary payments, the Gibbard-Satterthwaite theorem proves that under mild requirements all truthful social choice mechanisms must be dictatorships. When payments are allowed, the Vickrey-Clarke-Groves (VCG) mechanism implements the value-maximizing choice, and has many other good properties: it is strategy-proof, onto, deterministic, individually rational, and does not make positive transfers to the agents. By Roberts' theorem, with three or more alternatives, the weighted VCG mechanisms are essentially unique for domains with quasi-linear utilities. The goal of this paper is to characterize domains of non-quasi-linear utilities where reasonable mechanisms (with VCG-like properties) exist. Our main result is a tight characterization of the maximal non-quasi-linear utility domain, which we call the largest parallel domain. We extend Roberts' theorem to parallel domains, and use the generalized theorem to prove two impossibility results. First, any reasonable mechanism must be dictatorial when the utility domain is quasi-linear together with any single non-parallel type. Second, for richer utility domains that still differ very slightly from quasi-linearity, every strategy-proof, onto and deterministic mechanism must be a dictatorship.
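For reference, here is a compact sketch of the classical VCG mechanism with Clarke pivot payments for choosing a single alternative under quasi-linear utilities, as referenced above; the valuation numbers are made up for illustration.

```python
# Classical VCG with Clarke pivot payments: pick the value-maximizing
# alternative and charge each agent the externality it imposes on others.

def vcg(valuations, alternatives):
    # Total reported value for an alternative, optionally skipping agent i.
    def total(alt, skip=None):
        return sum(v[alt] for i, v in enumerate(valuations) if i != skip)

    choice = max(alternatives, key=total)
    payments = []
    for i in range(len(valuations)):
        # Others' best achievable welfare if agent i were absent ...
        without_i = max(total(a, skip=i) for a in alternatives)
        # ... minus others' welfare at the actual choice.
        payments.append(without_i - total(choice, skip=i))
    return choice, payments

valuations = [{"x": 5, "y": 0}, {"x": 0, "y": 4}, {"x": 0, "y": 3}]
print(vcg(valuations, ["x", "y"]))  # chooses "y" with payments [0, 2, 1]
```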
