
Optimal Algorithms for Multiwinner Elections and the Chamberlin-Courant Rule

 Added by Kangning Wang
Publication date: 2021
Research language: English





We consider the algorithmic question of choosing a subset of candidates of a given size $k$ from a set of $m$ candidates, with knowledge of voters' ordinal rankings over all candidates. We consider the well-known and classic scoring rule for achieving diverse representation: the Chamberlin-Courant (CC) or $1$-Borda rule, where the score of a committee is the average, over the voters, of the rank of the best candidate in the committee for that voter; and its generalization to the average of the top $s$ best candidates, called the $s$-Borda rule. Our first result is an improved analysis of the natural and well-studied greedy heuristic. We show that greedy achieves a $\left(1 - \frac{2}{k+1}\right)$-approximation to the maximization (or satisfaction) version of the CC rule, and a $\left(1 - \frac{2s}{k+1}\right)$-approximation to the $s$-Borda score. Our result improves on the best known approximation algorithm for this problem. We show that these bounds are almost tight. For the dissatisfaction (or minimization) version of the problem, we show that the score of $\frac{m+1}{k+1}$ can be viewed as an optimal benchmark for the CC rule, as it is essentially the best achievable score of any polynomial-time algorithm even when the optimal score is a polynomial factor smaller (under standard computational complexity assumptions). We show that another well-studied algorithm for this problem, called the Banzhaf rule, attains this benchmark. We finally show that for the $s$-Borda rule, when the optimal value is small, these algorithms can be improved by a factor of $\tilde{\Omega}(\sqrt{s})$ via LP rounding. Our upper and lower bounds are a significant improvement over previous results, and taken together, not only enable us to perform a finer comparison of greedy algorithms for these problems, but also provide analytic justification for using such algorithms in practice.
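As a concrete illustration of the greedy heuristic analyzed above, the following is a minimal Python sketch (not the authors' implementation) of greedy committee selection for the satisfaction version of the CC rule: at each step it adds the candidate yielding the largest increase in total voter satisfaction, where a voter's satisfaction is the Borda score of her favourite committee member.

```python
# Minimal sketch of the greedy heuristic for the satisfaction (maximization)
# version of the Chamberlin-Courant (1-Borda) rule. Each voter's preference is
# a complete ranking: a list of candidate indices from most to least preferred.

def greedy_cc(rankings, m, k):
    """Greedily build a size-k committee maximizing average CC satisfaction."""
    # rank_of[v][c] = position (0 = top) of candidate c in voter v's ranking
    rank_of = [{c: pos for pos, c in enumerate(r)} for r in rankings]
    committee = []
    best_rank = [m] * len(rankings)   # rank of best committee member so far (m = none yet)

    for _ in range(k):
        best_gain, best_cand = -1, None
        for c in range(m):
            if c in committee:
                continue
            # total improvement in satisfaction if c were added
            gain = sum(max(0, best_rank[v] - rank_of[v][c]) for v in range(len(rankings)))
            if gain > best_gain:
                best_gain, best_cand = gain, c
        committee.append(best_cand)
        for v in range(len(rankings)):
            best_rank[v] = min(best_rank[v], rank_of[v][best_cand])

    # average satisfaction: Borda score (m - 1 - rank) of each voter's best member
    score = sum(m - 1 - r for r in best_rank) / len(rankings)
    return committee, score


# Example: 4 voters, m = 4 candidates (0..3), committee size k = 2
rankings = [[0, 1, 2, 3], [1, 0, 3, 2], [2, 3, 0, 1], [3, 2, 1, 0]]
print(greedy_cc(rankings, m=4, k=2))
```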




Related research

We study online pricing algorithms for the Bayesian selection problem with production constraints and its generalization to the laminar matroid Bayesian online selection problem. Consider a firm producing (or receiving) multiple copies of different product types over time. The firm can offer the products to arriving buyers, where each buyer is interested in one product type and has a private valuation drawn independently from a possibly different but known distribution. Our goal is to find an adaptive pricing for serving the buyers that maximizes the expected social welfare (or revenue) subject to two constraints. First, at any time the total number of sold items of each type is no more than the number of produced items. Second, the total number of sold items does not exceed the total shipping capacity. This problem is a special case of the well-known matroid Bayesian online selection problem studied in [Kleinberg and Weinberg, 2012], when the underlying matroid is laminar. We give the first Polynomial-Time Approximation Scheme (PTAS) for the above problem as well as its generalization to the laminar matroid Bayesian online selection problem when the depth of the laminar family is bounded by a constant. Our approach is based on rounding the solution of a hierarchy of linear programming relaxations that systematically strengthen the commonly used ex-ante linear programming formulation of these problems and approximate the optimum online solution with any degree of accuracy. Our rounding algorithm respects the relaxed constraints of higher levels of the laminar tree only in expectation, and exploits the negative dependence of the selection rule at lower levels to achieve the concentration required to guarantee feasibility with high probability.
We study two notions of stability in multiwinner elections that are based on the Condorcet criterion. The first notion was introduced by Gehrlein: A committee is stable if each committee member is preferred to each non-member by a (possibly weak) majority of voters. The second notion is called local stability (introduced in this paper): A size-$k$ committee is locally stable in an election with $n$ voters if there is no candidate $c$ and no group of more than $\frac{n}{k+1}$ voters such that each voter in this group prefers $c$ to each committee member. We argue that Gehrlein-stable committees are appropriate for shortlisting tasks, and that locally stable committees are better suited for applications that require proportional representation. The goal of this paper is to analyze these notions in detail, explore their compatibility with notions of proportionality, and investigate the computational complexity of related algorithmic tasks.
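The local-stability condition above can be checked directly from its definition. The following is a minimal Python sketch under an assumed encoding (complete rankings given as lists from most to least preferred); it is illustrative only and not taken from the paper.

```python
# Check local stability: no non-member c may be preferred to every committee
# member by strictly more than n/(k+1) voters.

def is_locally_stable(rankings, committee):
    n, k = len(rankings), len(committee)
    rank_of = [{c: pos for pos, c in enumerate(r)} for r in rankings]
    candidates = set(rankings[0])          # assumes complete rankings
    for c in candidates - set(committee):
        # voters who prefer c to every member of the committee
        supporters = sum(
            1 for v in range(n)
            if all(rank_of[v][c] < rank_of[v][w] for w in committee)
        )
        if supporters > n / (k + 1):
            return False                   # c blocks: the committee is not locally stable
    return True


# Example: 3 voters over candidates {0, 1, 2}, committee {0} of size k = 1
rankings = [[1, 0, 2], [1, 2, 0], [0, 1, 2]]
print(is_locally_stable(rankings, committee=[0]))   # False: 2 > 3/2 voters prefer 1 to 0
```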
A patient seller aims to sell a good to an impatient buyer (i.e., one who discounts utility over time). The buyer will remain in the market for a period of time $T$, and her private value is drawn from a publicly known distribution. What is the revenue-optimal pricing curve (sequence of (price, time) pairs) for the seller? Is randomization of help here? Is the revenue-optimal pricing curve computable in polynomial time? We answer these questions in this paper. We give an efficient algorithm for computing the revenue-optimal pricing curve. We show that pricing curves, which post a price at each point of time and let the buyer pick her utility-maximizing time to buy, are revenue-optimal among a much broader class of sequential lottery mechanisms: namely, mechanisms that allow the seller to post a menu of lotteries at each point of time cannot obtain any higher revenue than pricing curves. We also show that the even broader class of mechanisms that allow the menu of lotteries to be adaptively set can earn strictly higher revenue than that of pricing curves, and the revenue gap can be as large as the support size of the buyer's value distribution.
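To illustrate the buyer side of this setting, the sketch below assumes discrete time and geometric discounting (modelling choices not fixed by the abstract) and computes the time at which an impatient buyer facing a posted pricing curve maximizes her discounted utility.

```python
# How an impatient buyer responds to a posted pricing curve: she buys at the
# time maximizing her discounted utility, or never if no time gives positive utility.

def best_purchase_time(value, pricing_curve, discount):
    """pricing_curve: list of (price, time) pairs; returns (time, utility) or None."""
    best = None
    for price, t in pricing_curve:
        utility = (discount ** t) * (value - price)
        if utility > 0 and (best is None or utility > best[1]):
            best = (t, utility)
    return best   # None means the buyer never buys


# Example: prices decline over three periods; a buyer with value 10 and
# discount factor 0.8 trades off a lower future price against the delay.
curve = [(9.0, 0), (7.0, 1), (4.0, 2)]
print(best_purchase_time(10.0, curve, discount=0.8))   # buys at time 2
```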
A preference profile is single-peaked on a tree if the candidate set can be equipped with a tree structure so that the preferences of each voter are decreasing from their top candidate along all paths in the tree. This notion was introduced by Demange (1982), and subsequently Trick (1989) described an efficient algorithm for deciding whether a given profile is single-peaked on a tree. We study the complexity of multiwinner elections under several variants of the Chamberlin-Courant rule for preferences single-peaked on trees. We show that the egalitarian version of this problem admits a polynomial-time algorithm. For the utilitarian version, we prove that winner determination remains NP-hard, even for the Borda scoring function; however, a winning committee can be found in polynomial time if either the number of leaves or the number of internal vertices of the underlying tree is bounded by a constant. To benefit from these positive results, we need a procedure that can determine whether a given profile is single-peaked on a tree that has additional desirable properties (such as a small number of leaves). To address this challenge, we develop a structural approach that enables us to compactly represent all trees with respect to which a given profile is single-peaked. We show how to use this representation to efficiently find the best tree for a given profile for use with our winner determination algorithms: given a profile, we can efficiently find a tree with the minimum number of leaves, or a tree with the minimum number of internal vertices, among trees on which the profile is single-peaked. We also consider several other optimization criteria for trees: for some we obtain polynomial-time algorithms, while for others we show NP-hardness results.
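For a fixed tree, the single-peakedness condition above can be verified directly from the definition. The sketch below is illustrative only (it is not Trick's algorithm, which finds a suitable tree rather than checking a given one) and assumes complete rankings and a connected tree given as an adjacency list.

```python
# Check whether a profile is single-peaked on a *given* tree: along every path
# from a voter's top candidate, candidates must appear in decreasing preference.
from collections import deque

def parents_from(tree, src):
    """BFS parent pointers from src; tree is a dict: candidate -> list of neighbours."""
    parent = {src: None}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        for w in tree[u]:
            if w not in parent:
                parent[w] = u
                queue.append(w)
    return parent

def single_peaked_on_tree(rankings, tree):
    for ranking in rankings:
        rank = {c: i for i, c in enumerate(ranking)}
        parent = parents_from(tree, ranking[0])       # paths start at the voter's top candidate
        for c in tree:
            # walking from c back toward the top candidate, preference must improve
            cur = c
            while parent[cur] is not None:
                if rank[parent[cur]] > rank[cur]:
                    return False
                cur = parent[cur]
    return True


# Example: a path 0 - 1 - 2; the profile below is single-peaked on it
tree = {0: [1], 1: [0, 2], 2: [1]}
rankings = [[1, 0, 2], [0, 1, 2], [2, 1, 0]]
print(single_peaked_on_tree(rankings, tree))   # True
```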
The probabilistic serial (PS) rule is one of the most prominent randomized rules for the assignment problem. It is well-known for its superior fairness and welfare properties. However, PS is not immune to manipulative behaviour by the agents. We examine computational and non-computational aspects of strategising under the PS rule. Firstly, we study the computational complexity of an agent manipulating the PS rule. We present polynomial-time algorithms for optimal manipulation. Secondly, we show that expected utility best responses can cycle. Thirdly, we examine the existence and computation of Nash equilibrium profiles under the PS rule. We show that a pure Nash equilibrium is guaranteed to exist under the PS rule. For two agents, we identify two different types of preference profiles that are not only in Nash equilibrium but can also be computed in linear time. Finally, we conduct experiments to check the frequency of manipulability of the PS rule under different combinations of the number of agents, objects, and utility functions.
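For reference, the probabilistic serial rule itself is the simultaneous-eating procedure, which can be simulated directly. The sketch below is a standard illustration (not the paper's code) and assumes complete preferences and at least as many objects as agents.

```python
# Probabilistic serial (simultaneous eating): each agent eats her most preferred
# remaining object at unit speed; the result is the fractional (random) assignment.

def probabilistic_serial(preferences):
    """preferences[i] is agent i's ranking of objects, most preferred first."""
    n = len(preferences)
    objects = {o for pref in preferences for o in pref}
    remaining = {o: 1.0 for o in objects}             # supply left of each object
    assignment = [{o: 0.0 for o in objects} for _ in range(n)]
    time_left = 1.0                                   # each agent eats for 1 unit of time in total

    while time_left > 1e-12:
        # each agent targets her best object that still has supply
        target = [next(o for o in preferences[i] if remaining[o] > 1e-12) for i in range(n)]
        eaters = {}
        for i, o in enumerate(target):
            eaters.setdefault(o, []).append(i)
        # advance until some targeted object runs out (or the time budget is spent)
        step = min(time_left, min(remaining[o] / len(a) for o, a in eaters.items()))
        for o, a in eaters.items():
            for i in a:
                assignment[i][o] += step
            remaining[o] -= step * len(a)
        time_left -= step
    return assignment


# Example: two agents, two objects; both prefer 'a' to 'b'
print(probabilistic_serial([['a', 'b'], ['a', 'b']]))
# each agent receives 'a' with probability 1/2 and 'b' with probability 1/2
```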
