
Representative Committees of Peers

Posted by: Fedor Sandomirskiy
Publication date: 2020
Research field: Informatics Engineering
Paper language: English





A population of voters must elect representatives among themselves to decide on a sequence of possibly unforeseen binary issues. Voters care only about the final decision, not the elected representatives. The disutility of a voter is proportional to the fraction of issues on which his preferences disagree with the decision. While an issue-by-issue vote by all voters would maximize social welfare, we are interested in how well the preferences of the population can be approximated by a small committee. We show that a $k$-sortition (a random committee of $k$ voters with majority vote within the committee) leads to an outcome within a factor $1+O(1/k)$ of the optimal social cost for any number of voters $n$, any number of issues $m$, and any preference profile. For a small number of issues $m$, the social cost can be made even closer to optimal by delegation procedures that weigh committee members according to their number of followers. However, for large $m$, we demonstrate that the $k$-sortition is the worst-case optimal rule within a broad family of committee-based rules that take into account metric information about the preference profile of the whole population.
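As a rough illustration of the $k$-sortition guarantee, the following Monte-Carlo sketch (not from the paper; the i.i.d. uniform preference model, parameter values, and function names are my own assumptions) compares the expected social cost of a random $k$-member committee deciding by internal majority with the issue-by-issue majority optimum:

```python
import numpy as np

def social_cost(prefs, decisions):
    """Total disutility: each voter's cost is the fraction of issues on which
    the decision disagrees with the voter's preference; summed over voters."""
    return np.mean(prefs != decisions, axis=1).sum()

def k_sortition_cost(prefs, k, rng):
    """Draw a uniformly random committee of k voters and decide every issue
    by majority vote inside the committee (ties broken toward 1)."""
    committee = rng.choice(prefs.shape[0], size=k, replace=False)
    decisions = (prefs[committee].mean(axis=0) >= 0.5).astype(int)
    return social_cost(prefs, decisions)

def optimal_cost(prefs):
    """Issue-by-issue majority over the whole population minimizes social cost."""
    decisions = (prefs.mean(axis=0) >= 0.5).astype(int)
    return social_cost(prefs, decisions)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, m, k, trials = 1000, 50, 7, 500
    prefs = (rng.random((n, m)) < 0.5).astype(int)   # hypothetical i.i.d. profile
    opt = optimal_cost(prefs)
    avg = np.mean([k_sortition_cost(prefs, k, rng) for _ in range(trials)])
    print(f"optimal cost: {opt:.1f}   expected k-sortition cost: {avg:.1f}   "
          f"ratio: {avg / opt:.3f}   (the paper bounds this ratio by 1 + O(1/k))")
```

Under this toy preference model the empirical ratio stays close to 1 even for small committees, consistent with (though of course not a proof of) the worst-case bound stated above.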




Read also

The spread of disinformation on social media platforms such as Facebook is harmful to society. This harm can take the form of a gradual degradation of public discourse; but it can also take the form of sudden dramatic events such as the recent insurrection on Capitol Hill. The platforms themselves are in the best position to prevent the spread of disinformation, as they have the best access to relevant data and the expertise to use it. However, filtering disinformation is costly, not only for implementing filtering algorithms or employing manual filtering effort, but also because removing such highly viral content impacts user growth and thus potential advertising revenue. Since the costs of harmful content are borne by other entities, the platform will therefore have no incentive to filter at a socially-optimal level. This problem is similar to the problem of environmental regulation, in which the costs of adverse events are not directly borne by a firm, the mitigation effort of a firm is not observable, and the causal link between a harmful consequence and a specific failure is difficult to prove. In the environmental regulation domain, one solution to this issue is to perform costly monitoring to ensure that the firm takes adequate precautions according to a specified rule. However, classifying disinformation is performative, and thus a fixed rule becomes less effective over time. Encoding our domain as a Markov decision process, we demonstrate that no penalty based on a static rule, no matter how large, can incentivize adequate filtering by the platform. Penalties based on an adaptive rule can incentivize optimal effort, but counterintuitively, only if the regulator sufficiently overreacts to harmful events by requiring a greater-than-optimal level of filtering.
In the context of computational social choice, we study voting methods that assign a set of winners to each profile of voter preferences. A voting method satisfies the property of positive involvement (PI) if for any election in which a candidate x would be among the winners, adding another voter to the election who ranks x first does not cause x to lose. Surprisingly, a number of standard voting methods violate this natural property. In this paper, we investigate different ways of measuring the extent to which a voting method violates PI, using computer simulations. We consider the probability (under different probability models for preferences) of PI violations in randomly drawn profiles vs. profile-coalition pairs (involving coalitions of different sizes). We argue that in order to choose between a voting method that satisfies PI and one that does not, we should consider the probability of PI violation conditional on the voting methods choosing different winners. We should also relativize the probability of PI violation to what we call voter potency, the probability that a voter causes a candidate to lose. Although absolute frequencies of PI violations may be low, after this conditioning and relativization, we see that under certain voting methods that violate PI, much of a voter's potency is turned against them - in particular, against their desire to see their favorite candidate elected.
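A hedged sketch of the kind of simulation this abstract describes, using instant-runoff voting (a standard method known to fail positive involvement) under an impartial-culture model of random rankings; the profile sizes, tie-handling, and helper names are my own choices, not the authors':

```python
import random
from collections import Counter
from itertools import permutations

def irv_winners(profile, candidates):
    """Instant-runoff winners for a profile of strict rankings.
    Simplified tie handling: all candidates tied for fewest first-place
    votes are eliminated together, unless that would eliminate everyone."""
    remaining = set(candidates)
    while True:
        firsts = Counter(next(c for c in ballot if c in remaining)
                         for ballot in profile)
        fewest = min(firsts.get(c, 0) for c in remaining)
        losers = {c for c in remaining if firsts.get(c, 0) == fewest}
        if losers == remaining:          # all remaining candidates tie: they all win
            return remaining
        remaining -= losers

def pi_violation_rate(num_voters=11, num_cands=4, trials=5000, seed=1):
    """Estimate how often adding one voter who ranks a current winner x first
    causes x to lose -- a positive-involvement violation."""
    random.seed(seed)
    candidates = list(range(num_cands))
    orders = list(permutations(candidates))
    violations = checks = 0
    for _ in range(trials):
        profile = [random.choice(orders) for _ in range(num_voters)]
        for x in irv_winners(profile, candidates):
            rest = [c for c in candidates if c != x]
            random.shuffle(rest)
            new_ballot = tuple([x] + rest)       # the new voter ranks x first
            checks += 1
            if x not in irv_winners(profile + [new_ballot], candidates):
                violations += 1
    return violations / checks

if __name__ == "__main__":
    print(f"estimated PI violation rate under IRV: {pi_violation_rate():.4f}")
```

The raw frequency produced this way corresponds to the "absolute frequency" mentioned in the abstract; the paper's conditioning on methods choosing different winners and its relativization to voter potency would be layered on top of such counts.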
Supply chains are the backbone of the global economy. Disruptions to them can be costly. Centrally managed supply chains invest in ensuring their resilience. Decentralized supply chains, however, must rely upon the self-interest of their individual components to maintain the resilience of the entire chain. We examine the incentives that independent self-interested agents have in forming a resilient supply chain network in the face of production disruptions and competition. In our model, competing suppliers are subject to yield uncertainty (they deliver less than ordered) and congestion (lead-time uncertainty, or soft supply caps). Competing retailers must decide which suppliers to link to based on both price and reliability. In the presence of yield uncertainty only, the resulting supply chain networks are sparse. Retailers concentrate their links on a single supplier, counter to the idea that they should mitigate yield uncertainty by diversifying their supply base. This happens because retailers benefit from supply variance. It suggests that competition will amplify output uncertainty. When congestion is included as well, the resulting networks are denser and resemble the bipartite expander graphs that have been proposed in the supply chain literature, thereby providing the first example of endogenous formation of resilient supply chain networks, without resilience being explicitly encoded in payoffs. Finally, we show that a supplier's investments in improved yield can make it worse off. This happens because high production output saturates the market, which, in turn, lowers prices and profits for participants.
Most online platforms strive to learn from interactions with users, and many engage in exploration: making potentially suboptimal choices for the sake of acquiring new information. We study the interplay between exploration and competition: how such platforms balance the exploration for learning and the competition for users. Here users play three distinct roles: they are customers that generate revenue, they are sources of data for learning, and they are self-interested agents who choose among the competing platforms. We consider a stylized duopoly model in which two firms face the same multi-armed bandit problem. Users arrive one by one and choose between the two firms, so that each firm makes progress on its bandit problem only if it is chosen. Through a mix of theoretical results and numerical simulations, we study whether and to what extent competition incentivizes the adoption of better bandit algorithms, and whether it leads to welfare increases for users. We find that stark competition induces firms to commit to a greedy bandit algorithm that leads to low welfare. However, weakening competition by providing firms with some free users incentivizes better exploration strategies and increases welfare. We investigate two channels for weakening the competition: relaxing the rationality of users and giving one firm a first-mover advantage. Our findings are closely related to the competition vs. innovation relationship, and elucidate the first-mover advantage in the digital economy.
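A toy sketch of such a duopoly, assuming Bernoulli arms and users who join the firm with the better empirical track record; the greedy policy, the arm means, and the reputation rule below are illustrative assumptions of mine, not the authors' exact model:

```python
import numpy as np

class GreedyFirm:
    """A firm running a greedy bandit algorithm over Bernoulli arms: after
    trying every arm once, it always pulls the arm with the best empirical mean."""
    def __init__(self, num_arms):
        self.counts = np.zeros(num_arms)
        self.rewards = np.zeros(num_arms)

    def choose_arm(self):
        untried = np.where(self.counts == 0)[0]
        if untried.size > 0:
            return int(untried[0])
        return int(np.argmax(self.rewards / self.counts))

    def update(self, arm, reward):
        self.counts[arm] += 1
        self.rewards[arm] += reward

def simulate(num_users=2000, num_arms=5, seed=0):
    rng = np.random.default_rng(seed)
    arm_means = rng.random(num_arms)          # both firms face the same bandit instance
    firms = [GreedyFirm(num_arms), GreedyFirm(num_arms)]
    served, total_reward = [0, 0], 0.0
    for _ in range(num_users):
        # Each arriving user joins the firm with the higher average reward so far
        # (a crude stand-in for user self-interest); ties are broken at random.
        avg = [f.rewards.sum() / max(f.counts.sum(), 1.0) for f in firms]
        choice = int(np.argmax(avg)) if avg[0] != avg[1] else int(rng.integers(2))
        arm = firms[choice].choose_arm()
        reward = float(rng.random() < arm_means[arm])
        firms[choice].update(arm, reward)
        served[choice] += 1
        total_reward += reward
    print(f"users served per firm: {served}   mean user welfare: "
          f"{total_reward / num_users:.3f}   best arm mean: {arm_means.max():.3f}")

if __name__ == "__main__":
    simulate()
```

Replacing GreedyFirm with a better-exploring policy (e.g. epsilon-greedy or UCB) for one or both firms is the kind of comparison the abstract's simulations are about; this sketch only sets up the user-choice feedback loop.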
In 1979, Hylland and Zeckhauser \cite{hylland} gave a simple and general scheme for implementing a one-sided matching market using the power of a pricing mechanism. Their method has nice properties -- it is incentive compatible in the large and produces an allocation that is Pareto optimal -- and hence it provides an attractive, off-the-shelf method for running an application involving such a market. With matching markets becoming ever more prevalent and impactful, it is imperative to finally settle the computational complexity of this scheme. We present the following partial resolution: 1. A combinatorial, strongly polynomial time algorithm for the special case of $0/1$ utilities. 2. An example that has only irrational equilibria, hence proving that this problem is not in PPAD. Furthermore, its equilibria are disconnected, hence showing that the problem does not admit a convex programming formulation. 3. A proof of membership of the problem in the class FIXP. We leave open the (difficult) question of determining if the problem is FIXP-hard. Settling the status of the special case when utilities are in the set $\{0, \frac{1}{2}, 1\}$ appears to be even more difficult.
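For context, a compact and slightly simplified statement of the Hylland-Zeckhauser pseudo-market, paraphrased from the standard formulation rather than from this abstract (the full scheme additionally asks each agent to pick a cheapest bundle among its optimal ones):

```latex
% Sketch: n agents, n goods with unit supply, fractional allocations x_{ij},
% prices p_j >= 0, and a unit budget of artificial currency per agent.
\begin{align*}
  \text{Agent } i:\quad & x_i \in \arg\max \Big\{ \textstyle\sum_j u_{ij} x_{ij}
      \;:\; x_{ij} \ge 0,\ \textstyle\sum_j x_{ij} = 1,\ \textstyle\sum_j p_j x_{ij} \le 1 \Big\},\\
  \text{Clearing:}\quad & \textstyle\sum_i x_{ij} \le 1 \text{ for all } j,
      \text{ with equality whenever } p_j > 0.
\end{align*}
```

Computing prices and allocations satisfying these conditions is the problem whose complexity the abstract addresses.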
