
Optimal incentives for collective intelligence

Added by Richard Mann
Publication date: 2016
Research language: English





Collective intelligence is the ability of a group to perform more effectively than any individual alone. Diversity among group members is a key condition for the emergence of collective intelligence, but maintaining diversity is challenging in the face of social pressure to imitate one's peers. We investigate the role incentives play in maintaining useful diversity through an evolutionary game-theoretic model of collective prediction. We show that market-based incentive systems produce herding effects, reduce information available to the group and suppress collective intelligence. In response, we propose a new incentive scheme that rewards accurate minority predictions, and show that this produces optimal diversity and collective predictive accuracy. We conclude that real-world systems should reward those who have demonstrated accuracy when majority opinion has been in error.
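The trade-off the abstract describes, rewarding individual accuracy (which favours copying the majority) versus rewarding accurate minority predictions (which favours using one's own information), can be illustrated with a small Monte Carlo sketch. This is only a toy under assumed parameters (binary predictions, private signals correct with probability q, imitators copying the observed majority); it is not the paper's evolutionary game-theoretic model.

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_payoffs(frac_independent, n_agents=101, q=0.6, n_trials=2000):
    """Average per-round payoff of 'independent' agents (who follow a private
    signal that is correct with probability q) and 'imitators' (who copy the
    majority of the independents), under two toy reward rules:
      market   - pay 1 whenever an agent's own prediction is correct
      minority - pay 1 only when the agent is correct AND in the overall minority
    """
    n_ind = int(frac_independent * n_agents)
    assert 0 < n_ind < n_agents
    totals = {"market": np.zeros(2), "minority": np.zeros(2)}
    for _ in range(n_trials):
        truth = rng.integers(2)
        ind_votes = np.where(rng.random(n_ind) < q, truth, 1 - truth)
        copied = int(ind_votes.sum() * 2 > n_ind)          # imitators copy the observed majority
        votes = np.concatenate([ind_votes, np.full(n_agents - n_ind, copied)])
        majority = int(votes.sum() * 2 > n_agents)
        correct = votes == truth
        in_minority = votes != majority
        for g, sl in enumerate((slice(0, n_ind), slice(n_ind, None))):
            totals["market"][g] += correct[sl].mean()
            totals["minority"][g] += (correct[sl] & in_minority[sl]).mean()
    return {rule: np.round(t / n_trials, 3) for rule, t in totals.items()}

for f in (0.2, 0.5, 0.8):
    print(f, mean_payoffs(f))
```

In this toy, the market-style rule pays imitators at least as well as independents (the copied majority is usually right), so there is no payoff pressure to remain independent; under the minority rule, only agents who deviate from the crowd and turn out to be right are paid, which is the kind of pressure towards diversity the abstract argues for.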




Related research

We consider settings in which we wish to incentivize myopic agents (such as Airbnb landlords, who may emphasize short-term profits and property safety) to treat arriving clients fairly, in order to prevent overall discrimination against individuals or groups. We model such settings in both classical and contextual bandit models in which the myopic agents maximize rewards according to current empirical averages, but are also amenable to exogenous payments that may cause them to alter their choices. Our notion of fairness asks that more qualified individuals are never (probabilistically) preferred over less qualified ones [Joseph et al.]. We investigate whether it is possible to design inexpensive subsidy or payment schemes for a principal to motivate myopic agents to play fairly in all or almost all rounds. When the principal has full information about the state of the myopic agents, we show it is possible to induce fair play on every round with a subsidy scheme of total cost $o(T)$ (for the classic setting with $k$ arms, $\tilde{O}(\sqrt{k^3 T})$, and for the $d$-dimensional linear contextual setting $\tilde{O}(d\sqrt{k^3 T})$). If the principal has much more limited information (as might often be the case for an external regulator or watchdog), and only observes the number of rounds in which members from each of the $k$ groups were selected, but not the empirical estimates maintained by the myopic agent, the design of such a scheme becomes more complex. We show both positive and negative results in the classic and linear bandit settings by upper and lower bounding the cost of fair subsidy schemes.
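To give a concrete feel for the full-information case, the sketch below has a myopic greedy agent and a principal who, each round, pays exactly the empirical-mean gap needed to make a designated arm the agent's myopically optimal choice, accumulating the total subsidy cost. The designated "fair" arm here is simply the least-played arm; this is a stand-in for, not an implementation of, the weakly meritocratic fairness notion of [Joseph et al.], and all parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def total_subsidy(true_means, horizon=10_000):
    """Toy full-information subsidy scheme for a k-armed bandit."""
    k = len(true_means)
    counts = np.ones(k)                   # one free initial pull per arm
    sums = rng.normal(true_means)         # initial empirical sums
    cost = 0.0
    for _ in range(horizon):
        means = sums / counts
        target = int(np.argmin(counts))   # arm the principal wants played (stand-in fairness target)
        greedy = int(np.argmax(means))    # arm the unsubsidized myopic agent would pick
        cost += max(0.0, means[greedy] - means[target])   # smallest payment making `target` optimal
        reward = rng.normal(true_means[target])
        counts[target] += 1
        sums[target] += reward
    return cost

print(total_subsidy([0.2, 0.5, 0.8]))
```

The point of the sketch is only the accounting: the per-round payment is the empirical gap between the agent's greedy choice and the principal's desired choice, and a scheme is judged by how slowly the total payment grows with the horizon $T$.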
Data-driven segmentation is the powerhouse behind the success of online advertising. Various underlying challenges for successful segmentation have been studied by the academic community, with one notable exception: consumers' incentives have typically been ignored. This lacuna is troubling as consumers have much control over the data being collected. Missing or manipulated data could lead to inferior segmentation. The current work proposes a model of prior-free segmentation, inspired by models of facility location, and to the best of our knowledge provides the first segmentation mechanism that addresses incentive compatibility, efficient market segmentation and privacy in the absence of a common prior.
Motivated in part by online marketplaces such as ridesharing and freelancing platforms, we study two-sided matching markets where agents are heterogeneous in their compatibility with different types of jobs: flexible agents can fulfill any job, whereas each specialized agent can only be matched to a specific subset of jobs. When the set of jobs compatible with each agent is known, the full-information first-best throughput (i.e., number of matches) can be achieved by prioritizing dispatch of specialized agents as much as possible. When agents are strategic, however, we show that such aggressive reservation of flexible capacity incentivizes flexible agents to pretend to be specialized. The resulting equilibrium throughput could be even lower than the outcome under a baseline policy, which does not reserve flexible capacity and simply dispatches jobs to agents at random. To balance matching efficiency with agents' strategic considerations, we introduce a novel robust capacity reservation (RCR) policy. The RCR policy retains a similar structure to the first-best policy, but offers additional and seemingly incompatible edges along which jobs can be dispatched. We show a Braess-paradox-like result, that offering these additional edges could sometimes lead to worse equilibrium outcomes. Nevertheless, we prove that under any market conditions, and regardless of agents' strategies, the proposed RCR policy always achieves higher throughput than the baseline policy. Our work highlights the importance of considering the interplay between strategic behavior and capacity allocation policies in service systems.
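As a rough illustration of the two dispatch rules named in the abstract (not of the RCR policy itself, which offers additional edges), the sketch below compares a "use specialized capacity first" rule with the random baseline when all agents report truthfully; the agent names, job types, and the one-type-per-specialized-agent simplification are assumptions made for the example.

```python
import random
from collections import defaultdict

random.seed(0)

def throughput(jobs, agents, reserve_flexible=True):
    """Match a stream of jobs to idle agents.  `agents` maps an agent name to
    the single job type it can serve, or 'any' for a flexible agent.
    reserve_flexible=True mimics the first-best rule (specialized agents first);
    False mimics the baseline that dispatches to any compatible idle agent."""
    idle_special, idle_flex = defaultdict(list), []
    for name, t in agents.items():
        (idle_flex if t == "any" else idle_special[t]).append(name)
    matched = 0
    for job in jobs:
        if reserve_flexible:
            pool = idle_special[job] if idle_special[job] else idle_flex
        else:
            pool = idle_special[job] + idle_flex
        if not pool:
            continue
        agent = random.choice(pool)
        (idle_special[job] if agent in idle_special[job] else idle_flex).remove(agent)
        matched += 1
    return matched

agents = {"s1": "A", "s2": "A", "f1": "any", "f2": "any"}
jobs = ["A", "A", "B", "B"]
print(throughput(jobs, agents, True), throughput(jobs, agents, False))
```

With truthful reports the reservation rule matches all four jobs, while the baseline may spend flexible capacity on the A-jobs and strand a B-job. The strategic issue the abstract focuses on, flexible agents pretending to be specialized when reservation is too aggressive, is precisely what this sketch does not model and what the RCR policy is designed to handle.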
Moshe Babaioff, Sigal Oren (2018)
We study a variant of Vickrey's classic bottleneck model. In our model there are $n$ agents and each agent strategically chooses when to join a first-come-first-served observable queue. Agents dislike standing in line and they take actions in discrete time steps: we assume that each agent has a cost of $1$ for every time step he waits before joining the queue and a cost of $w>1$ for every time step he waits in the queue. At each time step a single agent can be processed. Before each time step, every agent observes the queue and strategically decides whether or not to join, with the goal of minimizing his expected cost. In this paper we focus on symmetric strategies, which are arguably more natural as they require less coordination. This brings up the following twist to the usual price-of-anarchy question: what is the main source of the inefficiency of symmetric equilibria? Is it the players' strategic behavior or the lack of coordination? We present results for two different parameter regimes that are qualitatively very different: (i) when $w$ is fixed and $n$ grows, we prove a tight bound of $2$ and show that the entire loss is due to the players' selfish behavior; (ii) when $n$ is fixed and $w$ grows, we prove a tight bound of $\Theta\left(\sqrt{\frac{w}{n}}\right)$ and show that it is mainly due to lack of coordination: the same order of magnitude of loss is suffered by any symmetric profile.
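The basic cost accounting behind the model can be written out directly. The sketch below compares joining a queue of a given length now against waiting one step outside, under the assumed conventions that the agent's own service step is not counted and that one agent is served per step; how many competitors also join during that step is exactly the coordination uncertainty the abstract discusses.

```python
def cost_join_now(queue_len, w):
    # w per time step spent standing in the queue behind `queue_len` agents;
    # one agent is served per step
    return w * queue_len

def cost_wait_one_step(queue_len, w, new_arrivals=0):
    # pay 1 for the step spent outside; one agent is served meanwhile,
    # but `new_arrivals` other agents may have joined first
    return 1 + w * max(queue_len - 1 + new_arrivals, 0)

w, queue_len = 2.0, 3
for new_arrivals in range(4):
    print(new_arrivals, cost_join_now(queue_len, w),
          cost_wait_one_step(queue_len, w, new_arrivals))
```

If nobody else joins, waiting outside saves $w-1>0$, but every competitor who slips in first adds $w$ back; a symmetric strategy cannot coordinate who waits and who joins, which is the inefficiency the two parameter regimes quantify.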
Social dilemmas exist in various fields and give rise to the so-called free-riding problem, leading to collective fiascos. The difficulty of tracking individual behaviors makes egoistic incentives in large-scale systems a challenging task. However, the state-of-the-art mechanisms are either individual-based or state-dependent, resulting in low efficiency in large-scale networks. In this paper, we propose an egoistic incentive mechanism from a connected (network) perspective rather than an isolated (individual) perspective by taking advantage of the social nature of people. We make use of a zero-determinant (ZD) strategy for rewarding cooperation and sanctioning defection. After proving that cooperation is the dominant strategy for ZD players, we optimize their deployment to facilitate cooperation over the whole system. To further speed up cooperation, we derive a ZD alliance strategy for sequential multiple-player repeated games that gives ZD players greater controllable leverage, which enriches the theory of ZD strategies and broadens their application domain. Our approach is stateless and stable, which contributes to its scalability. Extensive simulations based on real-world trace data as well as synthetic data demonstrate the effectiveness of our proposed egoistic incentive approach under different networking scenarios.
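The zero-determinant mechanism the abstract relies on is easiest to see in the two-player repeated prisoner's dilemma. The sketch below uses the standard Press-Dyson "equalizer" strategy under conventional payoffs (T=5, R=3, P=1, S=0) and checks by simulation that the opponent's long-run average payoff stays near 2 whatever memory-one strategy the opponent uses; the multi-player ZD alliance strategies and their optimized deployment described in the abstract go well beyond this toy case.

```python
import numpy as np

rng = np.random.default_rng(2)

# Payoffs for joint outcomes (CC, CD, DC, DD), first letter = player X's action.
R, S, T, P = 3, 0, 5, 1
PAY_X = np.array([R, S, T, P])
PAY_Y = np.array([R, T, S, P])

# Press-Dyson equalizer for X: probability of cooperating after (CC, CD, DC, DD).
# It pins Y's long-run average payoff at 2 regardless of Y's strategy.
ZD = np.array([2/3, 0.0, 2/3, 1/3])

def avg_payoffs(p_x, p_y, rounds=100_000):
    """Simulate an iterated prisoner's dilemma between two memory-one strategies."""
    state = 0                                        # start from mutual cooperation
    totals = np.zeros(2)
    for _ in range(rounds):
        x = rng.random() < p_x[state]
        y = rng.random() < p_y[[0, 2, 1, 3][state]]  # Y sees CD/DC swapped
        state = (0 if x else 2) + (0 if y else 1)
        totals += (PAY_X[state], PAY_Y[state])
    return totals / rounds

for _ in range(3):
    p_y = rng.random(4)                              # an arbitrary memory-one opponent
    print(np.round(avg_payoffs(ZD, p_y), 3))         # Y's average payoff stays near 2
```

This control over a co-player's payoff is the lever the abstract's mechanism uses to reward cooperation and sanction defection; the paper then optimizes where such ZD players are deployed in the network.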