
Optimal Prizes for All-Pay Contests in Heterogeneous Crowdsourcing

 Added by Tony T. Luo
 Publication date 2018
Language: English





Incentives are key to the success of crowdsourcing, which depends heavily on the level of user participation. This paper designs an incentive mechanism to motivate a heterogeneous crowd of users to actively participate in crowdsourcing campaigns. We cast the problem as a new, asymmetric all-pay contest model with incomplete information, where an arbitrary number n of users exert irrevocable effort to compete for a prize tuple. The prize tuple is an array of prize functions, as opposed to the single constant prize typically used in conventional contests. We design an optimal contest that (a) induces the maximum profit (total user effort minus the prize payout) for the crowdsourcer, and (b) ensures that users have a strict incentive to participate. In stark contrast to intuition and prior related work, our mechanism induces an equilibrium in which heterogeneous users behave independently of one another, as if they were in a homogeneous setting. This newly discovered property, which we coin strategy autonomy (SA), is of practical significance: it (a) reduces computational and storage complexity by n-fold for each user, (b) increases the crowdsourcer's revenue by counteracting an effort-reservation effect that exists in asymmetric contests, and (c) neutralizes the (almost universal) law of diminishing marginal returns (DMR). Through an extensive numerical case study, we demonstrate and scrutinize the superior profitability of our mechanism, and draw insights into the SA property.
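To make the setting concrete, below is a minimal numerical sketch of an all-pay contest with a prize tuple, assuming a placeholder bidding strategy (effort proportional to squared ability) and hypothetical prize rates for the top three ranks; neither corresponds to the optimal design derived in the paper. It only illustrates how the crowdsourcer's profit, total effort minus prize payout, would be computed.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_contest(n=5, rounds=10000):
    """Toy all-pay contest: every user sinks their effort; the prizes are
    functions of the ranked users' efforts rather than fixed constants.
    The strategy and prize rates are illustrative placeholders only."""
    profits = []
    for _ in range(rounds):
        ability = rng.uniform(0.5, 1.5, size=n)        # heterogeneous private types
        effort = 0.4 * ability ** 2                    # placeholder bidding strategy
        order = np.argsort(effort)[::-1]               # rank users by effort, highest first
        prize_rates = [0.5, 0.2, 0.1]                  # hypothetical prize tuple (top 3 ranks)
        payout = sum(rate * effort[order[r]]
                     for r, rate in enumerate(prize_rates) if r < n)
        profits.append(effort.sum() - payout)          # profit = total effort - prize payout
    return float(np.mean(profits))

print(f"average crowdsourcer profit: {simulate_contest():.3f}")
```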



Related research

We consider the principal-agent problem with heterogeneous agents. Previous works assume that the principal signs an independent incentive contract with every agent to induce them to invest more effort in the tasks. However, in many circumstances, these contracts need to be identical for the sake of fairness. We investigate the optimal common contract problem; to our knowledge, this is the first attempt to consider this natural and important generalization. We first show that this problem is NP-complete. Then we provide a dynamic programming algorithm to compute the optimal contract in $O(n^2 m)$ time, where $n$ and $m$ are the numbers of agents and actions, respectively, under the assumption that the agents' cost functions obey the increasing-difference property. Finally, we generalize the setting so that each agent can choose to directly produce a reward in $[0,1]$, and provide an $O(\log n)$-approximation algorithm for this generalization.
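As a concrete (and deliberately simplified) illustration of the common-contract setting, the sketch below evaluates candidate identical contracts on a hypothetical two-action instance (low vs. high effort) via a brute-force grid search over the bonus paid for the high action. It is not the $O(n^2 m)$ dynamic program described above; all numbers are assumptions.

```python
def agent_best_action(costs, payments):
    """Under a common contract, each agent picks the action that maximizes
    its payment minus its own cost."""
    return max(range(len(payments)), key=lambda a: payments[a] - costs[a])

def principal_utility(cost_matrix, rewards, payments):
    """Principal's utility: reward generated by each agent's chosen action
    minus the payment owed for it, summed over all (heterogeneous) agents."""
    total = 0.0
    for costs in cost_matrix:
        a = agent_best_action(costs, payments)
        total += rewards[a] - payments[a]
    return total

# Hypothetical instance: 3 agents, 2 actions (low / high effort).
cost_matrix = [[0.0, 0.3], [0.0, 0.6], [0.0, 1.2]]
rewards = [0.2, 1.0]
best = max((principal_utility(cost_matrix, rewards, [0.0, p]), p)
           for p in [x / 100 for x in range(151)])   # grid over the high-action bonus
print(f"best utility {best[0]:.2f} at bonus {best[1]:.2f}")
```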
T. Luo, S. S. Kanhere, H-P. Tan (2017)
Incentive mechanisms for crowdsourcing have been extensively studied under the framework of all-pay auctions. Along a distinct line, this paper proposes to use Tullock contests as an alternative tool for designing incentive mechanisms for crowdsourcing. We are inspired by the conduciveness of Tullock contests to attracting user entry (yet not necessarily a higher revenue) in other domains. In this paper, we explore a new dimension in optimal Tullock contest design by superseding the contest prize, which is fixed in conventional Tullock contests, with a prize function that depends on the (unknown) winner's contribution, in order to maximize the crowdsourcer's utility. We show that this approach leads to attractive practical advantages: (a) it is well suited to rapid prototyping in fully distributed web agents and smartphone apps; (b) it overcomes the disincentive to participate caused by players' antagonism to an increasing number of rivals. Furthermore, we optimize conventional, fixed-prize Tullock contests to construct the strongest benchmark to compare against our mechanism. Through extensive evaluations, we show that our mechanism significantly outperforms the optimal benchmark: by more than threefold on the crowdsourcer's utility (profit) and by up to ninefold on the players' social welfare.
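The core ingredient being modified here is the Tullock contest-success function. The sketch below, a rough illustration rather than the paper's optimized mechanism, computes expected payoffs when the fixed prize is replaced by a prize that depends on the winner's contribution; the linear prize function and effort levels are assumed placeholders.

```python
import numpy as np

def tullock_payoffs(efforts, prize_fn, r=1.0):
    """Tullock contest: user i wins with probability x_i**r / sum_j x_j**r.
    Unlike the conventional fixed-prize contest, the prize awarded to user i
    (if they win) is prize_fn(x_i), a function of their own contribution."""
    efforts = np.asarray(efforts, dtype=float)
    weights = efforts ** r
    p_win = weights / weights.sum()
    # expected payoff: win probability times own (effort-dependent) prize,
    # minus the sunk effort itself
    return p_win * prize_fn(efforts) - efforts

# hypothetical linear prize function and effort profile
print(tullock_payoffs([0.4, 0.8, 1.2], prize_fn=lambda x: 0.5 + 1.5 * x))
```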
Incentives are more likely to elicit desired outcomes when they are designed based on accurate models of agents' strategic behavior. A growing literature, however, suggests that people do not quite behave like standard economic agents in a variety of environments, both online and offline. What consequences might such differences have for the optimal design of mechanisms in these environments? In this paper, we explore this question in the context of optimal contest design for simple agents: agents who strategically reason about whether or not to participate in a system, but not about the input they provide to it. Specifically, consider a contest where $n$ potential contestants with types $(q_i,c_i)$ each choose between participating and producing a submission of quality $q_i$ at cost $c_i$, versus not participating at all, so as to maximize their utilities. How should a principal distribute a total prize $V$ amongst the $n$ ranks to maximize some increasing function of the qualities of elicited submissions in a contest with such simple agents? We first solve the optimal contest design problem for settings with homogeneous participation costs $c_i = c$. Here, the optimal contest is always a simple contest, awarding equal prizes to the top $j^*$ contestants for a suitable choice of $j^*$. (In comparable models with strategic effort choices, the optimal contest is either a winner-take-all contest or awards possibly unequal prizes, depending on the curvature of agents' effort cost functions.) We next address the general case with heterogeneous costs, where agents' types are inherently two-dimensional, which significantly complicates the equilibrium analysis. Our main result here is that the winner-take-all contest is a 3-approximation of the optimal contest when the principal's objective is to maximize the quality of the best elicited contribution.
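The following sketch contrasts a winner-take-all split with equal prizes for the top ranks under the simple-agents model, using a crude deterministic shortcut: each agent enters if the prize at its quality rank among all candidates covers its cost. This ignores the equilibrium feedback analyzed above, and the types and prizes are hypothetical.

```python
def elicited_qualities(types, prizes):
    """Simple agents: an agent of type (q_i, c_i) submits quality q_i if the
    prize at its (naively computed) rank covers its participation cost c_i."""
    order = sorted(range(len(types)), key=lambda i: -types[i][0])   # rank by quality
    entrants = []
    for rank, i in enumerate(order):
        prize = prizes[rank] if rank < len(prizes) else 0.0
        if prize >= types[i][1]:
            entrants.append(types[i][0])
    return entrants

types = [(0.9, 0.2), (0.7, 0.2), (0.5, 0.2), (0.3, 0.2)]   # homogeneous costs c_i = 0.2
V = 1.0
print(elicited_qualities(types, prizes=[V]))                # winner-take-all
print(elicited_qualities(types, prizes=[V / 3] * 3))        # equal prizes to top 3 ranks
```

Under these assumed values, the winner-take-all split elicits only the single best submission, while the equal three-way split elicits three, which is the kind of trade-off the design problem above formalizes.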
Modern machine learning is migrating to the era of complex models, which require a plethora of well-annotated data. While crowdsourcing is a promising tool to achieve this goal, existing crowdsourcing approaches barely acquire a sufficient amount of high-quality labels. In this paper, motivated by the Guess-with-Hints answer strategy from the Millionaire game show, we introduce a hint-guided approach into crowdsourcing to deal with this challenge. Our approach encourages workers to get help from hints when they are unsure of questions. Specifically, we propose a hybrid-stage setting consisting of a main stage and a hint stage. When workers face an uncertain question in the main stage, they are allowed to enter the hint stage and look up hints before giving an answer. A unique payment mechanism that meets two important design principles for crowdsourcing is developed. Moreover, the proposed mechanism encourages high-quality workers to use hints less, which helps identify them and assign larger payments to them. Experiments performed on Amazon Mechanical Turk show that our approach ensures a sufficient number of high-quality labels at low expenditure and detects high-quality workers.
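To fix intuition for the two-stage flow, here is a toy sketch with an entirely hypothetical payment rule (a base payment reduced by a fixed discount whenever a hint is consulted); the paper's actual mechanism follows its own design principles and is not reproduced here.

```python
def worker_payment(correct, used_hint, base=0.10, hint_discount=0.04):
    """Hypothetical payment rule: a correct answer earns the base payment,
    reduced by a discount if a hint was used; wrong answers earn nothing."""
    if not correct:
        return 0.0
    return base - (hint_discount if used_hint else 0.0)

def answer_question(confidence, threshold=0.6):
    """A worker unsure of a question (confidence below the threshold) moves
    from the main stage to the hint stage before answering."""
    used_hint = confidence < threshold
    correct = used_hint or confidence >= threshold   # toy model: hints fix uncertain answers
    return worker_payment(correct, used_hint), used_hint

print(answer_question(0.9))   # confident worker answers in the main stage
print(answer_question(0.4))   # unsure worker consults a hint first, earning less
```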
In an auction, each party bids a certain amount and the one that bids the highest is the winner. Interestingly, auctions can also be used as models for other real-world systems. In an all-pay auction, all parties must pay a forfeit for bidding. In the most commonly studied all-pay auction, parties forfeit their entire bid, and this has been considered as a model for expenditure on political campaigns. Here we consider a number of alternative forfeits, which might serve as models for different real-world competitions, such as preparing bids for defense or infrastructure contracts.
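As a small illustration of how a different forfeit rule changes what losing bidders pay, the sketch below contrasts the classic full-bid forfeit with one hypothetical partial-forfeit rule; the "partial" rule and its 50% factor are assumptions for illustration, not rules taken from the paper.

```python
def all_pay_outcome(bids, forfeit="full"):
    """One-shot all-pay auction under alternative forfeit rules.
    'full'    -- every bidder forfeits their entire bid (the classic model)
    'partial' -- the winner forfeits their bid; losers forfeit only half (hypothetical)
    Returns (winner index, list of net payments); negative means money paid."""
    winner = max(range(len(bids)), key=lambda i: bids[i])
    payments = []
    for i, b in enumerate(bids):
        if forfeit == "full":
            payments.append(-b)
        elif forfeit == "partial":
            payments.append(-b if i == winner else -0.5 * b)
        else:
            raise ValueError(f"unknown forfeit rule: {forfeit}")
    return winner, payments

print(all_pay_outcome([3.0, 5.0, 2.0], forfeit="full"))
print(all_pay_outcome([3.0, 5.0, 2.0], forfeit="partial"))
```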