Incentives are more likely to elicit desired outcomes when they are designed based on accurate models of agents' strategic behavior. A growing literature, however, suggests that people do not quite behave like standard economic agents in a variety of environments, both online and offline. What consequences might such differences have for the optimal design of mechanisms in these environments? In this paper, we explore this question in the context of optimal contest design for simple agents---agents who strategically reason about whether or not to participate in a system, but not about the input they provide to it. Specifically, consider a contest where $n$ potential contestants with types $(q_i,c_i)$ each choose whether to participate, producing a submission of quality $q_i$ at cost $c_i$, or not to participate at all, so as to maximize their utilities. How should a principal distribute a total prize $V$ amongst the $n$ ranks to maximize some increasing function of the qualities of elicited submissions in a contest with such simple agents? We first solve the optimal contest design problem for settings with homogeneous participation costs $c_i = c$. Here, the optimal contest is always a simple contest, awarding equal prizes to the top $j^*$ contestants for a suitable choice of $j^*$. (In comparable models with strategic effort choices, the optimal contest is either a winner-take-all contest or awards possibly unequal prizes, depending on the curvature of agents' effort cost functions.) We next address the general case with heterogeneous costs, where agents' types are inherently two-dimensional, significantly complicating equilibrium analysis. Our main result here is that the winner-take-all contest is a 3-approximation of the optimal contest when the principal's objective is to maximize the quality of the best elicited contribution.
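To make the payoff structure concrete, here is a minimal full-information Python sketch (function names are ours and hypothetical; the paper's analysis is game-theoretic and may involve richer informational assumptions). It computes each agent's utility in a simple contest that splits the prize $V$ equally among the top $j$ participants by quality, and checks whether a profile of participation decisions is an equilibrium under unilateral deviations:

    def simple_contest_payoffs(qualities, costs, participate, V, j):
        # Rank entrants by quality; ties broken by index (a modeling convention here).
        n = len(qualities)
        entrants = sorted((i for i in range(n) if participate[i]),
                          key=lambda i: -qualities[i])
        winners = set(entrants[:j])  # top-j entrants each receive V / j
        payoffs = []
        for i in range(n):
            if not participate[i]:
                payoffs.append(0.0)  # non-participants pay no cost and win nothing
            else:
                prize = V / j if i in winners else 0.0
                payoffs.append(prize - costs[i])
        return payoffs

    def is_equilibrium(qualities, costs, participate, V, j):
        # No agent should gain by unilaterally flipping its participation choice.
        base = simple_contest_payoffs(qualities, costs, participate, V, j)
        for i in range(len(qualities)):
            flipped = list(participate)
            flipped[i] = not flipped[i]
            if simple_contest_payoffs(qualities, costs, flipped, V, j)[i] > base[i]:
                return False
        return True

With homogeneous costs, costs = [c] * n, one can enumerate values of $j$ and participation profiles against is_equilibrium to see which simple contest maximizes a given objective over equilibria.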
We consider the problem of designing a survey to aggregate non-verifiable information from a privacy-sensitive population: an analyst wants to compute some aggregate statistic from the private bits held by each member of a population, but cannot verify the correctness of the bits reported by participants in his survey. Individuals in the population are strategic agents with a cost for privacy, i.e., they account not only for the payments they expect to receive from the mechanism, but also for their privacy costs from any information revealed about them by the mechanism's outcome---the computed statistic as well as the payments---to determine their utilities. How can the analyst design payments to obtain an accurate estimate of the population statistic when individuals strategically decide both whether to participate and whether to truthfully report their sensitive information? We design a differentially private peer-prediction mechanism that supports accurate estimation of the population statistic as a Bayes-Nash equilibrium in settings where agents have explicit preferences for privacy. The mechanism requires knowledge of the marginal prior distribution on bits $b_i$, but does not need full knowledge of the marginal distribution on the costs $c_i$, instead requiring only an approximate upper bound. Our mechanism guarantees $\epsilon$-differential privacy to each agent $i$ against any adversary who can observe the statistical estimate output by the mechanism, as well as the payments made to the $n-1$ other agents $j \neq i$. Finally, we show that with slightly more structured assumptions on the privacy cost functions of each agent, the cost of running the survey goes to $0$ as the number of agents diverges.
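The paper's mechanism couples peer-prediction payments with differential privacy; as a much simpler illustration of the underlying privacy/accuracy trade-off, here is a randomized-response sketch in Python (this is not the paper's payment mechanism, only a standard way to make reported bits $\epsilon$-differentially private while still allowing an unbiased estimate of the population mean):

    import numpy as np

    def randomized_response(bit, epsilon, rng):
        # Report truthfully with probability e^eps / (1 + e^eps), else flip:
        # the likelihood ratio of any report is at most e^eps, giving eps-DP.
        p_truth = np.exp(epsilon) / (1.0 + np.exp(epsilon))
        return bit if rng.random() < p_truth else 1 - bit

    def estimate_mean(reports, epsilon):
        # Debias: E[report] = b * (2p - 1) + (1 - p), so invert the affine map.
        p = np.exp(epsilon) / (1.0 + np.exp(epsilon))
        return (np.mean(reports) - (1.0 - p)) / (2.0 * p - 1.0)

    rng = np.random.default_rng(0)
    true_bits = rng.integers(0, 2, size=10_000)
    noisy = [randomized_response(b, epsilon=1.0, rng=rng) for b in true_bits]
    print(estimate_mean(noisy, epsilon=1.0), true_bits.mean())

The estimator's error shrinks as the population grows, which mirrors the abstract's claim that (under structured privacy-cost assumptions) the survey's cost vanishes as the number of agents diverges.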
A mechanism for releasing information about a statistical database with sensitive data must resolve a trade-off between utility and privacy. Privacy can be rigorously quantified using the framework of {\em differential privacy}, which requires that a mechanism's output distribution is nearly the same whether or not a given database row is included or excluded. The goal of this paper is strong and general utility guarantees, subject to differential privacy. We pursue mechanisms that guarantee near-optimal utility to every potential user, independent of its side information (modeled as a prior distribution over query results) and preferences (modeled via a loss function). Our main result is that for each fixed count query and differential privacy level, there is a {\em geometric mechanism} $M^*$---a discrete variant of the simple and well-studied Laplace mechanism---that is {\em simultaneously expected loss-minimizing} for every possible user, subject to the differential privacy constraint. This is an extremely strong utility guarantee: {\em every} potential user $u$, no matter what its side information and preferences, derives as much utility from $M^*$ as from interacting with a differentially private mechanism $M_u$ that is optimally tailored to $u$.
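The geometric mechanism itself is easy to state: add two-sided geometric noise with ratio $\alpha = e^{-\epsilon}$ to the true count. A minimal Python sketch (function name is ours; the sampler uses the standard fact that the difference of two i.i.d. geometric variables is two-sided geometric):

    import numpy as np

    def geometric_mechanism(true_count, epsilon, rng=None):
        # Two-sided geometric noise: Pr[Z = z] = (1 - a)/(1 + a) * a**|z|
        # with a = e^{-eps}; for a sensitivity-1 count query this is eps-DP.
        rng = rng or np.random.default_rng()
        alpha = np.exp(-epsilon)
        p = 1.0 - alpha
        # Difference of two i.i.d. geometric variables is two-sided geometric.
        noise = int(rng.geometric(p)) - int(rng.geometric(p))
        return true_count + noise

Unlike the continuous Laplace mechanism, the output stays integral; the paper's optimality result says each user $u$ can recover the utility of its tailored mechanism $M_u$ by post-processing $M^*$'s output according to its own prior and loss function.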