In settings where we ask participants multiple similar, possibly subjective, multiple-choice questions (e.g., Do you like Bulbasaur? Y/N; Do you like Squirtle? Y/N), peer prediction aims to design mechanisms that encourage honest feedback without verification. A series of works have successfully designed multi-task peer prediction mechanisms in which reporting truthfully is better than any other strategy (dominantly truthful), but these mechanisms require an infinite number of tasks. A recent work proposes the first multi-task peer prediction mechanism, the Determinant Mutual Information (DMI)-Mechanism, that is not only dominantly truthful but also works for a finite number of tasks (practical). However, few works consider how to optimize multi-task peer prediction mechanisms. Beyond defining the optimization goal, the biggest challenge is that there is no room for optimization, since only a single practical and dominantly truthful mechanism is known. This work addresses the problem by proposing a tractable effort-incentive optimization goal and by generalizing the DMI-Mechanism to a new family of practical, dominantly truthful mechanisms, Volume Mutual Information (VMI)-Mechanisms. We show that the DMI-Mechanism may not be optimal, but that we can construct a sequence of VMI-Mechanisms that are approximately optimal. The main technical tool is a novel family of mutual information measures, Volume Mutual Information, which generalizes Determinant Mutual Information. We construct VMI from a simple geometric idea: we measure how informative a distribution is by measuring the volume of the distributions that are less informative than it (informally, this is similar to measuring how clever a person is by counting the number of people who are less clever).
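To make the DMI-Mechanism's core quantity concrete, the following is a minimal sketch of Determinant Mutual Information, assuming the standard definition DMI(X; Y) = |det(U_{X,Y})|, where U_{X,Y} is the joint distribution matrix of two agents' signals. The mechanism details (splitting tasks into halves, estimating the joint matrix from reports, and paying each agent in proportion to the estimated DMI) are omitted here; the matrices below are illustrative toy distributions, not part of the original text.

```python
import numpy as np

def dmi(joint: np.ndarray) -> float:
    """Determinant Mutual Information of a joint distribution matrix.

    joint[i, j] = Pr[X = i, Y = j]. DMI(X; Y) = |det(joint)|.
    DMI vanishes whenever the joint matrix is singular; in particular,
    it is zero when X and Y are independent (rank-one joint matrix).
    """
    joint = np.asarray(joint, dtype=float)
    assert joint.shape[0] == joint.shape[1], "signal spaces must match in size"
    assert np.isclose(joint.sum(), 1.0), "entries must sum to 1"
    return abs(np.linalg.det(joint))

# Perfectly correlated binary signals: diagonal joint matrix, DMI > 0.
perfect = np.array([[0.5, 0.0],
                    [0.0, 0.5]])

# Independent binary signals: rank-one outer product of marginals, DMI = 0.
p = np.array([0.5, 0.5])
independent = np.outer(p, p)
```

For the perfectly correlated pair, dmi(perfect) = 0.25, while dmi(independent) = 0, illustrating why a DMI-based payment rewards informative (correlated) reports and gives nothing for uninformative ones.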