
Fair Prediction with Endogenous Behavior

Added by Aaron Roth
Publication date: 2020
Research language: English





There is increasing regulatory interest in whether machine learning algorithms deployed in consequential domains (e.g. criminal justice) treat different demographic groups fairly. However, several notions of fairness have been proposed, and they are typically mutually incompatible. Using criminal justice as an example, we study a model in which society chooses an incarceration rule. Agents from different demographic groups differ in their outside options (e.g. their opportunity for legal employment) and decide whether to commit crimes. We show that equalizing type I and type II error rates across groups is consistent with the goal of minimizing the overall crime rate; other popular notions of fairness are not.
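As a purely illustrative sketch (the parameters, distributions, and signal structure below are assumptions, not the paper's formal model), the following Python snippet shows how group-wise crime rates and type I / type II error rates can be computed when two groups differ only in their outside options:

```python
# Toy simulation in the spirit of the setting above (a hedged sketch, not the
# authors' model): agents commit a crime only if the crime payoff exceeds their
# outside option, society observes a noisy signal of guilt, and an incarceration
# threshold induces group-wise type I / type II error rates. All numbers are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def simulate_group(n, outside_option_mean, crime_payoff=1.0, threshold=0.5):
    # Outside options (e.g. legal employment opportunities) differ across groups.
    outside = rng.normal(outside_option_mean, 0.3, size=n)
    guilty = crime_payoff > outside
    # Noisy signal of guilt observed by society.
    signal = guilty + rng.normal(0.0, 0.4, size=n)
    incarcerated = signal > threshold
    type_1 = np.mean(incarcerated[~guilty])   # innocent but incarcerated
    type_2 = np.mean(~incarcerated[guilty])   # guilty but not incarcerated
    return guilty.mean(), type_1, type_2

for name, mu in [("group A (better outside options)", 1.2),
                 ("group B (worse outside options)", 0.8)]:
    crime_rate, t1, t2 = simulate_group(100_000, mu)
    print(f"{name}: crime rate={crime_rate:.3f}, type I={t1:.3f}, type II={t2:.3f}")
```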



Related research


64 - Yingkai Li 2021
We consider a model in which a data broker sells information to a single agent in order to maximize revenue. The agent has a private valuation for the additional information, and upon receiving the signal from the data broker, she can conduct her own experiment, at additional cost, to refine her posterior belief about the states. We show that in the optimal mechanism the agent has no incentive to acquire any additional costly information in equilibrium. Still, the ability to acquire additional information distorts the agent's incentives and reduces the data broker's optimal revenue. In addition, we show that under a separable valuation assumption there is no distortion at the top, and that posting a deterministic price for fully revealing the states is optimal when the prior distribution is sufficiently informative or the cost of acquiring additional information is sufficiently high, and approximately optimal when the type distribution satisfies the monotone hazard rate condition.
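As a toy illustration of the posted-price benchmark mentioned above (the value distribution and grid search below are assumptions, not the paper's mechanism), one can compute the revenue-maximizing deterministic price for fully revealing the states:

```python
# Minimal sketch of a posted price for full revelation (assumed toy setup): the
# agent's private value v for full information is drawn from some distribution F,
# the broker posts a price p, and revenue is p * Pr[v >= p]. We scan a price grid
# for an illustrative exponential F, which satisfies the monotone hazard rate condition.
import numpy as np

values = np.random.default_rng(1).exponential(scale=1.0, size=100_000)  # toy value draws

def revenue(price, values):
    # The agent buys full revelation iff her value weakly exceeds the posted price.
    return price * np.mean(values >= price)

grid = np.linspace(0.0, 5.0, 501)
best = max(grid, key=lambda p: revenue(p, values))
print(f"best posted price ~ {best:.2f}, revenue ~ {revenue(best, values):.3f}")
```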
76 - John E. Stovall 2021
A firm has a group of workers, each of whom has varying productivities over a set of tasks. After assigning workers to tasks, the firm must decide how to distribute its output among the workers. We first consider three compensation rules and various fairness properties they may satisfy. We show that among efficient and symmetric rules: the Egalitarian rule is the only rule that is invariant to "irrelevant" changes in one worker's productivity; the Individual Contribution rule is the only rule that is invariant to the removal of workers and their assigned tasks; and the Shapley Value rule is the only rule that, for any two workers, equalizes the impact one worker has on the other worker's compensation. We introduce other rules and axioms, and relate each rule to each axiom.
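For concreteness, here is a hedged sketch of the Shapley Value rule applied to a made-up coalitional productivity function (the numbers are illustrative and not drawn from the paper): each worker is paid her average marginal contribution over all orderings of the workers.

```python
# Hedged sketch of the Shapley Value rule: given an (assumed, illustrative) function
# mapping a coalition of workers to the output the firm can produce with them, each
# worker's pay is her average marginal contribution over all orderings of the workers.
from itertools import permutations

def shapley(workers, output):
    pay = {w: 0.0 for w in workers}
    orders = list(permutations(workers))
    for order in orders:
        coalition = set()
        for w in order:
            before = output(frozenset(coalition))
            coalition.add(w)
            pay[w] += output(frozenset(coalition)) - before
    return {w: pay[w] / len(orders) for w in workers}

# Toy example: three workers with superadditive combined productivities.
productivity = {frozenset(): 0, frozenset("a"): 4, frozenset("b"): 3, frozenset("c"): 2,
                frozenset("ab"): 9, frozenset("ac"): 7, frozenset("bc"): 6,
                frozenset("abc"): 12}
print(shapley(["a", "b", "c"], productivity.__getitem__))
```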
We consider transferable-utility profit-sharing games that arise from settings in which agents need to jointly choose one of several alternatives, and may use transfers to redistribute the welfare generated by the chosen alternative. One such setting is the Shared-Rental problem, in which students jointly rent an apartment and must decide which bedroom to allocate to each student, depending on the students' preferences. Many solution concepts have been proposed for such settings, ranging from mechanisms without transfers, such as Random Priority and the Eating mechanism, to mechanisms with transfers, such as envy-free solutions, the Shapley value, and the Kalai-Smorodinsky bargaining solution. We seek a solution concept that satisfies three natural properties concerning efficiency, fairness, and decomposition. We observe that every solution concept known to us fails to satisfy at least one of the three properties. We present a new solution concept designed to satisfy all three. A certain submodularity condition (which holds in interesting special cases such as the Shared-Rental setting) implies both existence and uniqueness of our solution concept.
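The efficiency property can be illustrated with a small sketch; the equal-surplus split below is a naive placeholder for the redistribution step and is not the paper's proposed solution concept:

```python
# Small sketch of the transferable-utility setting above (assumptions: toy
# utilities; the redistribution is a naive equal-surplus split, NOT the paper's
# solution concept). Efficiency means choosing the alternative that maximizes
# total utility; transfers then reallocate the welfare it generates.
utilities = {                      # alternative -> utility of each agent
    "alt1": {"ann": 5, "bob": 1, "cal": 2},
    "alt2": {"ann": 2, "bob": 4, "cal": 3},
}

best_alt = max(utilities, key=lambda a: sum(utilities[a].values()))
total = sum(utilities[best_alt].values())
equal_share = total / len(utilities[best_alt])
transfers = {agent: equal_share - u for agent, u in utilities[best_alt].items()}
print(best_alt, transfers)        # positive transfer = agent receives money
```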
We extend the fair machine learning literature by considering the problem of proportional centroid clustering in a metric context. For clustering $n$ points with $k$ centers, we define fairness as proportionality: any $n/k$ points are entitled to form their own cluster if there is another center that is closer for all of those $n/k$ points. We seek clustering solutions to which there are no such justified complaints from any subset of agents, without assuming any a priori notion of protected subsets. We present and analyze algorithms to efficiently compute, optimize, and audit proportional solutions. We conclude with an empirical examination of the tradeoff between proportional solutions and the $k$-means objective.
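A minimal audit of this proportionality notion can be sketched as follows, under the assumption that every data point is also a candidate center (the data and centers below are synthetic): a candidate witnesses a violation if at least ceil(n/k) points are strictly closer to it than to the centers currently serving them.

```python
# Hedged sketch of auditing a clustering for proportionality (illustrative
# implementation assuming every data point is a candidate center): a candidate y
# witnesses a violation if at least ceil(n/k) points are strictly closer to y
# than to their nearest chosen center.
import math
import numpy as np

def proportionality_violations(points, centers):
    n, k = len(points), len(centers)
    quota = math.ceil(n / k)
    # Distance from each point to its nearest chosen center.
    d_to_centers = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2).min(axis=1)
    witnesses = []
    for y in points:                                   # candidate deviating centers
        closer = np.linalg.norm(points - y, axis=1) < d_to_centers
        if closer.sum() >= quota:
            witnesses.append(y)
    return witnesses

rng = np.random.default_rng(2)
pts = rng.normal(size=(60, 2))
ctrs = pts[rng.choice(60, size=3, replace=False)]      # arbitrary (possibly unfair) centers
print(f"{len(proportionality_violations(pts, ctrs))} witnesses of disproportionality")
```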
Optimal mechanism design enjoys a beautiful and well-developed theory, and also a number of killer applications. Rules of thumb produced by the field influence everything from how governments sell wireless spectrum licenses to how the major search engines auction off online advertising. There are, however, some basic problems for which the traditional optimal mechanism design approach is ill-suited---either because it makes overly strong assumptions, or because it advocates overly complex designs. This survey reviews several common issues with optimal mechanisms, including exorbitant communication, computation, and informational requirements; and it presents several examples demonstrating that passing to the relaxed goal of an approximately optimal mechanism allows us to reason about fundamental questions that seem out of reach of the traditional theory.
