There is increasing regulatory interest in whether machine learning algorithms deployed in consequential domains (e.g., criminal justice) treat different demographic groups fairly. However, several notions of fairness have been proposed, and they are typically mutually incompatible. Using criminal justice as an example, we study a model in which society chooses an incarceration rule. Agents of different demographic groups differ in their outside options (e.g., opportunities for legal employment) and decide whether to commit crimes. We show that equalizing type I and type II errors across groups is consistent with the goal of minimizing the overall crime rate; other popular notions of fairness are not.
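A rough simulation can make the error-rate bookkeeping concrete. The sketch below is only an illustration under assumed distributions (Gaussian outside options, a noisy scalar guilt signal, a fixed incarceration threshold); none of these choices come from the paper's actual model. It simulates two groups that differ in mean outside option and reports each group's crime rate, type I error (innocents incarcerated), and type II error (offenders released).

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_group(n, outside_mean, crime_payoff=1.0,
                   signal_noise=0.5, threshold=0.5):
    # Illustrative assumptions throughout: Gaussian outside options, a noisy
    # scalar guilt signal, and a fixed-threshold incarceration rule.
    outside = rng.normal(outside_mean, 1.0, size=n)  # value of legal employment
    guilty = crime_payoff > outside                  # agent's crime decision
    signal = guilty.astype(float) + rng.normal(0.0, signal_noise, size=n)
    incarcerated = signal > threshold                # society's incarceration rule
    # Type I error: innocent agents who are incarcerated.
    type1 = (incarcerated & ~guilty).sum() / max((~guilty).sum(), 1)
    # Type II error: guilty agents who go free.
    type2 = (~incarcerated & guilty).sum() / max(guilty.sum(), 1)
    return guilty.mean(), type1, type2

for name, mean in [("group A", 0.5), ("group B", 1.5)]:
    crime, t1, t2 = simulate_group(100_000, mean)
    print(f"{name}: crime rate={crime:.3f}, type I={t1:.3f}, type II={t2:.3f}")
```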
We consider a model in which a data broker sells information to a single agent so as to maximize revenue. The agent has a private valuation for the additional information, and upon receiving the signal from the data broker, she can conduct her own experiment to refine her posterior belief.
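As a minimal worked example of the revenue side only (textbook monopoly pricing, not the paper's mechanism), suppose the broker posts a single price $p$ for its signal and the agent's private valuation is drawn uniformly from $[0, 1]$; the broker then maximizes $p(1 - F(p))$:

```python
import numpy as np

# Textbook posted-price sketch, not the paper's mechanism: an agent with
# private valuation v ~ Uniform[0, 1] buys the signal iff v >= p, so the
# broker's expected revenue is p * (1 - F(p)) = p * (1 - p).
prices = np.linspace(0.0, 1.0, 1001)
revenue = prices * (1.0 - prices)
best = prices[np.argmax(revenue)]
print(f"optimal posted price {best:.3f}, expected revenue {revenue.max():.4f}")
# First-order condition: d/dp [p(1 - p)] = 1 - 2p = 0, so p* = 1/2, revenue 1/4.
```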
A firm has a group of workers, each of whom has varying productivities over a set of tasks. After assigning workers to tasks, the firm must decide how to distribute its output among the workers. We first consider three compensation rules and various …
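Since the abstract is truncated before the three rules are named, the following is only a generic sketch of the setup: workers are matched to tasks to maximize total output via `scipy.optimize.linear_sum_assignment`, and two hypothetical compensation rules (an equal split and a split proportional to each worker's own output) divide the result. The productivity matrix and both rules are illustrative assumptions, not the paper's.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# productivities[i, j]: worker i's output on task j (assumed numbers).
productivities = np.array([
    [4.0, 2.0, 1.0],
    [3.0, 5.0, 2.0],
    [1.0, 3.0, 4.0],
])

# Match workers to tasks to maximize total output.
workers, tasks = linear_sum_assignment(productivities, maximize=True)
output = productivities[workers, tasks]
total = output.sum()

# Two hypothetical compensation rules (assumptions, not the paper's):
equal_split = np.full(len(workers), total / len(workers))
proportional = total * output / output.sum()  # proportional to own output

for w, t in zip(workers, tasks):
    print(f"worker {w} -> task {t}: output {productivities[w, t]:.1f}, "
          f"equal pay {equal_split[w]:.2f}, proportional pay {proportional[w]:.2f}")
```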
We consider transferable-utility profit-sharing games that arise in settings where agents must jointly choose one of several alternatives and may use transfers to redistribute the welfare generated by the chosen alternative. One such setting …
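A minimal sketch of the transferable-utility idea, with assumed numbers: agents pick the alternative that maximizes total welfare and then use budget-balanced transfers to redistribute it. The equal split below is just one illustrative sharing rule, not a rule from the paper.

```python
# Assumed numbers: values[a][i] is agent i's value for alternative a.
values = {
    "A": [6.0, 1.0, 1.0],
    "B": [3.0, 3.0, 3.5],
}

# Jointly choose the welfare-maximizing alternative.
best = max(values, key=lambda a: sum(values[a]))
welfare = sum(values[best])

# One illustrative redistribution (not the paper's rule): equal split
# implemented via budget-balanced transfers.
share = welfare / len(values[best])
transfers = [share - v for v in values[best]]  # positive = agent receives money

print(f"chosen alternative {best}, total welfare {welfare}")
for i, (v, t) in enumerate(zip(values[best], transfers)):
    print(f"agent {i}: value {v}, transfer {t:+.2f}, final payoff {share:.2f}")
assert abs(sum(transfers)) < 1e-9  # transfers redistribute welfare, never create it
```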
We extend the fair machine learning literature by considering the problem of proportional centroid clustering in a metric context. For clustering $n$ points with $k$ centers, we define fairness as proportionality to mean that any $n/k$ points are entitled to form their own cluster if there is another center that is closer in distance for all $n/k$ points.
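This definition can be audited directly: a set of $k$ centers is proportional if no coalition of at least $\lceil n/k \rceil$ points could all get strictly closer by deviating to some other candidate center. Below is a small sketch of that check, taking candidate centers to be the data points themselves (an assumption for illustration; the paper's algorithmic results are not reproduced here).

```python
import math
import numpy as np

def is_proportional(points, centers):
    """Return True if no coalition of >= ceil(n/k) points is strictly closer
    to some candidate center (here: any data point) than to every chosen
    center. A direct check of the definition, not the paper's algorithm."""
    n, k = len(points), len(centers)
    quota = math.ceil(n / k)
    # Each point's distance to its nearest chosen center.
    nearest = np.min(
        np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2), axis=1
    )
    for y in points:  # candidate deviating centers
        closer = np.linalg.norm(points - y, axis=1) < nearest
        if closer.sum() >= quota:
            return False  # blocking coalition of size >= n/k found
    return True

rng = np.random.default_rng(0)
pts = rng.random((60, 2))
ctrs = pts[rng.choice(len(pts), size=3, replace=False)]
print("proportional:", is_proportional(pts, ctrs))
```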
Optimal mechanism design enjoys a beautiful and well-developed theory, and also a number of killer applications. Rules of thumb produced by the field influence everything from how governments sell wireless spectrum licenses to how the major search engines auction off online advertising.