A firm has a group of workers, each of whom has varying productivity over a set of tasks. After assigning workers to tasks, the firm must decide how to distribute its output among the workers. We first consider three compensation rules and various fairness properties they may satisfy. We show that among efficient and symmetric rules: the Egalitarian rule is the only rule that is invariant to ``irrelevant'' changes in one worker's productivity; the Individual Contribution rule is the only rule that is invariant to the removal of workers and their assigned tasks; and the Shapley Value rule is the only rule that, for any two workers, equalizes the impact one worker has on the other worker's compensation. We introduce other rules and axioms, and relate each rule to each axiom.
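As a concrete illustration, the Shapley Value rule mentioned above can be computed directly from a coalitional value function. The sketch below uses the standard Shapley formula; the two-worker firm, worker names, and output numbers are hypothetical examples, not taken from the paper:

```python
from itertools import combinations
from math import factorial

def shapley_values(workers, value):
    """Shapley payoff for each worker, given a set function `value`
    mapping a frozenset of workers to the output they jointly produce."""
    n = len(workers)
    payoffs = {}
    for w in workers:
        others = [x for x in workers if x != w]
        total = 0.0
        for r in range(n):
            for coalition in combinations(others, r):
                S = frozenset(coalition)
                # probability that exactly S precedes w in a random ordering
                weight = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                total += weight * (value(S | {w}) - value(S))
        payoffs[w] = total
    return payoffs

# Hypothetical two-worker firm: alone they produce 4 and 2 units,
# together 10 (complementarities raise the joint output).
v = {frozenset(): 0, frozenset({'a'}): 4, frozenset({'b'}): 2,
     frozenset({'a', 'b'}): 10}
pay = shapley_values(['a', 'b'], lambda S: v[frozenset(S)])
# pay == {'a': 6.0, 'b': 4.0}; payoffs sum to the full output (efficiency)
```

Each worker receives their average marginal contribution over all arrival orders, which is what makes the rule symmetric and efficient.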
There is increasing regulatory interest in whether machine learning algorithms deployed in consequential domains (e.g. in criminal justice) treat different demographic groups fairly. However, there are several proposed notions of fairness, typically mutually incompatible. Using criminal justice as an example, we study a model in which society chooses an incarceration rule. Agents of different demographic groups differ in their outside options (e.g. opportunity for legal employment) and decide whether to commit crimes. We show that equalizing type I and type II errors across groups is consistent with the goal of minimizing the overall crime rate; other popular notions of fairness are not.
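The fairness notion studied here, equalizing type I and type II errors across groups, can be audited directly from labels and decisions. A minimal sketch, with hypothetical toy data for two groups (it is not from the paper):

```python
def error_rates(y_true, y_pred):
    """Type I (false positive) and type II (false negative) rates.
    Labels: 1 = committed a crime; predictions: 1 = incarcerate."""
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    neg = sum(1 for t in y_true if t == 0)
    pos = sum(1 for t in y_true if t == 1)
    return (fp / neg if neg else 0.0, fn / pos if pos else 0.0)

# Hypothetical outcomes for two demographic groups
ga_true, ga_pred = [1, 1, 0, 0], [1, 0, 1, 0]
gb_true, gb_pred = [1, 1, 0, 0], [1, 0, 0, 1]
rates_a = error_rates(ga_true, ga_pred)
rates_b = error_rates(gb_true, gb_pred)
# both groups have a 0.5 false-positive and 0.5 false-negative rate,
# so this incarceration rule equalizes type I and type II errors
```

Equalized error rates of this kind are exactly the criterion the paper shows to be compatible with minimizing the overall crime rate.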
We consider transferable-utility profit-sharing games that arise from settings in which agents need to jointly choose one of several alternatives, and may use transfers to redistribute the welfare generated by the chosen alternative. One such setting is the Shared-Rental problem, in which students jointly rent an apartment and need to decide which bedroom to allocate to each student, depending on the students' preferences. Many solution concepts have been proposed for such settings, ranging from mechanisms without transfers, such as Random Priority and the Eating mechanism, to mechanisms with transfers, such as envy-free solutions, the Shapley value, and the Kalai-Smorodinsky bargaining solution. We seek a solution concept that satisfies three natural properties, concerning efficiency, fairness, and decomposition. We observe that every solution concept known (to us) fails to satisfy at least one of the three properties. We present a new solution concept, designed so as to satisfy all three. A certain submodularity condition (which holds in interesting special cases such as the Shared-Rental setting) implies both existence and uniqueness of our solution concept.
In this paper, we propose FairNN, a neural network that performs joint feature representation and classification for fairness-aware learning. Our approach optimizes a multi-objective loss function that (a) learns a fair representation by suppressing protected attributes, (b) maintains the information content by minimizing a reconstruction loss, and (c) solves a classification task in a fair manner by minimizing the classification error while respecting an equalized-odds-based fairness regularizer. Our experiments on a variety of datasets demonstrate that such a joint approach is superior to the separate treatment of unfairness in representation learning or supervised learning. Additionally, our regularizers can be adaptively weighted to balance the different components of the loss function, thus allowing for a very general framework for conjoint fair representation learning and decision making.
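The multi-objective loss described above can be sketched as a weighted sum of a reconstruction term, a classification term, and an equalized-odds gap. This is an illustrative NumPy sketch under our own assumptions, not the paper's implementation; the weight names `alpha`, `beta`, `gamma` and the particular gap measure are hypothetical:

```python
import numpy as np

def fairnn_style_loss(x, x_rec, y, y_prob, s, alpha=1.0, beta=1.0, gamma=1.0):
    """Sketch of a FairNN-style multi-objective loss (hypothetical weights):
    - reconstruction: mean squared error between input and reconstruction
    - classification: binary cross-entropy
    - fairness: equalized-odds gap, i.e. the difference in mean predicted
      probability between protected groups, computed within each true class."""
    eps = 1e-12
    rec = np.mean((x - x_rec) ** 2)
    ce = -np.mean(y * np.log(y_prob + eps) + (1 - y) * np.log(1 - y_prob + eps))
    gap = 0.0
    for cls in (0, 1):
        m0 = (y == cls) & (s == 0)
        m1 = (y == cls) & (s == 1)
        if m0.any() and m1.any():
            gap += abs(y_prob[m0].mean() - y_prob[m1].mean())
    return alpha * rec + beta * ce + gamma * gap

# Tiny hypothetical batch: perfect reconstruction, balanced predictions
x = np.array([[1.0, 2.0], [3.0, 4.0]])
x_rec = x.copy()
y = np.array([0, 1])
y_prob = np.array([0.2, 0.8])
s = np.array([0, 1])
loss = fairnn_style_loss(x, x_rec, y, y_prob, s)
```

Reweighting `alpha`, `beta`, and `gamma` corresponds to the adaptive balancing of loss components mentioned in the abstract.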
We extend the fair machine learning literature by considering the problem of proportional centroid clustering in a metric context. For clustering $n$ points with $k$ centers, we define fairness as proportionality: any $n/k$ points are entitled to form their own cluster if there is another center that is closer in distance for all of these $n/k$ points. We seek clustering solutions to which there are no such justified complaints from any subset of agents, without assuming any a priori notion of protected subsets. We present and analyze algorithms to efficiently compute, optimize, and audit proportional solutions. We conclude with an empirical examination of the tradeoff between proportional solutions and the $k$-means objective.
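The proportionality condition above can be audited by brute force over a finite candidate set: a solution is violated if some unopened candidate center is strictly closer, for at least $\lceil n/k \rceil$ points, than every open center. This is a naive sketch of the audit idea, not the paper's efficient algorithm:

```python
from math import ceil, dist

def proportionality_violated(points, centers, candidates, k):
    """Brute-force audit sketch: a candidate center y 'blocks' the solution
    if at least ceil(n/k) points are strictly closer to y than to every
    open center, i.e. those points have a justified complaint."""
    n = len(points)
    threshold = ceil(n / k)
    for y in candidates:
        closer = sum(1 for p in points
                     if dist(p, y) < min(dist(p, c) for c in centers))
        if closer >= threshold:
            return True
    return False

# Hypothetical instance: two tight groups of points, k = 2
pts = [(0, 0), (0, 1), (10, 0), (10, 1)]
good = proportionality_violated(pts, [(0, 0.5), (10, 0.5)], [(5, 0.5)], k=2)
bad = proportionality_violated(pts, [(0, 0.5), (1, 0.5)], [(10, 0.5)], k=2)
# good is False (no justified complaint); bad is True (the right-hand
# group of 2 points would all prefer the unopened center at (10, 0.5))
```

Enumerating candidates naively costs $O(|candidates| \cdot n \cdot k)$ per audit; the paper's contribution includes doing this efficiently.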
We propose a general variational framework for fair clustering, which integrates an original Kullback-Leibler (KL) fairness term with a large class of clustering objectives, including prototype-based and graph-based ones. Fundamentally different from the existing combinatorial and spectral solutions, our variational multi-term approach enables control of the trade-off between the fairness and clustering objectives. We derive a general tight upper bound based on a concave-convex decomposition of our fairness term, its Lipschitz-gradient property, and Pinsker's inequality. This tight upper bound can be jointly optimized with various clustering objectives, while yielding a scalable solution with a convergence guarantee. Interestingly, at each iteration, it performs an independent update for each assignment variable; therefore, it can be easily distributed for large-scale datasets. This scalability is important, as it enables exploring different trade-off levels between the fairness and clustering objectives. Unlike spectral relaxation, our formulation does not require an eigenvalue decomposition. We report comprehensive evaluations and comparisons with state-of-the-art methods on various fair-clustering benchmarks, which show that our variational formulation yields highly competitive solutions in terms of both fairness and clustering objectives.
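One common way to make a KL fairness term concrete is to compare, for each cluster, the demographic mix of its (soft) members against target proportions. The sketch below is our own illustrative interpretation of such a term, under stated assumptions; it is not the paper's exact formulation:

```python
import numpy as np

def kl_fairness(assign, groups, target):
    """Illustrative KL fairness term (a sketch, not the paper's exact term):
    for each cluster, compute the demographic mix of its soft members and
    penalize its KL divergence from the target proportions.
    assign: (n, k) soft assignment matrix; groups: (n,) group ids in
    0..len(target)-1; target: (g,) desired group proportions."""
    eps = 1e-12
    k = assign.shape[1]
    total = 0.0
    for c in range(k):
        w = assign[:, c]
        mix = np.array([w[groups == g].sum() for g in range(len(target))])
        mix = mix / (mix.sum() + eps)
        total += np.sum(target * np.log((target + eps) / (mix + eps)))
    return total

groups = np.array([0, 0, 1, 1])
target = np.array([0.5, 0.5])
# Hypothetical hard assignments: one mixes the groups, one segregates them
balanced = np.array([[1, 0], [0, 1], [1, 0], [0, 1]], dtype=float)
skewed = np.array([[1, 0], [1, 0], [0, 1], [0, 1]], dtype=float)
fair_balanced = kl_fairness(balanced, groups, target)   # near zero
fair_skewed = kl_fairness(skewed, groups, target)       # large penalty
```

Because the term decomposes over assignment variables once bounded, it fits the independent per-variable updates that make the paper's approach scalable.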