Ranking algorithms are widely employed in online hiring platforms such as LinkedIn, TaskRabbit, and Fiverr. Prior research has demonstrated that the ranking algorithms employed by these platforms are prone to a variety of undesirable biases, leading to the proposal of fair ranking algorithms (e.g., Det-Greedy) which increase the exposure of underrepresented candidates. However, there is little to no work exploring whether fair ranking algorithms actually improve real-world outcomes (e.g., hiring decisions) for underrepresented groups. Furthermore, there is no clear understanding of how other factors (e.g., job context, inherent biases of employers) may affect the efficacy of fair ranking in practice. In this work, we analyze various sources of gender bias in online hiring platforms, including job context and inherent biases of employers, and establish how these factors interact with ranking algorithms to affect hiring decisions. To the best of our knowledge, this work is the first to study the interplay between these factors in the context of online hiring. We carry out a large-scale user study simulating online hiring scenarios with data from TaskRabbit, a popular online freelancing site. Our results demonstrate that while fair ranking algorithms generally improve the selection rates of underrepresented minorities, their effectiveness depends heavily on the job context and candidate profiles.
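As a concrete illustration of the kind of fairness-constrained re-ranking discussed above, the sketch below implements a greedy re-ranker in the spirit of Det-Greedy: at each slot it prefers groups that are below their minimum representation quota, then groups below their maximum quota, and within the eligible groups picks the highest-scoring next candidate. The data layout, the quota formulas, and the function name `fair_rerank` are illustrative assumptions for this sketch, not the exact algorithm evaluated in the study.

```python
import math

def fair_rerank(candidates, target_props, k):
    """Greedy fairness-constrained re-ranking sketch (in the spirit of Det-Greedy).

    candidates:   dict mapping group -> list of (candidate_id, score),
                  each list sorted by score descending.
    target_props: dict mapping group -> desired proportion of exposure.
    k:            number of ranking slots to fill.
    """
    counts = {g: 0 for g in candidates}    # candidates placed so far, per group
    pointers = {g: 0 for g in candidates}  # index of next unplaced candidate, per group
    ranking = []

    for pos in range(1, k + 1):
        # Groups that still have candidates left to place.
        available = [g for g in candidates if pointers[g] < len(candidates[g])]
        if not available:
            break

        # Minimum / maximum representation allowed within the top `pos` slots.
        below_min = [g for g in available
                     if counts[g] < math.floor(pos * target_props[g])]
        below_max = [g for g in available
                     if counts[g] < math.ceil(pos * target_props[g])]

        # Prefer groups under their minimum quota, then groups under their
        # maximum quota, and fall back to any available group.
        pool = below_min or below_max or available

        # Among eligible groups, take the one whose next candidate scores highest.
        best_group = max(pool, key=lambda g: candidates[g][pointers[g]][1])
        ranking.append(candidates[best_group][pointers[best_group]][0])
        counts[best_group] += 1
        pointers[best_group] += 1

    return ranking
```

For equal target proportions over two groups, this produces an alternating-style ranking whenever relevance scores are comparable, which is how exposure of the underrepresented group is increased.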
Due to their promise of superior predictive power relative to human assessment, machine learning models are increasingly being used to support high-stakes decisions. However, the nature of the labels available for training these models often hampers the usefulness of predictive models for decision support. In this paper, we explore the use of historical expert decisions as a rich--yet imperfect--source of information, and we show that it can be leveraged to mitigate some of the limitations of learning from observed labels alone. We consider the problem of estimating expert consistency indirectly when each case in the data is assessed by a single expert, and propose a methodology based on influence functions as a solution to this problem. We then incorporate the estimated expert consistency into the predictive model meant for decision support through an approach we term label amalgamation. This allows machine learning models to learn from experts in instances where there is expert consistency, and to learn from the observed labels elsewhere. We show how the proposed approach can help mitigate common challenges of learning from observed labels alone, reducing the gap between the construct that the algorithm optimizes for and the construct of interest to experts. After providing intuition and theoretical results, we present empirical results in the context of child maltreatment hotline screenings. Here, we find that (1) there are high-risk cases whose risk is considered by the experts but not wholly captured in the target labels used to train a deployed model, and (2) the proposed approach improves recall for these cases.
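To make the label-amalgamation idea more concrete, here is a minimal sketch: given observed labels, historical expert decisions, and an estimated per-case probability that experts would agree, training targets defer to the expert decision where estimated consistency is high and fall back to the observed label elsewhere. The hard threshold, the array layout, and the name `amalgamate_labels` are assumptions made for illustration; the paper's actual formulation may combine the two label sources differently.

```python
import numpy as np

def amalgamate_labels(observed, expert_decisions, consistency, threshold=0.8):
    """Illustrative label amalgamation (not the paper's exact formulation).

    observed:         array of observed outcome labels (0/1).
    expert_decisions: array of historical expert decisions (0/1).
    consistency:      estimated probability that experts would agree on each
                      case (e.g., obtained from an influence-function-based
                      estimator, as described in the abstract).
    threshold:        consistency level above which we defer to the experts.

    Returns training targets that equal the expert decision where estimated
    consistency is high, and the observed label otherwise.
    """
    observed = np.asarray(observed, dtype=float)
    expert_decisions = np.asarray(expert_decisions, dtype=float)
    consistency = np.asarray(consistency, dtype=float)

    defer_to_expert = consistency >= threshold
    return np.where(defer_to_expert, expert_decisions, observed)
```

A soft variant would instead weight the two sources by the estimated consistency itself; the hard threshold here is simply the easiest version to read.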
Crowdsourcing systems aggregate the decisions of many people to help users quickly identify high-quality options, such as the best answers to questions or interesting news stories. A long-standing issue in crowdsourcing is how option quality and human judgement heuristics interact to affect collective outcomes, such as the perceived popularity of options. We address this question by conducting a controlled experiment in which subjects choose between two ranked options whose quality can be independently varied. We use this data to construct a model that quantifies how judgement heuristics and option quality combine when deciding between two options. The model reveals that popularity ranking can be unstable: unless the quality difference between the two options is sufficiently high, the higher-quality option is not guaranteed to eventually be ranked on top. To rectify this instability, we create an algorithm that accounts for judgement heuristics to infer the best option and rank it first. This algorithm is guaranteed to be optimal if the data matches the model. When the data does not match the model, simulations show that in practice the algorithm performs as well as or better than popularity-based and recency-based ranking for any two-choice question. Our work suggests that algorithms relying on inference of mathematical models of user behavior can substantially improve outcomes in crowdsourcing systems.
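The following toy sketch illustrates the kind of inference such an algorithm performs: it assumes a logistic choice model in which an option's probability of being chosen depends on its latent quality advantage plus a fixed log-odds bonus for being ranked on top, and it recovers the quality advantage from observed choices by maximum likelihood. The functional form, the `position_bias` parameter, and the grid-search estimator are illustrative assumptions, not the paper's fitted model.

```python
import math

def infer_better_option(choices, position_bias=1.0):
    """Toy inference of which of two options is better, correcting for a
    ranking (position) heuristic. Illustrative only.

    choices:       list of (chosen, top) pairs, where `chosen` is the option
                   the subject picked and `top` is the option ranked first in
                   that trial; both are 'A' or 'B'.
    position_bias: assumed log-odds bonus an option receives when ranked on top.
    """
    def log_likelihood(delta):
        # delta: latent log-odds quality advantage of option 'A' over 'B'.
        ll = 0.0
        for chosen, top in choices:
            logit = delta + (position_bias if top == "A" else -position_bias)
            p_choose_a = 1.0 / (1.0 + math.exp(-logit))
            ll += math.log(p_choose_a if chosen == "A" else 1.0 - p_choose_a)
        return ll

    # Coarse grid search over the quality advantage.
    best_delta = max((d / 100.0 for d in range(-500, 501)), key=log_likelihood)
    return "A" if best_delta > 0 else "B"
```

A ranking policy built on this estimate would then place the inferred-better option on top, rather than the currently more popular one.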
Two simple and attractive mechanisms for the fair division of indivisible goods in an online setting are LIKE and BALANCED LIKE. We study some fundamental computational problems concerning the outcomes of these mechanisms. In particular, we consider what expected outcomes are possible, what outcomes are necessary, and how to compute their exact outcomes. In general, we show that such questions are more tractable for LIKE than for BALANCED LIKE. As LIKE is strategy-proof but BALANCED LIKE is not, we also consider the computational problem of how, with BALANCED LIKE, an agent can compute a strategic bid to improve their outcome. We prove that this problem is intractable in general.
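For reference, here is a minimal simulation of the two mechanisms, following their standard descriptions: LIKE allocates each arriving item uniformly at random among the agents who like it, while BALANCED LIKE restricts that lottery to the liking agents who currently hold the fewest items. The input format (a sequence of items, each paired with the set of agents that like it) is an assumption made for this sketch.

```python
import random

def like(bids):
    """LIKE mechanism: each arriving item goes to an agent chosen uniformly
    at random among those who like (bid 1 on) it.

    bids: iterable of (item, likers) pairs, where likers is a set of agents.
    """
    allocation = {}
    for item, likers in bids:
        if likers:
            allocation[item] = random.choice(sorted(likers))
    return allocation

def balanced_like(bids):
    """BALANCED LIKE mechanism: each arriving item goes to an agent chosen
    uniformly at random among those who like it and currently hold the
    fewest items."""
    allocation = {}
    counts = {}
    for item, likers in bids:
        if not likers:
            continue
        fewest = min(counts.get(a, 0) for a in likers)
        eligible = sorted(a for a in likers if counts.get(a, 0) == fewest)
        winner = random.choice(eligible)
        allocation[item] = winner
        counts[winner] = counts.get(winner, 0) + 1
    return allocation
```

Running either function repeatedly on the same bid sequence approximates the distribution over outcomes, which is exactly the kind of object the computational questions above ask about.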
We conduct a study of hiring bias on a simulation platform where we ask Amazon MTurk participants to make hiring decisions for a mathematically intensive task. Our findings suggest hiring biases against Black workers and less attractive workers, and preferences towards Asian workers, female workers, and more attractive workers. We also show that certain UI designs, including providing candidates' information at the individual level and reducing the number of choices, can significantly reduce discrimination. However, providing candidates' information at the subgroup level can increase discrimination. The results have practical implications for designing better online freelance marketplaces.
Body measurements, including weight and height, are key indicators of health. Being able to visually assess body measurements reliably is a step towards increased awareness of overweight and obesity and is thus important for public health. Nevertheless, it is currently not well understood how accurately humans can assess weight and height from images, and when and how they fail. To bridge this gap, we start from 1,682 images of persons collected from the Web, each annotated with the true weight and height, and ask crowd workers to estimate the weight and height for each image. We conduct a faceted analysis taking into account characteristics of the images as well as of the crowd workers assessing them, revealing several novel findings: (1) Even after aggregation, the crowd's accuracy is overall low. (2) We find strong evidence of contraction bias toward a reference value, such that the weight (height) of light (short) people is overestimated, whereas that of heavy (tall) people is underestimated. (3) We estimate workers' individual reference values using a Bayesian model, finding that reference values strongly correlate with workers' own height and weight, indicating that workers are better at estimating people similar to themselves. (4) The weight of tall people is underestimated more than that of short people; yet, knowing the height decreases the weight error only mildly. (5) Accuracy is higher on images of females than of males, but female and male workers do not differ in accuracy. (6) Crowd workers improve over time if given feedback on previous guesses. Finally, we explore various bias correction models for improving the crowd's accuracy, but find that this only leads to modest gains. Overall, this work provides important insights on biases in body measurement estimation as obesity-related conditions are on the rise.
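A worked sketch of the contraction bias referred to in finding (2): under the standard linear formulation, an estimate is a convex combination of the true value and a reference value, so the contraction strength and the reference can be recovered from the slope and intercept of a simple linear regression. This is an illustrative formulation and fitting procedure, not the exact Bayesian model used in the paper.

```python
import numpy as np

def fit_contraction_bias(true_values, estimates):
    """Fit the standard linear contraction-bias model
        estimate = (1 - lam) * true + lam * reference + noise
    by ordinary least squares. Illustrative only.

    Returns (lam, reference): lam in [0, 1] measures how strongly estimates
    are pulled toward the reference; reference is the inferred anchor value.
    """
    true_values = np.asarray(true_values, dtype=float)
    estimates = np.asarray(estimates, dtype=float)

    # Regress estimates on true values: estimate = a * true + b.
    a, b = np.polyfit(true_values, estimates, deg=1)

    lam = 1.0 - a                      # slope below 1 indicates contraction
    reference = b / lam if lam != 0 else float("nan")
    return lam, reference
```

A slope below 1 reproduces the observed pattern: values below the reference are overestimated and values above it are underestimated.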