We conduct a study of hiring bias on a simulation platform where we ask Amazon MTurk participants to make hiring decisions for a mathematically intensive task. Our findings suggest hiring biases against Black workers and less attractive workers, and preferences toward Asian workers, female workers, and more attractive workers. We also show that certain UI designs, including the provision of candidates' information at the individual level and a reduction in the number of choices, can significantly reduce discrimination. However, provision of candidates' information at the subgroup level can increase discrimination. The results have practical implications for designing better online freelance marketplaces.
From massive face-recognition-based surveillance and machine-learning-based decision systems predicting crime recidivism rates, to the move towards automated health diagnostic systems, artificial intelligence (AI) is being used in scenarios that have serious consequences in people's lives. However, this rapid permeation of AI into society has not been accompanied by a thorough investigation of the sociopolitical issues that cause certain groups of people to be harmed rather than advantaged by it. For instance, recent studies have shown that commercial face recognition systems have much higher error rates for dark-skinned women while having minimal errors on light-skinned men. A 2016 ProPublica investigation uncovered that machine-learning-based tools that assess crime recidivism rates in the US are biased against African Americans. Other studies show that natural language processing tools trained on newspapers exhibit societal biases (e.g., completing the analogy "man is to computer programmer as woman is to X" with "homemaker"). At the same time, books such as Weapons of Math Destruction and Automating Inequality detail how people in lower socioeconomic classes in the US are subjected to more automated decision-making tools than those in the upper class. Thus, these tools are most often used on the very people towards whom they exhibit the most bias. While many technical solutions have been proposed to alleviate bias in machine learning systems, a holistic and multifaceted approach is needed. This includes standardization bodies determining which types of systems can be used in which scenarios, ensuring that automated decision tools are created by people from diverse backgrounds, and understanding the historical and political factors that disadvantage the groups who are subjected to these tools.
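As a concrete illustration of the analogy-completion bias described above, the following is a minimal sketch using the gensim library with publicly available pretrained word2vec vectors; the model name and tokens are assumptions for illustration, not the exact setup of the cited studies:

    # Minimal sketch: probing a word embedding for analogy-style bias.
    # Assumes gensim is installed; "word2vec-google-news-300" is a public
    # pretrained model fetched via gensim's downloader (an assumption about
    # tooling, not the cited studies' exact setup).
    import gensim.downloader as api

    model = api.load("word2vec-google-news-300")

    # "man : computer_programmer :: woman : X" as vector arithmetic:
    # computer_programmer - man + woman, then nearest neighbors.
    print(model.most_similar(positive=["woman", "computer_programmer"],
                             negative=["man"], topn=3))
    # On news-trained vectors, "homemaker" tends to rank near the top.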
Programming is a valuable skill in the labor market, making the underrepresentation of women in computing an increasingly important issue. Online question-and-answer platforms serve a dual purpose in this field: they form a body of knowledge useful as a reference and learning tool, and they provide opportunities for individuals to demonstrate credible, verifiable expertise. Issues such as male-oriented site design or the overrepresentation of men among the site's elite may therefore compound women's underrepresentation in IT. In this paper we audit the differences in behavior and outcomes between men and women on Stack Overflow, the most popular of these Q&A sites. We observe significant differences in how men and women participate in the platform and how successful they are. For example, the average woman has roughly half the reputation points, the primary measure of success on the site, of the average man. Using an Oaxaca-Blinder decomposition, an econometric technique commonly applied to analyze wage differences between groups, we find that most of the gap in success between men and women can be explained by differences in their activity on the site and in how these activities are rewarded. Specifically, (1) men give more answers than women and (2) are rewarded more for their answers on average, even when controlling for possible confounders such as tenure or buy-in to the site. Women ask more questions and gain more reward per question. We conclude with a hypothetical redesign of the site's scoring system based on these behavioral differences, cutting the reputation gap in half.
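For reference, the two-fold Oaxaca-Blinder decomposition mentioned above splits the mean gap in an outcome Y (here, reputation) between groups m and w, each fitted with its own linear regression, into an explained and an unexplained part; taking group m's coefficients as the reference is one common convention, and the abstract does not specify which the authors use:

    \bar{Y}_m - \bar{Y}_w = \underbrace{(\bar{X}_m - \bar{X}_w)^{\top}\hat{\beta}_m}_{\text{explained: activity differences}} + \underbrace{\bar{X}_w^{\top}(\hat{\beta}_m - \hat{\beta}_w)}_{\text{unexplained: differences in rewards}}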
The digital traces we leave behind when engaging with the modern world offer an interesting lens through which to study behavioral patterns as an expression of gender. Although gender differentiation has been observed in a number of settings, the majority of studies focus on a single data stream in isolation. Here we use a dataset of high-resolution data collected using mobile phones, as well as detailed questionnaires, to study gender differences in a large cohort. We consider mobility behavior and individual personality traits among a group of more than 800 university students. We also investigate interactions among them expressed via person-to-person contacts, interactions on online social networks, and telecommunication. Thus, we are able to study the differences between male and female behavior captured through a multitude of channels for a single cohort. We find that while the two genders are similar in a number of aspects, there are robust deviations that include multiple facets of social interactions, suggesting the existence of inherent behavioral differences. Finally, we quantify how aspects of an individual's characteristics and social behavior reveal their gender by posing it as a classification problem. We ask: How well can we distinguish between male and female study participants based on behavior alone? Which behavioral features are most predictive?
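A minimal sketch of how such a behavior-based classification could be set up, with scikit-learn; the feature matrix, labels, and model choice are illustrative assumptions, not the study's actual pipeline:

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    # X: one row per participant, columns = behavioral features such as
    # call counts, mobility radius, or online-interaction rates (names are
    # hypothetical); y: binary gender label from the questionnaires.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(800, 20))      # placeholder for real features
    y = rng.integers(0, 2, size=800)    # placeholder labels

    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    scores = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
    print(scores.mean())                # how well behavior predicts gender

    # Feature importances suggest which behavioral channels are most predictive.
    clf.fit(X, y)
    print(clf.feature_importances_)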
Social media, the modern marketplace of ideas, is vulnerable to manipulation. Deceptive, inauthentic actors impersonate humans to amplify misinformation and influence public opinion. Little is known about the large-scale consequences of such operations, due to the ethical challenges posed by online experiments that manipulate human behavior. Here we introduce a model of information spreading where agents prefer quality information but have limited attention. We evaluate the impact of manipulation strategies aimed at degrading the overall quality of the information ecosystem. The model reproduces empirical patterns about the amplification of low-quality information. We find that infiltrating a critical fraction of the network is more damaging than generating attention-grabbing content or targeting influentials. We discuss countermeasures suggested by these insights to increase the resilience of social media users to manipulation, and legal issues arising from regulations aimed at protecting human speech from suppression by inauthentic actors.
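A minimal sketch of an agent-based model in this spirit, where each agent's limited attention is a bounded feed and resharing favors higher-quality messages; the population structure, parameters, and selection rule are all illustrative assumptions, not the cited model's exact specification:

    import random

    random.seed(0)
    N_AGENTS, FEED_LEN, STEPS = 100, 5, 10_000

    # Each agent's "attention" is a feed holding at most FEED_LEN messages,
    # each message represented by its quality in [0, 1].
    feeds = [[] for _ in range(N_AGENTS)]

    for _ in range(STEPS):
        agent = random.randrange(N_AGENTS)
        if feeds[agent] and random.random() < 0.5:
            # Reshare: stochastically prefer higher-quality feed items.
            msg = max(feeds[agent], key=lambda q: q * random.random())
        else:
            msg = random.random()  # post new content with random quality
        target = random.randrange(N_AGENTS)  # fully mixed population
        feeds[target].append(msg)
        del feeds[target][:-FEED_LEN]  # limited attention: keep newest items

    total = sum(len(f) for f in feeds)
    print(sum(sum(f) for f in feeds) / max(1, total))  # mean ecosystem quality
    # Manipulation can be simulated by injecting zero-quality "bot" posts
    # from a controlled fraction of agents and re-measuring mean quality.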
Gender bias, a systemic and unfair difference in how men and women are treated in a given domain, is widely studied across different academic fields. Yet there are barely any studies of the phenomenon in the field of academic information systems (IS), which is surprising, especially in light of the proliferation of such studies across the Science, Technology, Engineering, and Mathematics (STEM) disciplines. To assess potential gender bias in the IS field, this paper outlines a study to estimate the scholarly citations that female IS academics accumulate vis-à-vis their male colleagues. Drawing on a scientometric study of the 7,260 papers published in the most prestigious IS journals (known as the AIS Basket of Eight), our analysis aims to unveil potential bias in the accumulation of citations between genders in the field. We use panel regression to estimate the gendered accumulation of citations in the field. By doing so, we propose to contribute knowledge on a core dimension of gender bias in academia, which is, so far, almost completely unexplored in the IS field.
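A minimal sketch of such a panel regression using the linearmodels package; the file name, variable names, and controls are hypothetical assumptions about the study's data, not its actual specification:

    import pandas as pd
    from linearmodels.panel import PanelOLS

    # Hypothetical panel: one row per (paper, year) with yearly citations,
    # an author-gender indicator, and controls; column names are assumed.
    df = pd.read_csv("citations_panel.csv")
    df = df.set_index(["paper_id", "year"])  # entity and time indices

    # Citations regressed on gender and controls with time fixed effects;
    # entity effects would absorb the time-invariant gender indicator.
    model = PanelOLS.from_formula(
        "citations ~ 1 + female_author + num_authors + journal_rank + TimeEffects",
        data=df,
    )
    result = model.fit(cov_type="clustered", cluster_entity=True)
    print(result.summary)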
Weiwen Leung, Zheng Zhang, Daviti Jibuti, and Lionel Robert. (2020). "Race, Gender and Beauty: The Effect of Information Provision on Online Hiring Biases".