
User Acceptance of Gender Stereotypes in Automated Career Recommendations

Added by Clarice Wang
Publication date: 2021
Language: English





Currently, there is a surge of interest in fair Artificial Intelligence (AI) and Machine Learning (ML) research that aims to mitigate discriminatory bias in AI algorithms, e.g. along lines of gender, age, and race. While most research in this domain focuses on developing fair AI algorithms, in this work we show that a fair AI algorithm on its own may be insufficient to achieve its intended results in the real world. Using career recommendation as a case study, we build a fair AI career recommender by employing gender-debiasing machine learning techniques. Our offline evaluation showed that the debiased recommender makes fairer career recommendations without sacrificing its accuracy. Nevertheless, an online user study of more than 200 college students revealed that participants on average prefer the original biased system over the debiased system. Specifically, we found that perceived gender disparity is a determining factor in the acceptance of a recommendation. In other words, our results demonstrate that we cannot fully address the gender bias issue in AI recommendations without addressing the gender bias in humans.
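The abstract does not specify which debiasing technique was used, but a minimal sketch of the kind of offline fairness check involved might look as follows. The function name, the toy data, and the choice of demographic parity as the metric are all illustrative assumptions, not the paper's method.

```python
import numpy as np

def demographic_parity_gap(recommended, gender):
    """Absolute difference in the rate at which a career is
    recommended to each of two gender groups (0 = parity)."""
    recommended = np.asarray(recommended, dtype=bool)
    gender = np.asarray(gender)
    groups = np.unique(gender)
    rates = [recommended[gender == g].mean() for g in groups]
    return abs(rates[0] - rates[1])

# Toy data: 1 means "engineering" was recommended to that user.
rec = [1, 0, 0, 1, 1, 1, 0, 1]
gen = ["F", "F", "F", "F", "M", "M", "M", "M"]
gap = demographic_parity_gap(rec, gen)
print(gap)  # 0.5 recommendation rate for F vs. 0.75 for M
```

A debiased recommender would be expected to shrink such a gap on held-out data while keeping accuracy comparable.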




Related research

Designing an effective loss function plays a crucial role in training deep recommender systems. Most existing works leverage a predefined, fixed loss function, which can lead to suboptimal recommendation quality and training efficiency. Some recent efforts rely on exhaustively or manually searched weights to fuse a group of candidate loss functions, which is exceptionally costly in computation and time. They also neglect the varied convergence behaviors of different data examples. In this work, we propose an AutoLoss framework that can automatically and adaptively search for the appropriate loss function from a set of candidates. Specifically, we develop a novel controller network that dynamically adjusts the loss probabilities in a differentiable manner. Unlike existing algorithms, the proposed controller can adaptively generate the loss probabilities for different data examples according to their varied convergence behaviors. This design improves the model's generalizability and transferability between deep recommender systems and datasets. We evaluate the proposed framework on two benchmark datasets. The results show that AutoLoss outperforms representative baselines. Further experiments deepen our understanding of AutoLoss, including its transferability, components, and training efficiency.
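The core mechanism described above, per-example loss probabilities blending candidate losses differentiably, can be sketched with a softmax over controller outputs. This is an assumed simplification in NumPy; the paper's actual controller is a trained neural network, and the shapes and names here are illustrative.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def blended_loss(candidate_losses, controller_logits):
    """candidate_losses: (batch, n_candidates) per-example values of
    each candidate loss (e.g. MSE and MAE); controller_logits: same
    shape, produced by the controller. Returns the scalar loss."""
    probs = softmax(controller_logits, axis=1)  # per-example weights
    return float((probs * candidate_losses).sum(axis=1).mean())

losses = np.array([[0.9, 0.4],
                   [0.2, 0.8]])
logits = np.array([[0.0, 0.0],    # uniform blend for example 0
                   [2.0, -2.0]])  # strongly prefers candidate 0
print(round(blended_loss(losses, logits), 4))
```

Because the softmax is differentiable, gradients flow back into the controller, letting it adapt the weighting to each example's convergence behavior rather than using one fixed fusion.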
Academic fields exhibit substantial levels of gender segregation. To date, most attempts to explain this persistent global phenomenon have relied on limited cross-sections of data from specific countries, fields, or career stages. Here we used a global longitudinal dataset assembled from profiles on ORCID.org to investigate which characteristics of a field predict gender differences among the academics who leave and join that field. Only two field characteristics consistently predicted such differences: (1) the extent to which a field values raw intellectual talent (brilliance) and (2) whether a field is in Science, Technology, Engineering, and Mathematics (STEM). Women more than men moved away from brilliance-oriented and STEM fields, and men more than women moved toward these fields. Our findings suggest that stereotypes associating brilliance and other STEM-relevant traits with men more than women play a key role in maintaining gender segregation across academia.
Our analysis of thousands of movies and books reveals how these cultural products weave stereotypical gender roles into morality tales and perpetuate gender inequality through storytelling. Using word embedding techniques, we reveal the constructed emotional dependency of female characters on male characters in stories.
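One common embedding-based analysis of this kind projects word vectors onto a gender direction (e.g. she − he) and reads the sign as a gendered association. The tiny vectors below are fabricated purely for illustration, not trained embeddings, and this is only a plausible sketch of the technique class, not the study's exact pipeline.

```python
import numpy as np

# Fabricated 3-d "embeddings" for illustration only.
emb = {
    "he":   np.array([ 1.0, 0.2, 0.0]),
    "she":  np.array([-1.0, 0.2, 0.0]),
    "hero": np.array([ 0.6, 0.5, 0.1]),
    "love": np.array([-0.4, 0.7, 0.2]),
}

def gender_score(word):
    """Cosine similarity with the (she - he) direction: positive
    means closer to 'she', negative means closer to 'he'."""
    direction = emb["she"] - emb["he"]
    v = emb[word]
    return float(np.dot(v, direction) /
                 (np.linalg.norm(v) * np.linalg.norm(direction)))

print(gender_score("hero"))  # negative: leans "he" in this toy data
print(gender_score("love"))  # positive: leans "she"
```

Applied to character and emotion words across a large corpus of stories, scores like these can quantify which traits a corpus systematically attaches to each gender.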
Timnit Gebru (2019)
From massive face-recognition-based surveillance and machine-learning-based decision systems predicting crime recidivism rates, to the move towards automated health diagnostic systems, artificial intelligence (AI) is being used in scenarios that have serious consequences in people's lives. However, this rapid permeation of AI into society has not been accompanied by a thorough investigation of the sociopolitical issues that cause certain groups of people to be harmed rather than advantaged by it. For instance, recent studies have shown that commercial face recognition systems have much higher error rates for dark-skinned women while having minimal errors on light-skinned men. A 2016 ProPublica investigation uncovered that machine-learning-based tools that assess crime recidivism rates in the US are biased against African Americans. Other studies show that natural language processing tools trained on newspapers exhibit societal biases (e.g. completing the analogy "man is to computer programmer as woman is to X" with "homemaker"). At the same time, books such as Weapons of Math Destruction and Automating Inequality detail how people in lower socioeconomic classes in the US are subjected to more automated decision-making tools than those who are in the upper class. Thus, these tools are most often used on people towards whom they exhibit the most bias. While many technical solutions have been proposed to alleviate bias in machine learning systems, we have to take a holistic and multifaceted approach. This includes standardization bodies determining what types of systems can be used in which scenarios, making sure that automated decision tools are created by people from diverse backgrounds, and understanding the historical and political factors that disadvantage certain groups who are subjected to these tools.
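The analogy probe mentioned above is usually run with the vector-offset method: find the word whose embedding is closest to b − a + c. The sketch below uses tiny made-up vectors chosen so the biased completion appears; real results like "homemaker" come from embeddings trained on large biased corpora, not from this toy data.

```python
import numpy as np

# Fabricated vectors for illustration; not trained embeddings.
emb = {
    "man":        np.array([1.0, 0.0, 0.0]),
    "woman":      np.array([0.0, 1.0, 0.0]),
    "programmer": np.array([1.0, 0.1, 1.0]),
    "homemaker":  np.array([0.1, 1.0, 1.0]),
    "doctor":     np.array([0.5, 0.5, 1.0]),
}

def analogy(a, b, c):
    """Return the vocabulary word (excluding the query words)
    closest by cosine similarity to emb[b] - emb[a] + emb[c]."""
    target = emb[b] - emb[a] + emb[c]
    def cos(u, v):
        return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    candidates = [w for w in emb if w not in (a, b, c)]
    return max(candidates, key=lambda w: cos(emb[w], target))

print(analogy("man", "programmer", "woman"))  # "homemaker" here
```

The point of the probe is that the completion reflects associations baked into the training corpus, which is why it is used as evidence of societal bias in NLP tools.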
Daniel A. Perley (2019)
I analyze the postdoctoral career tracks of a nearly-complete sample of astronomers from 28 United States graduate astronomy and astrophysics programs spanning 13 graduating years (N=1063). A majority of both men and women (65% and 66%, respectively) find long-term employment in astronomy or closely-related academic disciplines. No significant difference is observed in the rates at which men and women are hired into these jobs following their PhDs, or in the rates at which they leave the field. Applying a two-outcome survival analysis model to the entire data set, the relative academic hiring probability ratio for women vs. men at a common year post-PhD is H_(F/M) = 1.08 (+0.20, -0.17; 95% CI); the relative leaving probability ratio is L_(F/M) = 1.03 (+0.31, -0.24). These are both consistent with equal outcomes for both genders (H_(F/M) = L_(F/M) = 1) and rule out more than minor gender differences in hiring or in the decision to abandon an academic career. They suggest that despite discrimination and adversity, women scientists are successful at managing the transition between PhD, postdoctoral, and faculty/staff positions.
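The headline quantity H_(F/M) above is a ratio of per-year hiring probabilities. As a much-simplified illustration (the study fits a two-outcome survival model with censoring, which is not reproduced here), the point estimate for a single year can be sketched from raw counts; the counts below are fabricated.

```python
def hiring_ratio(hired_f, at_risk_f, hired_m, at_risk_m):
    """Ratio of empirical per-year hiring probabilities,
    women vs. men, at a common year post-PhD."""
    return (hired_f / at_risk_f) / (hired_m / at_risk_m)

# Fabricated counts: 20 of 100 women and 37 of 200 men
# at risk are hired into long-term positions that year.
print(round(hiring_ratio(20, 100, 37, 200), 3))  # 1.081
```

A value near 1 at every year post-PhD, as the study reports with confidence intervals spanning 1, is what "consistent with equal outcomes" means here.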
