
Assessing Gender Bias in the Information Systems Field: An Analysis of the Impact on Citations

Posted by: Silvia Masiero
Publication date: 2021
Research field: Informatics Engineering
Paper language: English





Gender bias, a systemic and unfair difference in how men and women are treated in a given domain, is widely studied across academic fields. Yet there are barely any studies of the phenomenon in the field of academic information systems (IS), which is surprising especially in light of the proliferation of such studies in the Science, Technology, Engineering and Mathematics (STEM) disciplines. To assess potential gender bias in the IS field, this paper outlines a study to estimate the impact of scholarly citations that female IS academics accumulate vis-à-vis their male colleagues. Drawing on a scientometric study of the 7,260 papers published in the most prestigious IS journals (known as the AIS Basket of Eight), our analysis aims to unveil potential bias in the accumulation of citations between genders in the field. We use panel regression to estimate gendered citation accumulation in the field. By doing so, we propose to contribute knowledge on a core dimension of gender bias in academia which is, so far, almost completely unexplored in the IS field.
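The panel-regression approach described above can be illustrated with a minimal sketch on synthetic data. The variable names, coefficients, and the simplified pooled-OLS-with-year-dummies specification below are all hypothetical, not the paper's actual model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic paper-year panel: 200 papers, each observed for 5 years.
n_papers, n_years = 200, 5
gender = rng.integers(0, 2, n_papers)        # hypothetical coding: 1 = female first author
paper_ids = np.repeat(np.arange(n_papers), n_years)
years = np.tile(np.arange(n_years), n_papers)

# Assumed data-generating process: citations grow with years since
# publication, minus a gender penalty of 1.5 citations.
true_gap = -1.5
citations = (2.0 * years + true_gap * gender[paper_ids]
             + rng.normal(0, 1.0, n_papers * n_years))

# Pooled OLS with year dummies, a simplified stand-in for panel regression:
# columns are intercept, gender dummy, and dummies for years 1..4.
X = np.column_stack([np.ones_like(citations),
                     gender[paper_ids].astype(float),
                     *(years == t for t in range(1, n_years))])
beta, *_ = np.linalg.lstsq(X, citations, rcond=None)
print(f"estimated gender gap: {beta[1]:.2f}")  # recovers a value near -1.5
```

A full panel specification would add paper-level fixed or random effects; this sketch only shows how a gender coefficient is read off a citation panel.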




Read also

Various measures have been proposed to quantify human-like social biases in word embeddings. However, bias scores based on these measures can suffer from measurement error. One indication of measurement quality is reliability, concerning the extent to which a measure produces consistent results. In this paper, we assess three types of reliability of word embedding gender bias measures, namely test-retest reliability, inter-rater consistency and internal consistency. Specifically, we investigate the consistency of bias scores across different choices of random seeds, scoring rules and words. Furthermore, we analyse the effects of various factors on these measures' reliability scores. Our findings inform better design of word embedding gender bias measures. Moreover, we urge researchers to be more critical about the application of such measures.
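The test-retest reliability idea in the abstract above can be sketched numerically: score the same words under two training runs (different seeds) and correlate the scores. The noise levels and the use of a simple Pearson correlation are illustrative assumptions, not the paper's protocol:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical per-word gender bias scores from two embedding training runs
# with different random seeds: a stable word-level component plus run noise.
n_words = 100
true_bias = rng.normal(0, 1, n_words)
run_a = true_bias + rng.normal(0, 0.3, n_words)
run_b = true_bias + rng.normal(0, 0.3, n_words)

# Test-retest reliability: correlation of bias scores across the two runs.
reliability = np.corrcoef(run_a, run_b)[0, 1]
print(f"test-retest reliability: {reliability:.2f}")
```

A reliability near 1 means the measure is robust to the random seed; large seed-dependent noise would push it toward 0.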
We conduct a study of hiring bias on a simulation platform where we ask Amazon MTurk participants to make hiring decisions for a mathematically intensive task. Our findings suggest hiring biases against Black workers and less attractive workers, and preferences towards Asian workers, female workers and more attractive workers. We also show that certain UI designs, including provision of candidates' information at the individual level and reducing the number of choices, can significantly reduce discrimination. However, provision of candidates' information at the subgroup level can increase discrimination. The results have practical implications for designing better online freelance marketplaces.
Gender diversity in the tech sector is not yet sufficient to create a balanced ratio of men and women. For many women, access to computer science is hampered by socialization-related, social, cultural and structural obstacles. The so-called implicit gender bias has a great influence in this respect. The lack of contact with areas of computer science makes it difficult to develop or expand potential interests. Female role models, as well as more transparency about the job profile, should help women to develop their possible interest in the profession. However, gender diversity can also be promoted and fostered through adapted measures by leaders.
Image captioning has made substantial progress with huge supporting image collections sourced from the web. However, recent studies have pointed out that captioning datasets, such as COCO, contain gender bias found in web corpora. As a result, learning models could heavily rely on the learned priors and image context for gender identification, leading to incorrect or even offensive errors. To encourage models to learn correct gender features, we reorganize the COCO dataset and present two new splits, the COCO-GB V1 and V2 datasets, where the train and test sets have different gender-context joint distributions. Models relying on contextual cues will suffer from huge gender prediction errors on the anti-stereotypical test data. Benchmarking experiments reveal that most captioning models learn gender bias, leading to high gender prediction errors, especially for women. To alleviate the unwanted bias, we propose a new Guided Attention Image Captioning model (GAIC) which provides self-guidance on visual attention to encourage the model to capture correct gender visual evidence. Experimental results validate that GAIC can significantly reduce gender prediction errors with competitive caption quality. Our code and the designed benchmark datasets are available at https://github.com/datamllab/Mitigating_Gender_Bias_In_Captioning_System.
The digital traces we leave behind when engaging with the modern world offer an interesting lens through which we study behavioral patterns as an expression of gender. Although gender differentiation has been observed in a number of settings, the majority of studies focus on a single data stream in isolation. Here we use a dataset of high-resolution data collected using mobile phones, as well as detailed questionnaires, to study gender differences in a large cohort. We consider mobility behavior and individual personality traits among a group of more than 800 university students. We also investigate interactions among them expressed via person-to-person contacts, interactions on online social networks, and telecommunication. Thus, we are able to study the differences between male and female behavior captured through a multitude of channels for a single cohort. We find that while the two genders are similar in a number of aspects, there are robust deviations that include multiple facets of social interactions, suggesting the existence of inherent behavioral differences. Finally, we quantify how aspects of an individual's characteristics and social behavior reveal their gender by posing it as a classification problem. We ask: How well can we distinguish between male and female study participants based on behavior alone? Which behavioral features are most predictive?
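The classification framing at the end of the abstract above can be sketched with a plain logistic regression on synthetic behavioral features. The features, the signal strength, and the gradient-descent training loop are all illustrative assumptions, not the study's actual pipeline:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical behavioral features (e.g., call counts, mobility radius,
# online-network activity) with a mild gender signal added to each.
n = 1000
y = rng.integers(0, 2, n)
X = rng.normal(0, 1, (n, 3)) + 0.8 * y[:, None]

# Plain logistic regression trained by gradient descent (no ML libraries).
w, b = np.zeros(3), 0.0
for _ in range(500):
    p = 1 / (1 + np.exp(-(X @ w + b)))   # predicted probabilities
    w -= 0.1 * X.T @ (p - y) / n          # gradient step on weights
    b -= 0.1 * np.mean(p - y)             # gradient step on intercept

p = 1 / (1 + np.exp(-(X @ w + b)))
accuracy = np.mean((p > 0.5) == y)
print(f"training accuracy: {accuracy:.2f}")
```

The fitted weights play the role of the "most predictive features" question: larger absolute weights indicate features that carry more gender signal in this toy setup.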