Pre-trained language models learn socially harmful biases from their training corpora, and may repeat these biases when used for generation. We study gender biases associated with the protagonist in model-generated stories. Such biases may be expressed either explicitly ("women can't park") or implicitly (e.g. an unsolicited male character guides her into a parking space). We focus on implicit biases, and use a commonsense reasoning engine to uncover them. Specifically, we infer and analyze the protagonist's motivations, attributes, mental states, and implications on others. Our findings regarding implicit biases are in line with prior work that studied explicit biases, for example showing that female characters' portrayal is centered around appearance, while male figures' focus on intellect.
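The abstract does not specify the tooling, but the described analysis (inferring motivations, attributes, mental states, and effects on other characters from story events) is in the style of COMET-ATOMIC commonsense inference. The following is a minimal sketch, assuming a seq2seq COMET-style checkpoint on the Hugging Face Hub; the model ID, the relation-to-quantity mapping, and the helper function are illustrative assumptions, not the paper's released code.

```python
# Hedged sketch: query a COMET-ATOMIC-style commonsense model for the kinds of
# inferences the abstract mentions. Model ID below is a placeholder, not from the paper.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

MODEL_ID = "your-org/comet-atomic-seq2seq"  # placeholder checkpoint (assumption)

# ATOMIC relations roughly corresponding to the analyzed quantities:
# xIntent -> motivation, xAttr -> attribute, xReact -> mental state,
# oReact / oEffect -> implications on other characters.
RELATIONS = ["xIntent", "xAttr", "xReact", "oReact", "oEffect"]

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_ID)

def infer_commonsense(event: str, relations=RELATIONS, num_beams=5):
    """Return model-generated tail phrases for each relation of a story event."""
    results = {}
    for rel in relations:
        # COMET-ATOMIC-2020-style prompt format: "<event> <relation> [GEN]"
        prompt = f"{event} {rel} [GEN]"
        inputs = tokenizer(prompt, return_tensors="pt")
        outputs = model.generate(
            **inputs,
            num_beams=num_beams,
            num_return_sequences=num_beams,
            max_new_tokens=16,
        )
        results[rel] = [
            tokenizer.decode(o, skip_special_tokens=True).strip() for o in outputs
        ]
    return results

# Example: an event extracted from a model-generated story about a protagonist.
print(infer_commonsense("PersonX parks the car"))
```

Aggregating such inferences separately for stories with female and male protagonists would then allow comparing, for instance, how often attribute inferences concern appearance versus intellect.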