In this paper, we study ethnic bias and how it varies across languages by analyzing and mitigating ethnic bias in monolingual BERT for English, German, Spanish, Korean, Turkish, and Chinese. To observe and quantify ethnic bias, we develop a novel metric called the Categorical Bias score. We then propose two mitigation methods: the first uses a multilingual model, and the second uses contextual word alignment of two monolingual models. We compare our proposed methods with monolingual BERT and show that both effectively alleviate ethnic bias. Which of the two methods works better depends on the amount of NLP resources available for a given language. We additionally experiment with Arabic and Greek to verify that our proposed methods work for a wider variety of languages.
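The abstract does not spell out how the Categorical Bias score is computed, so below is a minimal illustrative sketch of one plausible masked-LM probe: fill a template's masked slot with candidate ethnicity terms and take the variance of their log-probabilities, so that a model assigning very different likelihoods to different groups scores higher. The template, the candidate terms, and the variance-based formula are assumptions for illustration, not the paper's exact definition.

```python
# Illustrative sketch only: the template, candidate terms, and the
# variance-of-log-probabilities scoring are assumptions, not the
# paper's exact Categorical Bias definition.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()

# Hypothetical template and single-token nationality/ethnicity terms.
templates = ["people from [MASK] are violent."]
candidates = ["america", "germany", "turkey", "china"]

def template_bias(template, terms):
    """Variance of the log-probabilities the MLM assigns to each
    candidate term at the [MASK] slot; higher variance suggests the
    model treats the groups less evenly for this template."""
    inputs = tokenizer(template, return_tensors="pt")
    mask_pos = (inputs.input_ids[0] == tokenizer.mask_token_id).nonzero().item()
    with torch.no_grad():
        logits = model(**inputs).logits[0, mask_pos]
    log_probs = torch.log_softmax(logits, dim=-1)
    ids = tokenizer.convert_tokens_to_ids(terms)
    return log_probs[ids].var(unbiased=False).item()

# Average the per-template variance over all templates.
score = sum(template_bias(t, candidates) for t in templates) / len(templates)
print(f"illustrative bias score: {score:.4f}")
```

Under this kind of probe, a mitigated model (e.g., multilingual BERT or an aligned monolingual model) would be evaluated by recomputing the same score and checking that it decreases.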