Abstract

Machine translation (MT) technology has facilitated our daily tasks by providing accessible shortcuts for gathering, processing, and communicating information. However, it can suffer from biases that harm users and society at large. As a relatively new field of inquiry, studies of gender bias in MT still lack cohesion. This advocates for a unified framework to ease future research. To this end, we: i) critically review current conceptualizations of bias in light of theoretical insights from related disciplines, ii) summarize previous analyses aimed at assessing gender bias in MT, iii) discuss the mitigating strategies proposed so far, and iv) point toward potential directions for future work.