With language models being deployed increasingly in the real world, it is essential to address the fairness of their outputs. The word embedding representations of these language models often implicitly encode unwanted associations that form a social bias within the model. The nature of gendered languages like Hindi poses an additional problem for the quantification and mitigation of bias, owing to the way the forms of words in a sentence change with the gender of the subject. Additionally, work on measuring and debiasing systems for Indic languages remains sparse. In our work, we attempt to evaluate and quantify the gender bias within a Hindi-English machine translation system. We implement a modified version of the existing TGBI metric that accounts for the grammatical considerations of Hindi. We also compare and contrast the resulting bias measurements across multiple metrics, both for pre-trained embeddings and for those learned by our machine translation model.
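For context, below is a minimal sketch of how a TGBI-style score can be computed, assuming the commonly cited formulation of the original metric: each evaluation subset S_i, with fractions p_i, q_i, r_i of sentences translated as female, male, and gender-neutral, contributes P_i = sqrt(p_i * q_i) + r_i, and the final score is the average of the P_i. The function name, label scheme, and example data are illustrative assumptions, and the Hindi-specific modifications described above are not reproduced here.

```python
from math import sqrt

def tgbi(subsets):
    """TGBI-style score from per-sentence gender labels.

    `subsets` is a list of evaluation subsets; each subset is a non-empty
    list of labels: 'f' (translated as female), 'm' (male), 'n' (neutral).
    P_i = sqrt(p_i * q_i) + r_i reaches its maximum of 1.0 when all
    translations are gender-neutral, and 0.5 when outputs are perfectly
    balanced between female and male with no neutral translations.
    """
    scores = []
    for labels in subsets:
        n = len(labels)  # assumed non-empty; guard in real use
        p = labels.count('f') / n
        q = labels.count('m') / n
        r = labels.count('n') / n
        scores.append(sqrt(p * q) + r)
    return sum(scores) / len(scores)

# Illustrative example: a male-skewed subset vs. a more balanced one.
print(tgbi([['m', 'm', 'm', 'f'], ['f', 'm', 'n', 'n']]))  # ~0.59
```

A score closer to 1 indicates less gender-skewed behavior under this formulation, which is why heavily skewed subsets (the first example above) pull the average down.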