The state-of-the-art abusive language detection models report strong in-corpus performance, but underperform when evaluated on abusive comments that differ from the training scenario. As human annotation involves substantial time and effort, models that can adapt to newly collected comments can be useful. In this paper, we investigate the effectiveness of several Unsupervised Domain Adaptation (UDA) approaches for the task of cross-corpora abusive language detection. For comparison, we adapt a BERT variant, pre-trained on large-scale abusive comments, using Masked Language Model (MLM) fine-tuning. Our evaluation shows that the UDA approaches result in sub-optimal performance, while MLM fine-tuning does better in the cross-corpora setting. Detailed analysis reveals the limitations of the UDA approaches and emphasizes the need to build efficient adaptation methods for this task.
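To make the MLM fine-tuning step concrete, the sketch below shows the BERT-style dynamic masking that sits at the core of such adaptation: unlabeled target-corpus comments are tokenized, ~15% of positions are selected for prediction, and of those 80% become `[MASK]`, 10% a random token, and 10% stay unchanged. This is a minimal illustration under standard BERT assumptions, not the paper's exact pipeline; the function name and token ids are hypothetical.

```python
import random

def mask_tokens(token_ids, vocab_size, mask_id, special_ids,
                mask_prob=0.15, rng=None):
    """BERT-style dynamic masking for MLM fine-tuning.

    Returns (inputs, labels): labels hold the original token id at the
    ~15% of positions selected for prediction, and -100 elsewhere
    (the ignore index conventionally used by cross-entropy losses).
    """
    rng = rng or random.Random()
    inputs = list(token_ids)
    labels = [-100] * len(token_ids)
    for i, tok in enumerate(token_ids):
        if tok in special_ids:
            continue  # never mask special tokens such as [CLS]/[SEP]/[PAD]
        if rng.random() < mask_prob:
            labels[i] = tok  # model must predict the original token here
            r = rng.random()
            if r < 0.8:
                inputs[i] = mask_id                     # 80%: replace with [MASK]
            elif r < 0.9:
                inputs[i] = rng.randrange(vocab_size)   # 10%: random token
            # remaining 10%: keep the original token unchanged
    return inputs, labels
```

In a full adaptation run, batches masked this way from the unlabeled target corpus would be used to continue pre-training the encoder with the MLM objective before (or alongside) supervised training on the labeled source corpus.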