Nowadays, social media platforms use classification models to cope with hate speech and abusive language. A key problem with these models is their vulnerability to bias. A prevalent form of bias in hate speech and abusive language datasets is annotator bias, caused by the annotator's subjective perception and the complexity of the annotation task. In our paper, we develop a set of methods to measure annotator bias in abusive language datasets and to identify different perspectives on abusive language. We apply these methods to four different abusive language datasets. Our proposed approach supports annotation processes for such datasets and future research addressing different perspectives on the perception of abusive language.
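As a toy illustration only (the abstract does not specify the paper's actual measurement methods), one simple proxy for annotator bias is each annotator's rate of disagreement with the per-item majority label; a systematically high rate can flag a divergent perspective. The sketch below uses hypothetical annotator names, labels, and data.

# Minimal sketch, assuming annotations stored as item -> {annotator: label}.
# All names and data are illustrative, not from the paper.
from collections import Counter

annotations = {
    "tweet_1": {"ann_a": "abusive", "ann_b": "abusive", "ann_c": "neutral"},
    "tweet_2": {"ann_a": "abusive", "ann_b": "neutral", "ann_c": "neutral"},
    "tweet_3": {"ann_a": "abusive", "ann_b": "abusive", "ann_c": "abusive"},
}

def majority_label(labels):
    """Most frequent label for one item (ties broken arbitrarily)."""
    return Counter(labels).most_common(1)[0][0]

def annotator_deviation(annotations):
    """Fraction of items on which each annotator disagrees with the majority."""
    counts, disagreements = Counter(), Counter()
    for item_labels in annotations.values():
        majority = majority_label(list(item_labels.values()))
        for annotator, label in item_labels.items():
            counts[annotator] += 1
            if label != majority:
                disagreements[annotator] += 1
    return {a: disagreements[a] / counts[a] for a in counts}

print(annotator_deviation(annotations))
# -> {'ann_a': 0.33..., 'ann_b': 0.0, 'ann_c': 0.33...}

In practice, such per-annotator deviation scores would be complemented by chance-corrected agreement measures (e.g., Krippendorff's alpha) before concluding that an annotator is biased rather than simply facing ambiguous items.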