Abusive language detection has become an important tool for the cultivation of safe online platforms. We investigate the interaction of annotation quality and classifier performance. We use a new, fine-grained annotation scheme that allows us to distinguish between abusive language and colloquial uses of profanity that are not meant to harm. Our results show a tendency of crowd workers to overuse the abusive class, which creates an unrealistic class balance and affects classification accuracy. We also investigate different methods of distinguishing between explicit and implicit abuse and show that lexicon-based approaches either over- or underestimate the proportion of explicit abuse in data sets.
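To make the lexicon-based distinction concrete, the following is a minimal sketch, not the paper's actual implementation: a comment is labeled explicit abuse if it contains any term from a profanity lexicon, and the share of explicit abuse in a data set is estimated from those labels. The lexicon entries and the helper names (is_explicit, explicit_proportion) are hypothetical stand-ins.

    # Minimal sketch of a lexicon-based explicit/implicit abuse split.
    # The lexicon below is a hypothetical stand-in; in practice its
    # coverage is what drives over- or under-estimation.
    import re

    PROFANITY_LEXICON = {"idiot", "moron", "scum"}  # hypothetical entries

    def is_explicit(comment, lexicon=PROFANITY_LEXICON):
        """Label a comment 'explicit' if any lexicon term appears as a token."""
        tokens = re.findall(r"[a-z']+", comment.lower())
        return any(token in lexicon for token in tokens)

    def explicit_proportion(abusive_comments):
        """Estimate the share of explicit abuse among abusive comments."""
        if not abusive_comments:
            return 0.0
        return sum(is_explicit(c) for c in abusive_comments) / len(abusive_comments)

A too-small lexicon misses explicit abuse (underestimation), while an overly broad one also fires on harmless colloquial profanity (overestimation), which is the trade-off the abstract points to.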