We introduce HateBERT, a re-trained BERT model for abusive language detection in English. The model was trained on RAL-E, a large-scale dataset of Reddit comments in English from communities banned for being offensive, abusive, or hateful, which we have curated and made available to the public. We present the results of a detailed comparison between a general pre-trained language model and the re-trained version on three English datasets for offensive language, abusive language, and hate speech detection. On all datasets, HateBERT outperforms the corresponding general BERT model. We also discuss a battery of experiments comparing the portability of the fine-tuned models across the datasets, suggesting that portability is affected by the compatibility of the annotated phenomena.
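The workflow described in the abstract (a domain-adapted BERT checkpoint that is then fine-tuned per dataset) can be illustrated with a minimal Hugging Face Transformers sketch. The model ID "GroNLP/hateBERT", the label scheme, and the toy examples below are assumptions for illustration only, not details taken from the paper.

```python
# Minimal sketch: fine-tuning a re-trained BERT checkpoint (e.g. HateBERT)
# for binary abusive-language detection. Assumes `transformers` and `torch`
# are installed and that the checkpoint is available on the Hub.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "GroNLP/hateBERT"  # assumed Hub ID of the re-trained checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# Toy batch standing in for one of the benchmark datasets (e.g. offensive
# language detection); 0 = not abusive, 1 = abusive.
texts = ["you are a wonderful person", "you are a complete idiot"]
labels = torch.tensor([0, 1])

batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
outputs = model(**batch, labels=labels)

# outputs.loss is what a fine-tuning loop would minimize; outputs.logits
# give per-class scores at inference time.
print(outputs.loss.item(), outputs.logits.argmax(dim=-1))
```

In practice each of the three datasets would get its own fine-tuning run on top of the same re-trained checkpoint, which is what the portability experiments then compare across datasets.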