As socially unacceptable language becomes pervasive on social media platforms, the need for automatic content moderation becomes more pressing. This contribution introduces the Dutch Abusive Language Corpus (DALC v1.0), a new dataset of tweets manually annotated for abusive language. The resource addresses a gap in language resources for Dutch and adopts a multi-layer annotation scheme modeling the explicitness and the target of the abusive messages. Baseline experiments have been conducted on all annotation layers, achieving a macro F1 score of 0.748 for binary classification on the explicitness layer and 0.489 for target classification.
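The abstract does not specify which baseline models were used, so the sketch below is only an illustration of how a macro F1 score like the one reported for the binary explicitness task could be computed: a hypothetical TF-IDF plus linear SVM pipeline evaluated with scikit-learn's macro-averaged F1. The toy tweets and labels are invented for the example and do not come from DALC.

```python
# Minimal sketch of a macro-F1 baseline evaluation, assuming a
# TF-IDF + linear SVM pipeline; DALC's actual baselines are not
# described in the abstract, so everything below is illustrative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.metrics import f1_score

# Hypothetical toy data standing in for DALC tweets and their
# binary explicitness labels (abusive vs. not abusive).
train_texts = [
    "you are a complete idiot",      # toy abusive example
    "lovely weather in Amsterdam",   # toy non-abusive example
    "nobody wants you here, leave",  # toy abusive example
    "looking forward to the match",  # toy non-abusive example
]
train_labels = ["ABUSIVE", "NOT", "ABUSIVE", "NOT"]

test_texts = ["get lost, you fool", "great concert last night"]
test_labels = ["ABUSIVE", "NOT"]

# Word-level unigram/bigram TF-IDF features feeding a linear SVM.
vectorizer = TfidfVectorizer(ngram_range=(1, 2))
X_train = vectorizer.fit_transform(train_texts)
X_test = vectorizer.transform(test_texts)

clf = LinearSVC()
clf.fit(X_train, train_labels)
pred = clf.predict(X_test)

# Macro F1 averages the per-class F1 scores, giving each class
# equal weight regardless of its frequency; this is the metric
# reported for both the explicitness and target layers.
print("macro F1:", f1_score(test_labels, pred, average="macro"))
```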