Hate speech detection is an actively growing field of research, with a variety of recently proposed approaches that have pushed the state-of-the-art results. One of the challenges of such automated approaches, in particular recent deep learning models, is the risk of false positives (i.e., false accusations), which may lead to over-blocking or removal of harmless social media content in applications with little moderator intervention. We evaluate deep learning models under both in-domain and cross-domain hate speech detection conditions, and introduce an SVM approach that significantly improves on the state-of-the-art results when combined with the deep learning models through a simple majority-voting ensemble. The improvement is mainly due to a reduction of the false positive rate.
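To illustrate the kind of combination described above, here is a minimal sketch of a majority-voting ensemble that joins an SVM with two deep learning classifiers for binary hate speech detection. The concrete choices below are assumptions, not details from the abstract: scikit-learn with TF-IDF character n-gram features for the SVM, and pre-computed 0/1 predictions from two unspecified deep models on the same test set.

```python
# Minimal sketch of a majority-voting ensemble combining an SVM with
# deep learning classifiers for binary hate speech detection.
# Assumptions (not specified by the abstract): scikit-learn, TF-IDF
# character n-gram features for the SVM, and pre-computed 0/1 labels
# from two deep models on the same test texts.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC


def train_svm(train_texts, train_labels):
    """Train a linear SVM on character n-gram TF-IDF features."""
    svm = make_pipeline(
        TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 5)),
        LinearSVC(C=1.0),
    )
    svm.fit(train_texts, train_labels)
    return svm


def majority_vote(svm, test_texts, deep_preds_a, deep_preds_b):
    """Combine SVM predictions with two deep models by majority vote.

    deep_preds_a and deep_preds_b are arrays of 0/1 predictions produced
    elsewhere (e.g., by fine-tuned neural classifiers).
    """
    svm_preds = svm.predict(test_texts)
    votes = np.vstack([svm_preds, deep_preds_a, deep_preds_b])
    # A text is labelled as hate speech only if at least 2 of the 3
    # classifiers agree, which is what tends to lower the false positive rate.
    return (votes.sum(axis=0) >= 2).astype(int)
```

In this sketch a post is flagged only when two of the three classifiers agree, which is consistent with the abstract's observation that the gain comes mainly from a lower false positive rate.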