We study the usefulness of hateful metaphors as features for the identification of the type and target of hate speech in Dutch Facebook comments. For this purpose, all hateful metaphors in the Dutch LiLaH corpus were annotated and interpreted in line with Conceptual Metaphor Theory and Critical Metaphor Analysis. We provide SVM and BERT/RoBERTa results, and investigate the effect of different metaphor information encoding methods on hate speech type and target detection accuracy. The results of the conducted experiments show that hateful metaphor features improve model performance for both tasks. To our knowledge, this is the first time that the effectiveness of hateful metaphors as an information source for hate speech classification has been investigated.
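To make the idea of a metaphor information encoding method concrete, the sketch below shows one plausible approach: wrapping annotated metaphor spans in marker tokens before feeding the text to a tokenizer or feature extractor. The `[MET]`/`[/MET]` markers, the character-offset span format, and the example comment are illustrative assumptions, not the paper's actual annotation scheme.

```python
def mark_metaphors(text, spans, open_tok="[MET]", close_tok="[/MET]"):
    """Insert marker tokens around (start, end) character spans.

    `spans` is assumed to be a list of half-open character offsets
    pointing at annotated metaphor expressions in `text`.
    """
    out = []
    last = 0
    for start, end in sorted(spans):
        out.append(text[last:start])
        out.append(f"{open_tok} {text[start:end]} {close_tok}")
        last = end
    out.append(text[last:])
    return "".join(out)

# Hypothetical annotated comment; "a plague" (chars 17-25) is the metaphor.
comment = "those people are a plague on our streets"
print(mark_metaphors(comment, [(17, 25)]))
# → those people are [MET] a plague [/MET] on our streets
```

For a BERT-style model, such markers would typically be registered as additional special tokens so they receive their own embeddings rather than being split into subwords.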