Hate speech has grown significantly on social media, causing serious consequences for victims of all demographics. Despite much attention paid to characterizing and detecting discriminatory speech, most work has focused on explicit or overt hate speech, failing to address a more pervasive form based on coded or indirect language. To fill this gap, this work introduces a theoretically justified taxonomy of implicit hate speech and a benchmark corpus with fine-grained labels for each message and its implication. We present systematic analyses of our dataset using contemporary baselines to detect and explain implicit hate speech, and we discuss key features that challenge existing models. This dataset will continue to serve as a useful benchmark for understanding this multifaceted issue.