Detection of toxic spans - detecting toxicity of content at the granularity of tokens - is crucial for effective moderation of online discussions. The baseline approach to this problem with a transformer model is to add a token classification head to the language model and fine-tune its layers on a token-labeled dataset. One limitation of such a baseline approach is the scarcity of labeled data. To improve the results, we studied leveraging existing public datasets for a related but different task: classification of entire comments/sentences. We propose two approaches: the first fine-tunes transformer models that are pre-trained on sentence classification samples. In the second approach, we perform weak supervision with soft attention to learn token-level labels from sentence-level labels. Our experiments show improvements in the F1 score over the baseline approach. The implementation has been released publicly.
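As a concrete illustration, the baseline can be sketched roughly as follows: a minimal sketch assuming the Hugging Face transformers library, where the bert-base-cased backbone, the binary toxic/non-toxic label scheme, and the toy example are our own assumptions rather than the exact configuration used in this work.

# Minimal sketch of the baseline: a pre-trained transformer with a token
# classification head, fine-tuned on token-labeled toxic-span data.
# Backbone, label scheme, and example are illustrative assumptions.
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

model_name = "bert-base-cased"  # assumed backbone
tokenizer = AutoTokenizer.from_pretrained(model_name)
# Token classification head on top of the language model: 2 labels (non-toxic / toxic)
model = AutoModelForTokenClassification.from_pretrained(model_name, num_labels=2)

text = "you are a complete idiot"
enc = tokenizer(text, return_tensors="pt")

# Hypothetical gold token labels: 0 = non-toxic, 1 = toxic
labels = torch.zeros_like(enc["input_ids"])
labels[0, -2] = 1  # mark the sub-token before [SEP] ("idiot") as toxic, as a toy example

# One fine-tuning step (optimizer and scheduler omitted for brevity)
outputs = model(**enc, labels=labels)
outputs.loss.backward()

Predicted toxic spans would then be obtained by taking the argmax over the per-token logits and concatenating the character offsets of tokens labeled toxic.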