This paper presents a system used for SemEval-2021 Task 5: Toxic Spans Detection. Our system is an ensemble of BERT-based models for binary word classification, trained on a dataset extended by toxic comments modified and generated by two language models. For the toxic word classification, the prediction threshold value was optimized separately for every comment, in order to maximize the expected F1 value.
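The per-comment threshold optimization described above can be sketched with a common plug-in approximation: treat each token's predicted probability as an independent chance that the token is truly toxic, estimate the expected F1 of each candidate threshold, and keep the best one. This is a minimal illustration, not the authors' exact procedure; the probabilities and the candidate-threshold scheme are hypothetical.

```python
def expected_f1(probs, threshold):
    """Plug-in approximation of the expected F1 for the token set selected
    at `threshold`, assuming each probability is an independent chance
    that the token is truly toxic (a simplifying assumption)."""
    selected = [p for p in probs if p >= threshold]
    expected_tp = sum(selected)   # expected number of true positives
    expected_pos = sum(probs)     # expected number of truly toxic tokens
    denom = len(selected) + expected_pos
    # If nothing is predicted and nothing is expected, F1 is conventionally 1.
    return 2 * expected_tp / denom if denom else 1.0

def best_threshold(probs):
    """Pick, per comment, the threshold maximizing the expected F1.
    Candidates are the predicted probabilities themselves, plus a value
    above 1 so that predicting no toxic span at all is also considered."""
    candidates = sorted(set(probs)) + [1.1]
    return max(candidates, key=lambda t: expected_f1(probs, t))

# Hypothetical per-token toxicity probabilities for one comment.
probs = [0.9, 0.8, 0.3, 0.05]
t = best_threshold(probs)
spans = [i for i, p in enumerate(probs) if p >= t]
```

Because the threshold is chosen per comment, a comment with uniformly low probabilities can end up with an empty predicted span, which the plug-in estimate correctly prefers when the expected overlap is small.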
References used
https://aclanthology.org/
The Toxic Spans Detection task of SemEval-2021 required participants to predict the spans of toxic posts that were responsible for the toxic label of the posts. The task could be addressed as supervised sequence labeling, using training data with gold toxic spans.
Detection of toxic spans, i.e., detecting the toxicity of content at the granularity of individual tokens, is crucial for effective moderation of online discussions. The baseline approach for this problem using a transformer model is to add a token classification head.
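The token classification head mentioned above can be sketched framework-free: a linear layer maps each token's hidden vector to a toxicity logit, and a sigmoid turns it into a binary probability. The hidden states, weights, and dimensions below are hypothetical stand-ins for what a real transformer would produce.

```python
import math

def token_classification_head(hidden_states, weights, bias):
    """Linear head mapping each token's hidden vector to a toxicity
    logit, followed by a sigmoid for binary (toxic / non-toxic)
    classification of every token."""
    def sigmoid(x):
        return 1.0 / (1.0 + math.exp(-x))
    logits = [sum(h_j * w_j for h_j, w_j in zip(h, weights)) + bias
              for h in hidden_states]
    return [sigmoid(z) for z in logits]

# Hypothetical 3-token sequence with 4-dimensional hidden states.
hidden = [[0.2, -0.1, 0.4, 0.0],
          [1.5,  0.3, -0.2, 0.8],
          [-0.5, 0.1, 0.0, -0.3]]
w, b = [1.0, 0.5, -0.3, 0.7], -0.2
probs = token_classification_head(hidden, w, b)
toxic = [p >= 0.5 for p in probs]   # per-token binary decision
```

In practice this head sits on top of contextual hidden states (e.g., from BERT) and is trained jointly with, or on top of, the encoder; the fixed 0.5 cutoff here is exactly what the per-comment threshold optimization in the first abstract replaces.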
This paper presents our submission to SemEval-2021 Task 5: Toxic Spans Detection. The purpose of this task is to detect the spans that make a text toxic, which is a complex task for several reasons. Firstly, because of the intrinsic subjectivity of toxicity.
In this work, we present our approach and findings for SemEval-2021 Task 5 - Toxic Spans Detection. The task's main aim was to identify spans to which a given text's toxicity could be attributed. The task is challenging mainly due to two constraints.
This paper describes the participation of the SINAI team at Task 5: Toxic Spans Detection, which consists of identifying spans that make a text toxic. Several resources and systems have been developed so far in the context of offensive language.