In this work, we present our approach and findings for SemEval-2021 Task 5: Toxic Spans Detection. The task's main aim was to identify the spans to which a given text's toxicity could be attributed. The task is challenging mainly due to two constraints: the small training dataset and the imbalanced class distribution. Our paper investigates two techniques, semi-supervised learning and learning with a Self-Adjusting Dice Loss, to tackle these challenges. Our submitted system (ranked ninth on the leaderboard) consisted of an ensemble of various pre-trained Transformer language models, each trained using one of the two techniques above.
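The Self-Adjusting Dice Loss mentioned in the abstract follows the formulation of Li et al. (2020), where each token's contribution is down-weighted by a factor of (1 - p) so that easy, already-well-classified tokens contribute less to the loss. Below is a minimal PyTorch sketch of this loss for binary token classification; the function name, the default values of alpha and gamma, and the mean reduction are illustrative assumptions, not the authors' exact implementation.

```python
import torch

def self_adjusting_dice_loss(logits, targets, alpha=1.0, gamma=1.0):
    """Self-Adjusting Dice Loss (Li et al., 2020) for binary token classification.

    logits:  (N,) raw scores for the positive (toxic) class
    targets: (N,) binary gold labels, 1.0 for toxic tokens
    """
    probs = torch.sigmoid(logits)
    # The self-adjusting factor (1 - p)^alpha shrinks the weight of
    # confidently classified (easy) tokens, focusing training on hard ones.
    probs = ((1.0 - probs) ** alpha) * probs
    numerator = 2.0 * probs * targets + gamma
    denominator = probs + targets + gamma
    # Dice coefficient per token, turned into a loss and averaged.
    return (1.0 - numerator / denominator).mean()
```

The gamma smoothing term keeps the ratio well-defined when both the prediction and the gold label are zero, which matters here because non-toxic tokens dominate the class distribution.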