In this paper, we describe our system for SemEval-2021 Task 5: Toxic Spans Detection. Our proposed system approaches the problem as a token classification task. We trained our model to find toxic words and concatenate their spans to predict the toxic spans within a sentence. We fine-tuned Pre-trained Language Models (PLMs) to identify the toxic words. For fine-tuning, we stacked a classification layer on top of the PLM features of each word to classify whether it is toxic or not. PLMs are pre-trained using different objectives, and their performance may differ on downstream tasks. We therefore compare the performance of BERT, ELECTRA, RoBERTa, XLM-RoBERTa, T5, XLNet, and MPNet for identifying toxic spans within a sentence. Our best-performing system used RoBERTa, achieving an F1 score of 0.6841 and securing rank 16 on the official leaderboard.
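The span-construction step described above (turning per-token toxic/non-toxic predictions into character-level toxic spans, the format SemEval-2021 Task 5 evaluates on) can be sketched in plain Python. This is an illustrative helper, not the authors' code; the token character offsets and binary labels are assumed to come from the tokenizer and the fine-tuned classifier:

```python
def toxic_char_spans(token_offsets, toxic_labels):
    """Given (start, end) character offsets for each token and a binary
    toxic (1) / non-toxic (0) label per token, return the sorted list of
    toxic character indices. Adjacent toxic tokens naturally merge into
    one contiguous span of character offsets."""
    chars = set()
    for (start, end), is_toxic in zip(token_offsets, toxic_labels):
        if is_toxic:
            chars.update(range(start, end))
    return sorted(chars)

# Hypothetical example: tokens "You" (0-3), "are" (4-7), "stupid" (8-14),
# with only "stupid" predicted toxic by the classification head.
offsets = [(0, 3), (4, 7), (8, 14)]
labels = [0, 0, 1]
print(toxic_char_spans(offsets, labels))  # [8, 9, 10, 11, 12, 13]
```

Using a set of character indices means overlapping or repeated token offsets are deduplicated automatically, matching the task's set-based span F1 evaluation.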