This paper presents our submission to SemEval-2021 Task 5: Toxic Spans Detection. The purpose of this task is to detect the spans that make a text toxic, which is a complex task for several reasons. Firstly, because of the intrinsic subjectivity of toxicity, and secondly, because toxicity does not always come from single words such as insults or offensive terms, but sometimes from whole expressions formed by words that may not be toxic individually. Following this idea of focusing on both single words and multi-word expressions, we study the impact of using a multi-depth DistilBERT model, which uses embeddings from different layers to estimate the final per-token toxicity. Our quantitative results show that using information from multiple depths boosts the performance of the model. Finally, we also analyze our best model qualitatively.
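To make the multi-depth idea concrete, below is a minimal sketch of a token classifier that concatenates hidden states from several DistilBERT layers before a per-token toxicity head, using PyTorch and HuggingFace transformers. The specific layer indices (2, 4, 6), the single linear head, and the binary per-token label scheme are illustrative assumptions, not the exact configuration of the submitted system.

```python
import torch
import torch.nn as nn
from transformers import DistilBertModel, DistilBertTokenizerFast

class MultiDepthDistilBert(nn.Module):
    """Hypothetical multi-depth token classifier: hidden states from
    several DistilBERT layers are concatenated and passed to a
    per-token toxic / non-toxic head (assumed label scheme)."""

    def __init__(self, layers=(2, 4, 6), hidden_size=768):
        super().__init__()
        self.encoder = DistilBertModel.from_pretrained(
            "distilbert-base-uncased", output_hidden_states=True
        )
        self.layers = layers  # which encoder depths to combine (assumed choice)
        self.classifier = nn.Linear(hidden_size * len(layers), 2)

    def forward(self, input_ids, attention_mask):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        # hidden_states[0] is the embedding output; 1..6 are the transformer blocks
        picked = [out.hidden_states[i] for i in self.layers]
        features = torch.cat(picked, dim=-1)  # (batch, seq_len, 768 * n_layers)
        return self.classifier(features)      # per-token logits

tokenizer = DistilBertTokenizerFast.from_pretrained("distilbert-base-uncased")
model = MultiDepthDistilBert()
enc = tokenizer("you are a complete idiot", return_tensors="pt")
logits = model(enc["input_ids"], enc["attention_mask"])
toxic_tokens = logits.argmax(-1)  # 1 marks a token predicted as toxic (assumed)
```

Token-level predictions like these can then be mapped back to character offsets (e.g. via the tokenizer's offset mapping) to produce the toxic spans the task asks for.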