Toxicity is pervasive in social media and poses a major threat to the health of online communities. The recent introduction of pre-trained language models, which have achieved state-of-the-art results on many NLP tasks, has transformed the way we approach natural language processing. However, the inherent nature of pre-training means that such models are unlikely to capture task-specific statistical information or learn domain-specific knowledge. Additionally, most implementations of these models do not employ conditional random fields, a method for joint classification of all tokens in a sequence. We show that addressing these gaps can improve model performance on the Toxic Spans Detection task at SemEval-2021, achieving a score within 4 percentage points of the top-performing team.
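The key idea behind the conditional random field mentioned above is that the tag sequence is decoded jointly: the score of each token's tag depends on the tags of its neighbours via a learned transition matrix, rather than each token being classified independently. A minimal sketch of this joint decoding step (Viterbi over a linear-chain CRF) is shown below; the emission and transition scores are illustrative placeholders standing in for the outputs of a trained model, not values from the paper's system.

```python
import numpy as np

def viterbi_decode(emissions, transitions):
    """Find the highest-scoring tag sequence under a linear-chain CRF.

    emissions:   (seq_len, num_tags) per-token tag scores, e.g. from a
                 pre-trained language model's classification head
    transitions: (num_tags, num_tags) score of moving from tag i to tag j

    Returns the best tag sequence and its total score.
    """
    seq_len, num_tags = emissions.shape
    score = emissions[0].copy()                      # best score ending in each tag
    backptr = np.zeros((seq_len, num_tags), dtype=int)

    for t in range(1, seq_len):
        # candidate[i, j] = score of the best path that ends in tag i at
        # step t-1 and then takes tag j at step t
        candidate = score[:, None] + transitions + emissions[t][None, :]
        backptr[t] = candidate.argmax(axis=0)
        score = candidate.max(axis=0)

    # Follow back-pointers from the best final tag to recover the path.
    best_last = int(score.argmax())
    path = [best_last]
    for t in range(seq_len - 1, 0, -1):
        path.append(int(backptr[t][path[-1]]))
    return path[::-1], float(score.max())
```

With two tags (0 = non-toxic, 1 = toxic) and zero transition scores, decoding reduces to per-token argmax; a trained transition matrix is what lets the CRF prefer contiguous toxic spans over isolated single-token predictions.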