This paper presents the system description of the hub team, covering the related work and experimental results of our team's participation in SemEval-2021 Task 5: Toxic Spans Detection. The data for this shared task comes from posts collected on the Internet, and the goal is to identify the toxic content contained in these texts by locating the spans of toxic text as accurately as possible. Within the same post, the toxic text may consist of one paragraph or multiple paragraphs. Our team uses a word-level classification scheme to accomplish this task; the system used to produce the submitted results is ALBERT+BiLSTM+CRF. The evaluation metric for the task is the F1 score, and the final score of our team's predictions on the test set is 0.6640226029.
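To make the word-level tagging setup concrete, the listing below is a minimal, illustrative sketch of an ALBERT+BiLSTM+CRF token tagger, assuming PyTorch, Hugging Face transformers, and the pytorch-crf package; the pretrained model name, tag set size, hyperparameters, and example input are assumptions for illustration, not the configuration our team actually used.

# Minimal sketch of an ALBERT + BiLSTM + CRF token tagger. Model name, tag set,
# and hyperparameters are illustrative assumptions, not the team's actual setup.
# Requires: torch, transformers, pytorch-crf.
import torch
import torch.nn as nn
from transformers import AlbertModel, AlbertTokenizerFast
from torchcrf import CRF

class AlbertBiLstmCrfTagger(nn.Module):
    def __init__(self, pretrained="albert-base-v2", num_tags=3, lstm_hidden=256):
        super().__init__()
        self.albert = AlbertModel.from_pretrained(pretrained)
        self.lstm = nn.LSTM(
            input_size=self.albert.config.hidden_size,
            hidden_size=lstm_hidden,
            batch_first=True,
            bidirectional=True,
        )
        self.emission = nn.Linear(2 * lstm_hidden, num_tags)  # per-token tag scores
        self.crf = CRF(num_tags, batch_first=True)

    def forward(self, input_ids, attention_mask, tags=None):
        # Contextual subword representations from ALBERT.
        hidden = self.albert(input_ids, attention_mask=attention_mask).last_hidden_state
        # BiLSTM re-encodes the sequence before emission scoring.
        lstm_out, _ = self.lstm(hidden)
        emissions = self.emission(lstm_out)
        mask = attention_mask.bool()
        if tags is not None:
            # Training: negative log-likelihood of the gold tag sequence under the CRF.
            return -self.crf(emissions, tags, mask=mask, reduction="mean")
        # Inference: Viterbi decoding of the most likely tag sequence per post.
        return self.crf.decode(emissions, mask=mask)

# Hypothetical usage: predict a tag id for each subword token of a post.
tokenizer = AlbertTokenizerFast.from_pretrained("albert-base-v2")
model = AlbertBiLstmCrfTagger().eval()
enc = tokenizer("Stay out of this, you idiot.", return_tensors="pt")
with torch.no_grad():
    pred_tags = model(enc["input_ids"], enc["attention_mask"])  # list of tag-id sequences

In a setup like this, the predicted toxic tags can be mapped back to the character offsets of their tokens to produce the toxic spans required by the task.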