Detecting which parts of a sentence contribute to that sentence's toxicity---rather than providing a sentence-level verdict of hatefulness---would increase the interpretability of models and allow human moderators to better understand the outputs of the system. This paper presents the methodology and results of our team, UTNLP, in the SemEval-2021 shared task 5 on toxic spans detection. We test multiple models and contextual embeddings and report the best-performing setting of all. The experiments start with keyword-based models and are followed by attention-based, named-entity-based, transformer-based, and ensemble models. Our best approach, an ensemble model, achieves an F1 of 0.684 in the competition's evaluation phase.
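To make the task concrete, the keyword-based starting point mentioned above can be sketched as follows. This is a minimal illustration, not the paper's actual system: the lexicon below is a hypothetical placeholder, and the output format (a list of toxic character offsets per comment) follows the shared task's convention.

```python
import re

# Hypothetical toxic lexicon; the real baseline would use a curated list.
TOXIC_LEXICON = {"idiot", "stupid", "moron"}

def toxic_spans(text):
    """Return character offsets of tokens found in the toxic lexicon,
    in the per-character offset format used by the shared task."""
    spans = []
    for match in re.finditer(r"\w+", text):
        if match.group().lower() in TOXIC_LEXICON:
            spans.extend(range(match.start(), match.end()))
    return spans

print(toxic_spans("You are a stupid person"))  # offsets of "stupid": [10, ..., 15]
```

Such a baseline ignores context entirely, which is why the paper moves on to attention-based, named-entity-based, and transformer-based models before ensembling them.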