Toxic language is often present in online forums, especially when politics and other polarizing topics arise, and can lead to people becoming discouraged from joining or continuing conversations. In this paper, we use data consisting of comments with the indices of toxic text labelled to train an RNN to determine which parts of the comments make them toxic, which could aid online moderators. We compare results using both the original dataset and an augmented set, as well as GRU versus LSTM RNN models.
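A minimal sketch of the kind of recurrent token tagger the abstract describes may make the setup concrete: each token of a comment is classified as toxic or not, and the recurrent cell (GRU versus LSTM) is swappable. This is an illustrative assumption about the architecture, not the authors' released code; all names and hyperparameters below are hypothetical.

import torch
import torch.nn as nn

class ToxicSpanTagger(nn.Module):
    # Illustrative token-level tagger; not the paper's actual implementation.
    def __init__(self, vocab_size, embed_dim=100, hidden_dim=128, cell="lstm"):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        # The abstract compares GRU and LSTM cells; both share this interface.
        rnn_cls = nn.LSTM if cell == "lstm" else nn.GRU
        # Bidirectional so each token sees left and right context.
        self.rnn = rnn_cls(embed_dim, hidden_dim, batch_first=True,
                           bidirectional=True)
        # One logit per token: toxic (1) vs. non-toxic (0).
        self.classifier = nn.Linear(2 * hidden_dim, 1)

    def forward(self, token_ids):  # token_ids: (batch, seq_len)
        states, _ = self.rnn(self.embed(token_ids))
        return self.classifier(states).squeeze(-1)  # (batch, seq_len) logits

Training would pair these logits with a per-token binary mask built from the labelled toxic indices, e.g. nn.BCEWithLogitsLoss()(model(batch), toxic_mask.float()); the augmented set mentioned in the abstract would simply contribute additional (comment, mask) pairs.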
References used
https://aclanthology.org/
The Toxic Spans Detection task of SemEval-2021 required participants to predict the spans of toxic posts that were responsible for the toxic label of the posts. The task could be addressed as supervised sequence labeling, using training data with gold toxic spans provided.
This paper describes the participation of the SINAI team at Task 5: Toxic Spans Detection, which consists of identifying the spans that make a text toxic. Several resources and systems have been developed so far in the context of offensive language.
This paper presents a system used for SemEval-2021 Task 5: Toxic Spans Detection. Our system is an ensemble of BERT-based models for binary word classification, trained on a dataset extended by toxic comments modified and generated by two language models.
Detecting which parts of a sentence contribute to that sentence's toxicity, rather than providing a sentence-level verdict of hatefulness, would increase the interpretability of models and allow human moderators to better understand the outputs of such systems.
Detection of toxic spans, i.e., detecting toxicity at the granularity of individual tokens, is crucial for effective moderation of online discussions. The baseline approach for this problem using a transformer model is to add a token classification head.
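The baseline mentioned in this last excerpt, a pretrained transformer with a token classification head, can be sketched with the Hugging Face Transformers library. The checkpoint and two-label scheme below are illustrative assumptions, not the cited system's exact configuration, and the classification head is randomly initialized until fine-tuned on gold spans.

from transformers import AutoTokenizer, AutoModelForTokenClassification
import torch

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForTokenClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)  # labels: 0 = non-toxic, 1 = toxic

enc = tokenizer("an example comment", return_tensors="pt")
with torch.no_grad():
    logits = model(**enc).logits  # shape: (1, seq_len, 2)
pred = logits.argmax(-1)[0]       # per-token label predictions
# Character-level spans can then be recovered with enc.token_to_chars(i)
# for each token i predicted toxic (fast tokenizers expose these offsets).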