Toxic comments contain forms of non-acceptable language targeted at groups or individuals. These types of comments have become a serious concern for government organizations, online communities, and social media platforms. Although some approaches exist to handle non-acceptable language, most of them focus on supervised learning and the English language. In this paper, we treat toxic comment detection as a semi-supervised learning task over a heterogeneous graph. We evaluate the approach on a Portuguese-language toxicity dataset, outperforming several graph-based methods and achieving competitive results compared to transformer architectures.
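To make the setting concrete, here is a minimal illustrative sketch (not the paper's actual method) of semi-supervised label propagation over a heterogeneous document-word graph: documents connect to the words they contain, and labels from a few seed documents spread to unlabeled documents through shared words. All document texts, node names, and the two classes below are invented for illustration.

```python
# Hedged sketch: semi-supervised label propagation on a bipartite
# document-word graph. Seed documents keep their labels fixed; every
# other node repeatedly averages its neighbors' label distributions.

def build_graph(docs):
    """Bipartite adjacency: each doc node links to its word nodes."""
    adj = {}
    for doc_id, text in docs.items():
        for word in set(text.split()):
            adj.setdefault(doc_id, set()).add(("w", word))
            adj.setdefault(("w", word), set()).add(doc_id)
    return adj

def propagate(adj, seeds, classes=("toxic", "non-toxic"), iters=20):
    """Iteratively average neighbor label scores; seeds stay clamped."""
    scores = {n: {c: 0.0 for c in classes} for n in adj}
    for n, c in seeds.items():
        scores[n][c] = 1.0
    for _ in range(iters):
        new = {}
        for n in adj:
            if n in seeds:               # labeled seeds never change
                new[n] = scores[n]
                continue
            agg = {c: 0.0 for c in classes}
            for nb in adj[n]:
                for c in classes:
                    agg[c] += scores[nb][c]
            total = sum(agg.values()) or 1.0
            new[n] = {c: v / total for c, v in agg.items()}
        scores = new
    return scores

# Toy corpus: two labeled seed documents, two unlabeled ones.
docs = {
    "d1": "you are awful and stupid",
    "d2": "have a wonderful day friend",
    "d3": "stupid awful comment",        # unlabeled, shares toxic words
    "d4": "wonderful friend indeed",     # unlabeled, shares benign words
}
seeds = {"d1": "toxic", "d2": "non-toxic"}
scores = propagate(build_graph(docs), seeds)
pred = {d: max(scores[d], key=scores[d].get) for d in ("d3", "d4")}
```

In this toy example, `d3` is pulled toward "toxic" because it only shares words with the toxic seed, and `d4` toward "non-toxic"; real heterogeneous-graph methods would additionally weight edges and include other node types.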