Moderation of reader comments is a significant problem for online news platforms. Here, we experiment with models for automatic moderation, using a dataset of comments from a popular Croatian newspaper. Our analysis shows that while comments that violate the moderation rules mostly share common linguistic and thematic features, their content varies across the different sections of the newspaper. We therefore make our models topic-aware, incorporating semantic features from a topic model into the classification decision. Our results show that topic information improves the performance of the model, increases its confidence in correct outputs, and helps us understand the model's outputs.
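The topic-aware classification described above can be sketched as follows. This is a minimal illustration, not the paper's actual pipeline: it assumes a simple setup where per-document topic distributions from an LDA topic model are concatenated with bag-of-words features before the classification decision. All comment texts and labels below are invented toy data.

```python
# Hypothetical sketch: make a comment classifier "topic-aware" by
# concatenating semantic features from a topic model (LDA) with
# surface bag-of-words features. Toy data, illustrative only.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.linear_model import LogisticRegression

comments = [
    "great article, thanks for the coverage",
    "you are an idiot, shut up",
    "interesting analysis of the election",
    "spam link buy now cheap pills",
    "thoughtful piece on the economy",
    "total garbage, the author should quit",
]
labels = np.array([0, 1, 0, 1, 0, 1])  # 1 = violates moderation rules

# Surface features: bag-of-words counts
vec = CountVectorizer()
X_bow = vec.fit_transform(comments)

# Semantic features: per-comment topic distributions from LDA
lda = LatentDirichletAllocation(n_components=2, random_state=0)
X_topics = lda.fit_transform(X_bow)  # each row sums to 1

# Topic-aware classifier: feed both feature sets to the decision
X = np.hstack([X_bow.toarray(), X_topics])
clf = LogisticRegression(max_iter=1000).fit(X, labels)
probs = clf.predict_proba(X)[:, 1]  # P(comment violates rules)
```

In this sketch the topic distribution acts as a low-dimensional semantic summary of each comment, letting the classifier condition its decision on what section-like theme a comment belongs to, in addition to which words it contains.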
Today, news media organizations regularly engage with readers by enabling them to comment on news articles. This creates the need for comment moderation and removal of disallowed comments -- a time-consuming task often performed by human moderators.