Despite the recent successes of transformer-based models in terms of effectiveness on a variety of tasks, their decisions often remain opaque to humans. Explanations are particularly important for tasks like offensive language or toxicity detection on social media because a manual appeal process is often in place to dispute automatically flagged content. In this work, we propose a technique to improve the interpretability of these models, based on a simple and powerful assumption: a post is at least as toxic as its most toxic span. We incorporate this assumption into transformer models by scoring a post based on the maximum toxicity of its spans and augmenting the training process to identify correct spans. We find this approach effective, and it can produce explanations that exceed the quality of those provided by logistic regression analysis (often regarded as a highly interpretable model), according to a human study.
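As a rough illustration of the scoring scheme described above, the following is a minimal PyTorch sketch, not the authors' implementation: all names are hypothetical, the encoder is assumed to be a HuggingFace-style transformer, and for simplicity the candidate "spans" are single tokens. The post-level logit is the maximum of the per-span toxicity logits, and gold span annotations, where available, supervise the span scorer directly.

import torch
import torch.nn as nn

class MaxSpanToxicity(nn.Module):
    # Hypothetical sketch: `encoder` is any transformer mapping token ids
    # to hidden states (e.g. a HuggingFace AutoModel); candidate spans
    # here are single tokens for simplicity.
    def __init__(self, encoder, hidden_size):
        super().__init__()
        self.encoder = encoder
        self.span_scorer = nn.Linear(hidden_size, 1)  # per-span toxicity logit

    def forward(self, input_ids, attention_mask):
        hidden = self.encoder(input_ids, attention_mask=attention_mask).last_hidden_state
        span_logits = self.span_scorer(hidden).squeeze(-1)  # (batch, seq_len)
        # Padding positions must never be selected as the most toxic span.
        span_logits = span_logits.masked_fill(attention_mask == 0, float("-inf"))
        # The post is scored as at least as toxic as its most toxic span.
        post_logit, _ = span_logits.max(dim=-1)
        return post_logit, span_logits

def joint_loss(post_logit, span_logits, post_label, span_labels, span_mask):
    # Post-level supervision always applies; span-level supervision is
    # added only where gold toxic-span annotations exist (span_mask
    # selects annotated, non-padding positions).
    bce = nn.functional.binary_cross_entropy_with_logits
    loss = bce(post_logit, post_label)
    if span_mask.any():
        loss = loss + bce(span_logits[span_mask], span_labels[span_mask])
    return loss

Because the post score is a hard maximum over span scores, gradients from the post-level loss flow only through the currently highest-scoring span, which is what pushes the model to place its mass on the span that actually justifies the toxicity decision.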