The use of attention mechanisms in deep learning approaches has become popular in natural language processing due to their outstanding performance. These mechanisms allow a model to weigh the importance of the elements of a sequence according to their context. However, this importance has so far been modeled independently, either between pairs of elements of a sequence (self-attention) or between a sequence and its application domain (contextual attention), leading to the loss of relevant information and limiting the representation of the sequences. To tackle these issues, we propose the self-contextualized attention mechanism, which overcomes the previous limitations by jointly considering the internal and contextual relationships between the elements of a sequence. The proposed mechanism was evaluated on four standard collections for the abusive language identification task, achieving encouraging results: it outperformed existing attention mechanisms and showed competitive performance with respect to state-of-the-art approaches.
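The abstract does not specify how the two attention signals are fused, so the following is only a minimal illustrative sketch of the general idea: pairwise (self-attention) scores between sequence elements are combined with per-element scores against a learned context vector before the softmax. All names (`self_contextualized_attention`, the additive fusion, the `tanh` scoring of the context vector) are assumptions for illustration, not the authors' actual formulation.

```python
import numpy as np

def softmax(z, axis=-1):
    # numerically stable softmax
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def self_contextualized_attention(X, Wq, Wk, u):
    """Illustrative sketch (not the paper's exact formulation):
    fuse pairwise self-attention scores with scores against a
    learned context vector u, then softmax and aggregate."""
    d = Wq.shape[1]
    # self-attention: pairwise scores between sequence elements
    S = (X @ Wq) @ (X @ Wk).T / np.sqrt(d)      # shape (n, n)
    # contextual attention: each element scored against context vector u
    c = np.tanh(X @ Wk) @ u                      # shape (n,)
    # assumed fusion: bias every row of the pairwise scores additively
    A = softmax(S + c[None, :], axis=-1)         # shape (n, n)
    return A @ X                                 # contextualized representations

rng = np.random.default_rng(0)
n, d = 5, 8
X = rng.standard_normal((n, d))
Wq = rng.standard_normal((d, d))
Wk = rng.standard_normal((d, d))
u = rng.standard_normal(d)
out = self_contextualized_attention(X, Wq, Wk, u)
print(out.shape)  # (5, 8)
```

In this sketch the contextual scores shift which columns every row of the attention matrix favors, so elements that matter for the domain context receive more weight even when their pairwise similarity is low.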