This paper presents our strategy to tackle the EACL WANLP-2021 Shared Task 2: Sarcasm and Sentiment Detection. One of the subtasks aims at developing a system that identifies whether a given Arabic tweet is sarcastic in nature or not, while the other aims to identify the sentiment of the Arabic tweet. We approach the task in two steps. The first step involves preprocessing the provided dataset by performing insertion, deletion, and segmentation operations on various parts of the text. The second step involves experimenting with multiple variants of two transformer-based models, AraELECTRA and AraBERT. Our final approach was ranked seventh and fourth in the Sarcasm and Sentiment Detection subtasks, respectively.
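The abstract describes a two-step pipeline of text preprocessing followed by fine-tuning pretrained Arabic transformers. The snippet below is a minimal sketch of what such a setup could look like using the Hugging Face Transformers library with a public AraBERT checkpoint; the specific regex cleanup rules, checkpoint names, and label counts are illustrative assumptions, not the authors' exact operations.

```python
import re
from transformers import AutoTokenizer, AutoModelForSequenceClassification

def preprocess(tweet: str) -> str:
    """Illustrative insertion/deletion-style cleanup: strip URLs, user
    mentions, and Arabic diacritics before tokenization (assumed steps)."""
    tweet = re.sub(r"https?://\S+", "", tweet)     # delete URLs
    tweet = re.sub(r"@\w+", "", tweet)             # delete user mentions
    tweet = re.sub(r"[\u064B-\u0652]", "", tweet)  # delete diacritics (tashkeel)
    return tweet.strip()

# Assumed public AraBERT checkpoint; an AraELECTRA checkpoint such as
# "aubmindlab/araelectra-base-discriminator" would be loaded the same way.
model_name = "aubmindlab/bert-base-arabertv02"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# Tokenize a cleaned tweet for binary sarcasm classification
# (0 = not sarcastic, 1 = sarcastic); the same setup with num_labels=3
# would cover the sentiment subtask's negative/neutral/positive labels.
inputs = tokenizer(preprocess("some Arabic tweet ..."), return_tensors="pt",
                   truncation=True, max_length=128)
logits = model(**inputs).logits
```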