The introduction of transformer-based language models has been a revolutionary step for natural language processing (NLP) research. These models, such as BERT, GPT, and ELECTRA, led to state-of-the-art performance on many NLP tasks. Most of these models were initially developed for English, with other languages following later. Recently, several Arabic-specific models have started emerging. However, there are few direct comparisons between these models. In this paper, we evaluate the performance of 24 of these models on Arabic sentiment and sarcasm detection. Our results show that the best-performing models are those trained on Arabic data only, including dialectal Arabic, and that use a larger number of parameters, such as the recently released MARBERT. However, we noticed that AraELECTRA is among the top-performing models while being much more efficient in its computational cost. Finally, the experiments on the AraGPT2 variants showed low performance compared to the BERT models, which indicates that they might not be suitable for classification tasks.
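The evaluation described above amounts to fine-tuning each pretrained model as a sequence classifier and scoring its predictions on the sentiment and sarcasm test sets. As a minimal illustration of the scoring step, the sketch below computes macro-averaged F1 in plain Python; the metric choice and label encoding here are assumptions for illustration, not details taken from this abstract.

```python
def macro_f1(gold, pred):
    """Macro-averaged F1 over the label set.

    Illustrative metric only: the abstract does not state which metric
    the paper reports. `gold` and `pred` are parallel label sequences,
    e.g. 0 = non-sarcastic, 1 = sarcastic (hypothetical encoding).
    """
    labels = set(gold) | set(pred)
    f1_scores = []
    for lab in labels:
        # Per-label confusion counts.
        tp = sum(1 for g, p in zip(gold, pred) if g == p == lab)
        fp = sum(1 for g, p in zip(gold, pred) if p == lab and g != lab)
        fn = sum(1 for g, p in zip(gold, pred) if g == lab and p != lab)
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1_scores.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    # Macro averaging weights every class equally, which matters for
    # imbalanced tasks such as sarcasm detection.
    return sum(f1_scores) / len(f1_scores)
```

Macro averaging is a common choice for such comparisons because it does not let a dominant class (e.g. non-sarcastic tweets) mask poor performance on the minority class.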