In this paper, we explore the effects of language variants, data sizes, and fine-tuning task types in Arabic pre-trained language models. To do so, we build three pre-trained language models across three variants of Arabic: Modern Standard Arabic (MSA), dialectal Arabic, and classical Arabic, in addition to a fourth language model which is pre-trained on a mix of the three. We also examine the importance of pre-training data size by building additional models that are pre-trained on a scaled-down set of the MSA variant. We compare our different models to each other, as well as to eight publicly available models by fine-tuning them on five NLP tasks spanning 12 datasets. Our results suggest that the variant proximity of pre-training data to fine-tuning data is more important than the pre-training data size. We exploit this insight in defining an optimized system selection model for the studied tasks.
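To make the fine-tuning setup concrete, below is a minimal sketch of how a pre-trained Arabic checkpoint could be fine-tuned on one of the classification tasks (e.g., sentiment analysis) with the Hugging Face Transformers library. The checkpoint identifier `some-org/arabic-bert-msa`, the toy examples, and the hyperparameters are placeholders for illustration only, not the paper's released models, datasets, or training configuration.

```python
# Minimal sketch: fine-tune a pre-trained Arabic encoder on a sentence-
# classification task. Checkpoint id and data are placeholders.
from datasets import Dataset
from transformers import (
    AutoTokenizer,
    AutoModelForSequenceClassification,
    TrainingArguments,
    Trainer,
)

MODEL_NAME = "some-org/arabic-bert-msa"  # placeholder checkpoint id

# Toy labelled examples standing in for a real task dataset.
train_data = Dataset.from_dict({
    "text": ["خدمة ممتازة", "تجربة سيئة جدا"],
    "label": [1, 0],
})

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)

def tokenize(batch):
    # Truncate/pad so every example fits the encoder's input length.
    return tokenizer(
        batch["text"], truncation=True, padding="max_length", max_length=128
    )

train_data = train_data.map(tokenize, batched=True)

# Add a fresh classification head on top of the pre-trained encoder.
model = AutoModelForSequenceClassification.from_pretrained(
    MODEL_NAME, num_labels=2
)

args = TrainingArguments(
    output_dir="out",
    num_train_epochs=3,
    per_device_train_batch_size=16,
    learning_rate=3e-5,
)

trainer = Trainer(model=model, args=args, train_dataset=train_data)
trainer.train()
```

Under the comparison described above, the same fine-tuning procedure would be repeated for each pre-trained variant (MSA, dialectal, classical, and mixed) and each task, so that differences in downstream scores can be attributed to the pre-training data rather than to the fine-tuning recipe.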