Pretrained multilingual language models have become a common tool in transferring NLP capabilities to low-resource languages, often with adaptations. In this work, we study the performance, extensibility, and interaction of two such adaptations: vocabulary augmentation and script transliteration. Our evaluations on part-of-speech tagging, universal dependency parsing, and named entity recognition in nine diverse low-resource languages uphold the viability of these approaches while raising new questions around how to optimally adapt multilingual models to low-resource settings.
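The abstract names the two adaptations without detailing them, so the following is a minimal sketch of what they can look like in practice. It assumes the Hugging Face `transformers` library and `bert-base-multilingual-cased` as the base model, and the added subword tokens and transliteration table are hypothetical placeholders; none of this is taken from the paper itself.

```python
# Minimal sketch (illustrative assumptions, not the authors' exact setup) of
# vocabulary augmentation and script transliteration for a pretrained
# multilingual model, using Hugging Face `transformers`.
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-multilingual-cased")

# Vocabulary augmentation: add target-language subwords that the original
# tokenizer splits poorly, then resize the embedding matrix so the new tokens
# get embeddings that are learned during continued pretraining or fine-tuning.
# The tokens below are hypothetical placeholders.
new_tokens = ["##ግን", "##ነው"]
tokenizer.add_tokens(new_tokens)
model.resize_token_embeddings(len(tokenizer))

# Script transliteration: map text in a script the model has rarely or never
# seen onto a script that is well covered in pretraining (e.g. Latin).
# The character table is a toy example, not a real transliteration scheme.
translit_table = str.maketrans({"ሀ": "ha", "ለ": "le", "መ": "me"})

def transliterate(text: str) -> str:
    return text.translate(translit_table)
```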