Multilingual pretrained language models are rapidly gaining popularity in NLP systems for non-English languages. Most of these models feature an important corpus sampling step in the process of accumulating training data in different languages, to ensure that the signal from better-resourced languages does not drown out poorly resourced ones. In this study, we train multiple multilingual recurrent language models, based on the ELMo architecture, and analyse both the effect of varying corpus size ratios on downstream performance and the performance difference between monolingual models for each language and broader multilingual language models. As part of this effort, we also make these trained models available for public use.
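The corpus sampling step described above is commonly implemented as exponentially smoothed sampling: each language's probability is proportional to its raw corpus share raised to a power α < 1, which boosts low-resource languages relative to their raw proportion. A minimal sketch of this technique follows; the corpus sizes and the exponent α = 0.7 are illustrative assumptions, not values from this study.

```python
def sampling_probabilities(corpus_sizes, alpha=0.7):
    """Per-language sampling probabilities p_i proportional to (n_i / N) ** alpha.

    alpha < 1 flattens the distribution, so low-resource languages are
    sampled more often than their raw corpus share would suggest.
    """
    total = sum(corpus_sizes.values())
    # Smooth each language's raw share of the combined corpus.
    weights = {lang: (n / total) ** alpha for lang, n in corpus_sizes.items()}
    # Renormalise so the probabilities sum to 1.
    z = sum(weights.values())
    return {lang: w / z for lang, w in weights.items()}

# Hypothetical sentence counts for three languages.
sizes = {"en": 1_000_000, "no": 50_000, "sv": 80_000}
probs = sampling_probabilities(sizes, alpha=0.7)
# The low-resource languages now receive a larger share of training
# batches than their raw corpus proportion.
```

With α = 1 this reduces to sampling proportionally to corpus size; with α → 0 it approaches uniform sampling over languages, so α trades off between the two extremes.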