It is now established that modern neural language models can be successfully trained on multiple languages simultaneously without changes to the underlying architecture, providing an easy way to adapt a variety of NLP models to low-resource languages. But what kind of knowledge is really shared among languages within these models? Does multilingual training mostly lead to an alignment of the lexical representation spaces, or does it also enable the sharing of purely grammatical knowledge? In this paper, we dissect different forms of cross-lingual transfer and look for their most determining factors, using a variety of models and probing tasks. We find that exposing our LMs to a related language does not always increase grammatical knowledge in the target language, and that optimal conditions for lexical-semantic transfer may not be optimal for syntactic transfer.
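To make the probing methodology the abstract alludes to more concrete, here is a minimal sketch (not the paper's actual setup): a simple linear probe is trained on frozen representations from a multilingual encoder to estimate how much of a grammatical property is linearly decodable in a target language. The model name, the toy number-agreement labels, and the example sentences are all illustrative assumptions.

```python
# Sketch of a probing experiment on a frozen multilingual LM.
# The probe is deliberately simple (linear), so its accuracy reflects what the
# representations encode rather than the probe's own capacity.
import torch
from transformers import AutoTokenizer, AutoModel
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

MODEL_NAME = "bert-base-multilingual-cased"  # assumed stand-in for "our LMs"
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
encoder = AutoModel.from_pretrained(MODEL_NAME)
encoder.eval()

def sentence_embedding(sentence: str) -> torch.Tensor:
    """Mean-pool the final-layer hidden states of the frozen encoder."""
    inputs = tokenizer(sentence, return_tensors="pt", truncation=True)
    with torch.no_grad():
        hidden = encoder(**inputs).last_hidden_state  # (1, seq_len, dim)
    return hidden.mean(dim=1).squeeze(0)

# Toy probing data: target-language sentences labelled with a grammatical
# feature (1 = plural subject, 0 = singular subject). Purely illustrative.
train_data = [("The dogs bark loudly.", 1), ("The dog barks loudly.", 0),
              ("The children play outside.", 1), ("The child plays outside.", 0)]
test_data = [("The cats sleep all day.", 1), ("The cat sleeps all day.", 0)]

X_train = torch.stack([sentence_embedding(s) for s, _ in train_data]).numpy()
y_train = [y for _, y in train_data]
X_test = torch.stack([sentence_embedding(s) for s, _ in test_data]).numpy()
y_test = [y for _, y in test_data]

probe = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("probe accuracy:", accuracy_score(y_test, probe.predict(X_test)))
```

Comparing such probe scores across models exposed to different auxiliary languages is one way to ask whether multilingual training transfers grammatical knowledge or mainly aligns lexical spaces.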