Recent studies have proposed different methods to improve multilingual word representations in contextualized settings, including techniques that align source and target embedding spaces. For contextualized embeddings, alignment becomes more complex because context must additionally be taken into account. In this work, we propose using Optimal Transport (OT) as an alignment objective during fine-tuning to further improve multilingual contextualized representations for downstream cross-lingual transfer. This approach does not require word-alignment pairs prior to fine-tuning, which may lead to sub-optimal matching; instead, it learns word alignments within context in an unsupervised manner. It also allows different types of mappings thanks to soft matching between source and target sentences. We benchmark our proposed method on two tasks (XNLI and XQuAD) and achieve improvements over baselines as well as competitive results compared to similar recent work.
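To make the objective concrete, below is a minimal PyTorch sketch of an entropic-regularized OT alignment loss between the contextual token embeddings of a parallel sentence pair, in the spirit of the approach described above. This is not the paper's released implementation: the function names (sinkhorn, ot_alignment_loss), the hyperparameters (eps, n_iters), the uniform token marginals, and the choice to detach the transport plan are all illustrative assumptions.

```python
# Sketch of an OT alignment objective over contextual embeddings
# (illustrative; not the authors' code). Cost = cosine distance;
# the soft alignment plan is approximated with Sinkhorn iterations.
import torch

def sinkhorn(cost, eps=0.1, n_iters=50):
    """Entropic-regularized OT plan for one cost matrix (n_src x n_tgt)."""
    n_src, n_tgt = cost.shape
    # Assumed uniform marginals over source and target tokens.
    mu = torch.full((n_src,), 1.0 / n_src, device=cost.device)
    nu = torch.full((n_tgt,), 1.0 / n_tgt, device=cost.device)
    K = torch.exp(-cost / eps)  # Gibbs kernel
    u = torch.ones_like(mu)
    for _ in range(n_iters):    # alternating Sinkhorn scaling updates
        v = nu / (K.t() @ u)
        u = mu / (K @ v)
    return u.unsqueeze(1) * K * v.unsqueeze(0)  # transport plan P

def ot_alignment_loss(src_emb, tgt_emb):
    """OT loss <P, C> between source/target contextual token embeddings."""
    src = torch.nn.functional.normalize(src_emb, dim=-1)
    tgt = torch.nn.functional.normalize(tgt_emb, dim=-1)
    cost = 1.0 - src @ tgt.t()  # cosine distance matrix C
    with torch.no_grad():       # plan treated as a fixed soft alignment
        plan = sinkhorn(cost)
    return (plan * cost).sum()

# Toy usage: 7 source tokens vs. 9 target tokens, 768-dim encoder states.
src = torch.randn(7, 768, requires_grad=True)
tgt = torch.randn(9, 768, requires_grad=True)
loss = ot_alignment_loss(src, tgt)
loss.backward()  # gradients flow into the embeddings via the cost matrix
```

Detaching the plan so gradients flow only through the cost matrix is one common design choice; an alternative is to backpropagate through the Sinkhorn iterations themselves. In a fine-tuning setup, this loss would be computed on encoder outputs for parallel sentences and combined with the task objective.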