Human knowledge is collectively encoded in the roughly 6500 languages spoken around the world, but it is not distributed equally across languages. Hence, for information-seeking question answering (QA) systems to adequately serve speakers of all languages, they need to operate cross-lingually. In this work, we investigate the capabilities of multilingually pretrained language models on cross-lingual QA. We find that explicitly aligning the representations across languages with a post-hoc finetuning step generally leads to improved performance. We additionally investigate the effect of data size as well as the language choice in this finetuning step, and also release a dataset for evaluating cross-lingual QA systems.
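The abstract does not spell out the alignment objective, so the following is only a minimal sketch of what a post-hoc cross-lingual alignment finetuning step over a multilingual encoder could look like. The model name (xlm-roberta-base), the mean-pooled sentence representations, the MSE alignment loss, the learning rate, and the toy parallel pairs are all illustrative assumptions, not the paper's actual recipe.

```python
# Hedged sketch: post-hoc cross-lingual alignment finetuning (assumptions noted above).
# Pulls mean-pooled representations of parallel (translated) sentences together
# with an MSE loss on top of a multilingual encoder.
import torch
from torch import nn
from transformers import AutoModel, AutoTokenizer

model_name = "xlm-roberta-base"  # assumption: any multilingual encoder could be used
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)  # illustrative hyperparameter

def mean_pool(last_hidden_state, attention_mask):
    """Average token embeddings, ignoring padding tokens."""
    mask = attention_mask.unsqueeze(-1).float()
    return (last_hidden_state * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1e-9)

# Toy parallel data; in practice this would be a large set of sentence pairs.
parallel_pairs = [
    ("Where is the nearest hospital?", "أين أقرب مستشفى؟"),
    ("The capital of France is Paris.", "عاصمة فرنسا هي باريس."),
]

model.train()
for src_text, tgt_text in parallel_pairs:
    src = tokenizer(src_text, return_tensors="pt", truncation=True)
    tgt = tokenizer(tgt_text, return_tensors="pt", truncation=True)

    src_emb = mean_pool(model(**src).last_hidden_state, src["attention_mask"])
    tgt_emb = mean_pool(model(**tgt).last_hidden_state, tgt["attention_mask"])

    # Alignment objective: translated sentences should map to nearby points.
    loss = nn.functional.mse_loss(src_emb, tgt_emb)

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

After such an alignment step, the encoder would be finetuned as usual on the downstream QA task; how much parallel data is used and in which languages are exactly the factors the abstract says the work investigates.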