Dense retrieval has shown great success for passage ranking in English. However, its effectiveness for non-English languages remains unexplored due to limitations in training resources. In this work, we explore different transfer techniques for document ranking from English annotations to non-English languages. Our experiments reveal that zero-shot model-based transfer using mBERT improves search quality. We find that weakly-supervised target-language transfer is competitive with generation-based target-language transfer, which requires translation models.
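The zero-shot model-based transfer described above relies on a shared multilingual embedding space: queries and passages, regardless of language, are encoded into dense vectors, and relevance is scored by vector similarity. The following is a minimal sketch of that retrieval step, using toy fixed vectors in place of an actual mBERT encoder (the vectors and passage IDs are illustrative assumptions, not the paper's data):

```python
import numpy as np

# Toy embeddings standing in for mBERT sentence vectors. In the actual
# setting, a multilingual encoder fine-tuned on English relevance data
# would produce these, enabling zero-shot transfer to other languages.
query_vec = np.array([0.9, 0.1, 0.0])
passage_vecs = {
    "p1": np.array([0.8, 0.2, 0.1]),  # close to the query (relevant)
    "p2": np.array([0.1, 0.9, 0.0]),  # off-topic
    "p3": np.array([0.0, 0.1, 0.9]),  # off-topic
}

def rank(query, passages):
    """Rank passage IDs by dense dot-product similarity to the query."""
    scores = {pid: float(np.dot(query, vec)) for pid, vec in passages.items()}
    return sorted(scores, key=scores.get, reverse=True)

print(rank(query_vec, passage_vecs))  # p1 ranked first
```

In a real system the dot product would run over millions of pre-computed passage vectors via an approximate nearest-neighbor index rather than a Python dictionary.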