We present an efficient training approach to text retrieval with dense representations that applies knowledge distillation using the ColBERT late-interaction ranking model. Specifically, we propose to transfer the knowledge from a bi-encoder teacher to a student by distilling knowledge from ColBERT's expressive MaxSim operator into a simple dot product. The advantage of the bi-encoder teacher--student setup is that we can efficiently add in-batch negatives during knowledge distillation, enabling richer interactions between teacher and student models. In addition, using ColBERT as the teacher reduces training cost compared to a full cross-encoder. Experiments on the MS MARCO passage and document ranking tasks and data from the TREC 2019 Deep Learning Track demonstrate that our approach helps models learn robust representations for dense retrieval effectively and efficiently.
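As a rough illustration of the setup described above (not the authors' released implementation), the following PyTorch sketch shows how a ColBERT-style MaxSim teacher score and a dot-product student score over the same in-batch query-passage pairs might be tied together with a distillation loss. The function names (maxsim_scores, dot_scores, tct_distillation_loss), the tensor shapes, and the choice of KL divergence over in-batch softmax distributions are illustrative assumptions, and details such as ColBERT's query augmentation are omitted.

```python
import torch
import torch.nn.functional as F

def maxsim_scores(q_tok, p_tok, p_mask):
    """Teacher (ColBERT-style) score: for each query token, take the max
    similarity over passage tokens, then sum over query tokens.
    q_tok:  [B_q, Lq, D]  query token embeddings
    p_tok:  [B_p, Lp, D]  passage token embeddings
    p_mask: [B_p, Lp]     1 for real tokens, 0 for padding
    Returns a [B_q, B_p] score matrix over all in-batch pairs.
    """
    sim = torch.einsum("qid,pjd->qpij", q_tok, p_tok)           # [B_q, B_p, Lq, Lp]
    sim = sim.masked_fill(p_mask[None, :, None, :] == 0, -1e4)  # ignore padding
    return sim.max(dim=-1).values.sum(dim=-1)                   # MaxSim, then sum

def dot_scores(q_vec, p_vec):
    """Student (bi-encoder) score: a single dot product per query-passage pair."""
    return q_vec @ p_vec.t()                                     # [B_q, B_p]

def tct_distillation_loss(q_tok, p_tok, p_mask, q_vec, p_vec, temperature=1.0):
    """Distill the teacher's softmax distribution over all in-batch passages
    into the student's distribution (in-batch negatives come along for free)."""
    with torch.no_grad():
        t_logits = maxsim_scores(q_tok, p_tok, p_mask) / temperature
    s_logits = dot_scores(q_vec, p_vec) / temperature
    return F.kl_div(
        F.log_softmax(s_logits, dim=-1),
        F.softmax(t_logits, dim=-1),
        reduction="batchmean",
    )
```

Because the teacher only needs token embeddings rather than joint query-passage encoding, scoring every query against every passage in the batch stays cheap, which is what makes adding in-batch negatives during distillation practical compared to a full cross-encoder teacher.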