The embedding-based large-scale query-document retrieval problem is a hot topic in the information retrieval (IR) field. Considering that pre-trained language models like BERT have achieved great success in a wide variety of NLP tasks, we present a QuadrupletBERT model for effective and efficient retrieval in this paper. Unlike most existing BERT-style retrieval models, which focus only on the ranking phase of retrieval systems, our model makes considerable improvements to the retrieval phase and leverages the distances between simple negative and hard negative instances to obtain better embeddings. Experimental results demonstrate that our QuadrupletBERT achieves state-of-the-art results on embedding-based large-scale retrieval tasks.
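The abstract does not state the training objective explicitly; a minimal sketch of one plausible reading, assuming a two-margin quadruplet loss over embedding distances in which the positive document must sit closer to the query than the hard negative, and the hard negative closer than the simple (easy) negative (the margin values and Euclidean distance are illustrative assumptions, not from the paper):

```python
import math

def quadruplet_loss(q, d_pos, d_hard, d_easy, margin1=0.5, margin2=0.2):
    """Hypothetical quadruplet margin loss over embedding vectors.

    Enforces two orderings around the query embedding q:
      dist(q, d_pos)  + margin1 <= dist(q, d_hard)   (positive beats hard negative)
      dist(q, d_hard) + margin2 <= dist(q, d_easy)   (hard negative beats easy negative)
    so simple and hard negatives are pushed to different distances.
    """
    loss_pos_hard = max(0.0, math.dist(q, d_pos) - math.dist(q, d_hard) + margin1)
    loss_hard_easy = max(0.0, math.dist(q, d_hard) - math.dist(q, d_easy) + margin2)
    return loss_pos_hard + loss_hard_easy

# When both orderings hold with margin, the loss is zero:
print(quadruplet_loss([0, 0], [0.1, 0], [1, 0], [3, 0]))  # 0.0
```

The second hinge term is what distinguishes this from a plain triplet loss: it explicitly separates the hard-negative band from the easy-negative band rather than treating all negatives alike.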