Current embedding-based large-scale retrieval models are trained with 0-1 hard labels that indicate whether a query is relevant to a document, ignoring the rich information of the relevance degree. This paper proposes to improve embedding-based retrieval from the perspective of better characterizing the query-document relevance degree by introducing label enhancement (LE) for the first time. To generate label distributions in the retrieval scenario, we design a novel and effective supervised LE method that incorporates prior knowledge from dynamic term weighting methods into contextual embeddings. By training models with the generated label distribution as auxiliary supervision information, our method significantly outperforms four competitive existing retrieval models and their counterparts equipped with two alternative LE techniques. The superiority can be easily observed on English and Chinese large-scale retrieval tasks under both standard and cold-start settings.
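The abstract does not spell out the training objective, but the idea of using a generated label distribution as auxiliary supervision can be illustrated with a minimal sketch: a standard 0-1 (cross-entropy) retrieval loss combined with a KL-divergence term toward soft labels derived from term-weighting scores. The score shapes, the temperature, and the mixing weight alpha below are assumptions for illustration, not the paper's actual implementation.

    import torch
    import torch.nn.functional as F

    def soft_label_distribution(term_weight_scores: torch.Tensor, temperature: float = 1.0) -> torch.Tensor:
        # Hypothetical: turn per-candidate relevance scores from a dynamic
        # term-weighting scheme (shape: batch x num_candidates) into a label
        # distribution over candidate documents via a temperature softmax.
        return F.softmax(term_weight_scores / temperature, dim=-1)

    def retrieval_loss(query_emb, doc_embs, hard_labels, soft_labels, alpha: float = 0.5):
        # query_emb:   (batch, dim) query embeddings
        # doc_embs:    (batch, num_candidates, dim) candidate document embeddings
        # hard_labels: (batch,) index of the relevant document per query
        # soft_labels: (batch, num_candidates) generated label distribution
        # Dot-product relevance scores between each query and its candidates.
        scores = torch.einsum("bd,bcd->bc", query_emb, doc_embs)

        # Standard hard-label objective (cross-entropy over candidates).
        hard_loss = F.cross_entropy(scores, hard_labels)

        # Auxiliary objective: match the generated label distribution.
        log_probs = F.log_softmax(scores, dim=-1)
        soft_loss = F.kl_div(log_probs, soft_labels, reduction="batchmean")

        return hard_loss + alpha * soft_loss

In this sketch, setting alpha to zero recovers ordinary hard-label training, so the label-distribution term acts purely as additional supervision.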