The semantic matching capabilities of neural information retrieval can ameliorate the synonymy and polysemy problems of symbolic approaches. However, the dense representations of neural models are more suitable for re-ranking, due to their inefficiency. Sparse representations, either in symbolic or latent form, are more efficient with an inverted index. Taking the merits of both sparse and dense representations, we propose an ultra-high dimensional (UHD) representation scheme equipped with directly controllable sparsity. UHD's large capacity and minimal noise and interference among the dimensions allow for binarized representations, which are highly efficient for storage and search. Also proposed is a bucketing method, where the embeddings from multiple layers of BERT are selected/merged to represent diverse linguistic aspects. We test our models with MS MARCO and TREC CAR, showing that our models outperform other sparse models.
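As an illustration of why binarized sparse vectors pair well with an inverted index (a minimal sketch, not the paper's implementation; the dimension count, threshold, and document contents below are hypothetical), each document can be stored as the set of its active dimensions, and query-document scoring reduces to counting shared active dimensions via index lookups:

```python
from collections import defaultdict

def binarize(vec, threshold=0.0):
    """Represent a binary UHD vector sparsely as the set of
    indices of its active (above-threshold) dimensions."""
    return {i for i, v in enumerate(vec) if v > threshold}

def build_inverted_index(docs):
    """Map each active dimension to the set of documents containing it."""
    index = defaultdict(set)
    for doc_id, active_dims in docs.items():
        for dim in active_dims:
            index[dim].add(doc_id)
    return index

def search(index, query_dims):
    """Score documents by the binary dot product with the query,
    i.e. the number of active dimensions they share with it."""
    scores = defaultdict(int)
    for dim in query_dims:
        for doc_id in index.get(dim, ()):
            scores[doc_id] += 1
    return sorted(scores.items(), key=lambda kv: -kv[1])

# Toy example: sparse activations in a small 16-dimensional space.
docs = {
    "d1": binarize([0, .9, 0, 0, .7, 0, 0, 0, 0, 0, .4, 0, 0, 0, 0, 0]),
    "d2": binarize([.8, 0, 0, 0, 0, 0, .5, 0, 0, 0, .3, 0, 0, 0, 0, 0]),
}
index = build_inverted_index(docs)
query = binarize([0, .6, 0, 0, .5, 0, 0, 0, 0, 0, .2, 0, 0, 0, 0, 0])
results = search(index, query)  # d1 shares 3 dims with the query, d2 shares 1
```

Only dimensions active in the query are ever touched, so retrieval cost scales with the query's sparsity rather than the full dimensionality, which is what makes an ultra-high dimensional space practical at search time.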