Complex question answering often requires finding a reasoning chain that consists of multiple evidence pieces. Current approaches incorporate the strengths of structured knowledge and unstructured text, assuming text corpora are semi-structured. Building on dense retrieval methods, we propose a new multi-step retrieval approach (BeamDR) that iteratively forms an evidence chain through beam search in dense representations. When evaluated on multi-hop question answering, BeamDR is competitive with state-of-the-art systems, without using any semi-structured information. Through query composition in dense space, BeamDR captures the implicit relationships between evidence pieces in the reasoning chain. The code is available at https://github.com/henryzhao5852/BeamDR.
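The retrieval loop described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the hash-based toy encoder and the query composition by string concatenation are assumptions made for self-containment (BeamDR composes queries in the learned dense space with a trained encoder), and all names (`encode`, `beam_search_retrieve`) are hypothetical.

```python
import hashlib
import numpy as np


def encode(text, dim=8):
    """Toy deterministic stand-in for a dense encoder: maps text to a unit vector."""
    seed = int.from_bytes(hashlib.md5(text.encode()).digest()[:4], "big")
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(dim)
    return v / np.linalg.norm(v)


def beam_search_retrieve(question, corpus, hops=2, beam_size=2):
    """Iteratively build evidence chains via beam search over inner-product scores.

    At each hop, every partial chain is expanded with each unused passage,
    the query is re-composed from the question plus the evidence so far,
    and the top `beam_size` chains by cumulative score are kept.
    """
    passage_vecs = {p: encode(p) for p in corpus}
    # Each beam: (evidence chain, current query vector, cumulative score).
    beams = [([], encode(question), 0.0)]
    for _ in range(hops):
        candidates = []
        for chain, qvec, score in beams:
            for p, pvec in passage_vecs.items():
                if p in chain:
                    continue  # a passage appears at most once per chain
                new_score = score + float(qvec @ pvec)
                # Compose the next-hop query from question + retrieved evidence.
                new_qvec = encode(question + " " + " ".join(chain + [p]))
                candidates.append((chain + [p], new_qvec, new_score))
        candidates.sort(key=lambda c: c[2], reverse=True)
        beams = candidates[:beam_size]
    return [chain for chain, _, _ in beams]
```

In the actual system the re-encoding step is what lets the model capture implicit relationships between evidence pieces: the query representation after each hop is conditioned on the evidence retrieved so far, so the next hop can target passages that are only relevant given that context.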