We tackle multiple-choice question answering. Acquiring commonsense knowledge related to the question and options facilitates recognition of the correct answer. However, current reasoning models suffer from noise in the retrieved knowledge. In this paper, we propose a novel encoding method that performs interception and soft filtering. This contributes to harvesting and absorbing representative information with less interference from noise. We experiment on CommonsenseQA. Experimental results show that our method yields substantial and consistent improvements over strong BERT-, RoBERTa-, and ALBERT-based baselines.
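The "soft filtering" idea above can be illustrated with a minimal sketch: each retrieved knowledge embedding is down-weighted by a learned relevance gate in (0, 1) computed against the question, so noisy facts contribute less to the pooled representation. The gate parameterization (`W`, `b`), the function names, and the pooling choice here are illustrative assumptions, not the paper's actual architecture.

```python
# Minimal sketch of gate-based soft filtering of retrieved knowledge.
# Assumed setup (not from the paper): a bilinear gate scores each
# knowledge vector against the question; gated vectors are mean-pooled.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def soft_filter(knowledge_vecs, question_vec, W, b=0.0):
    """Weight each knowledge vector by a relevance gate in (0, 1),
    then mean-pool the gated vectors into one representation."""
    gates = sigmoid(knowledge_vecs @ W @ question_vec + b)  # (num_facts,)
    filtered = gates[:, None] * knowledge_vecs              # noise is down-weighted
    return filtered.mean(axis=0), gates

rng = np.random.default_rng(0)
d = 8
K = rng.normal(size=(5, d))        # 5 retrieved knowledge embeddings (hypothetical)
q = rng.normal(size=d)             # question embedding (hypothetical)
W = rng.normal(size=(d, d)) * 0.1  # gate parameters (hypothetical)
pooled, gates = soft_filter(K, q, W)
print(pooled.shape, gates.shape)
```

In an actual model the gate parameters would be trained end-to-end with the QA objective, so that gates for distracting retrieved facts are pushed toward zero.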