Numerical reasoning skills are essential for complex question answering (CQA) over text. This requires operations including counting, comparison, addition, and subtraction. A successful approach to CQA over text, Neural Module Networks (NMNs), follows the programmer-interpreter paradigm and leverages specialised modules to perform compositional reasoning. However, the NMNs framework does not consider the relationship between numbers and entities in either questions or paragraphs. We propose effective techniques to improve NMNs' numerical reasoning capabilities by making the interpreter question-aware and by capturing the relationship between entities and numbers. On the same subset of the DROP dataset for CQA over text, experimental results show that our additions outperform the original NMNs by 3.0 points in overall F1 score.
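To make the programmer-interpreter paradigm concrete, below is a minimal sketch of how an NMN-style pipeline composes specialised modules to answer a counting question. The module names (find, filter_by, count), the toy fact representation, and the example question are illustrative assumptions for this sketch only, not the actual module inventory of NMNs, whose modules are learned neural components rather than rule-based functions.

```python
# Minimal sketch of the programmer-interpreter paradigm behind NMNs,
# using toy rule-based modules in place of learned neural ones.
# Module names and the example question are illustrative assumptions.

from typing import Callable

# A "paragraph" reduced to pre-extracted (entity, number) facts.
FACTS = [
    ("touchdown pass", 25),
    ("touchdown pass", 42),
    ("field goal", 38),
]

def find(facts, entity: str):
    """Select facts mentioning the queried entity."""
    return [f for f in facts if f[0] == entity]

def filter_by(facts, predicate: Callable):
    """Keep facts whose associated number satisfies the predicate."""
    return [f for f in facts if predicate(f[1])]

def count(facts):
    """Count the surviving facts."""
    return len(facts)

# The "programmer" would normally be a model that maps the question
# to a module program; here the program is hand-written.
# Question: "How many touchdown passes were longer than 30 yards?"
program = [
    (find, {"entity": "touchdown pass"}),
    (filter_by, {"predicate": lambda n: n > 30}),
    (count, {}),
]

# The "interpreter" executes the modules compositionally, threading
# each module's output into the next module's input.
result = FACTS
for module, kwargs in program:
    result = module(result, **kwargs)

print(result)  # -> 1
```

The sketch highlights the point the abstract makes: each module only sees the intermediate result handed to it, so unless entity-number relationships are represented in those intermediates (as in the toy facts above), the interpreter cannot exploit them.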