Knowledge Base Question Answering (KBQA) aims to answer natural language questions posed over knowledge bases (KBs). This paper targets empowering IR-based KBQA models with numerical reasoning ability for answering ordinal-constrained questions. A major challenge is the lack of explicit annotations about numerical properties. To address this challenge, we propose a numerical reasoning model consisting of NumGNN and NumTransformer, pretrained under the guidance of explicit self-supervision signals. The two modules are pretrained to encode the magnitude and ordinal properties of numbers, respectively, and can serve as model-agnostic plugins for any IR-based KBQA model to enhance its numerical reasoning ability. Extensive experiments on two KBQA benchmarks verify the effectiveness of our method in enhancing the numerical reasoning ability of IR-based KBQA models.
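The key idea of "explicit self-supervision signals" is that ordinal and magnitude labels can be derived automatically from the numbers themselves, with no human annotation. As a minimal sketch (the function names and bucketing scheme below are illustrative assumptions, not the paper's actual implementation), one can generate pairwise ordering labels for an ordinal-property module and coarse quantile labels for a magnitude module:

```python
import random

def ordinal_pairs(values, n_pairs=4, seed=0):
    """Explicit self-supervision for ordinal properties: the
    comparison label comes for free from the numbers themselves."""
    rng = random.Random(seed)
    pairs = []
    for _ in range(n_pairs):
        a, b = rng.sample(values, 2)
        pairs.append((a, b, int(a > b)))  # label 1 iff a > b
    return pairs

def magnitude_buckets(values, n_buckets=4):
    """Explicit self-supervision for magnitude: assign each value
    a coarse rank-quantile bucket as a relative-size target."""
    ranked = sorted(values)
    return {v: min(ranked.index(v) * n_buckets // len(values), n_buckets - 1)
            for v in values}
```

Signals of this kind could then supervise the NumGNN and NumTransformer modules during pretraining, e.g. a pairwise classifier over number embeddings for the ordinal signal and a bucket classifier for the magnitude signal.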