The key challenge of question answering over knowledge bases (KBQA) is the inconsistency between natural language questions and reasoning paths in the knowledge base (KB). Recent graph-based KBQA methods are good at capturing the topological structure of the graph but often ignore the textual information carried by the nodes and edges. Meanwhile, pre-trained language models learn massive open-world knowledge from large corpora, but this knowledge is in natural language form and is not structured. To bridge the gap between natural language and the structured KB, we propose three relation learning tasks for BERT-based KBQA: relation extraction, relation matching, and relation reasoning. Through relation-augmented training, the model learns to align natural language expressions to relations in the KB, as well as to reason over missing connections in the KB. Experiments on WebQSP show that our method consistently outperforms other baselines, especially when the KB is incomplete.
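To make the setup concrete, the sketch below illustrates one of the three auxiliary tasks, relation matching, framed as BERT sentence-pair scoring: the question and a verbalized KB relation are jointly encoded, and a binary head scores whether the relation lies on the reasoning path. This is a minimal sketch under our own assumptions (model checkpoint, example question, relation verbalizations, and the label convention 1 = match are all illustrative), not the authors' released implementation.

```python
# Minimal sketch of relation matching as BERT sentence-pair scoring.
# Assumptions (not from the paper): bert-base-uncased checkpoint, a binary
# classification head, and label index 1 meaning "relation matches question".
import torch
from transformers import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)
model.eval()

question = "where did harper lee attend high school?"
candidate_relations = [
    "education institution",  # verbalized form of a KB relation (hypothetical)
    "place of birth",
    "author of",
]

with torch.no_grad():
    for relation in candidate_relations:
        # Encode (question, relation) as a sentence pair: [CLS] q [SEP] r [SEP]
        inputs = tokenizer(question, relation, return_tensors="pt", truncation=True)
        logits = model(**inputs).logits
        match_score = torch.softmax(logits, dim=-1)[0, 1].item()
        print(f"{relation:25s} match score: {match_score:.3f}")
```

In practice such a head would be fine-tuned jointly with the other two tasks (relation extraction and relation reasoning) so that the question encoder and the relation representations share the same space; the snippet only shows the scoring interface at inference time.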