Recently, end-to-end (E2E) trained models for question answering over knowledge graphs (KGQA) have delivered promising results using only a weakly supervised dataset. However, these models are trained and evaluated in a setting where hand-annotated question entities are supplied to the model, leaving the important and non-trivial task of entity resolution (ER) outside the scope of E2E learning. In this work, we extend the boundaries of E2E learning for KGQA to include the training of an ER component. Our model only needs the question text and the answer entities to train, and delivers a stand-alone QA model that does not require an additional ER component to be supplied during runtime. Our approach is fully differentiable, thanks to its reliance on a recent method for building differentiable KGs (Cohen et al., 2020). We evaluate our E2E trained model on two public datasets and show that it comes close to baseline models that use hand-annotated entities.
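To make the "fully differentiable" claim concrete, the sketch below illustrates the general differentiable-KG idea from Cohen et al. (2020) that the abstract relies on: entity sets are vectors over the KG's entities, following a relation is a matrix product, and a soft entity-resolution head replaces hand-annotated question entities, so answer supervision alone can train every component. This is a minimal, hypothetical single-hop illustration, not the paper's actual architecture; names such as `answer_distribution`, `entity_scorer`, and the dense `M_rel` matrices are assumptions made for brevity (a real system would use a sparse, reified KB representation).

```python
import torch

num_entities, num_relations, hidden = 1000, 50, 128

# One |E| x |E| adjacency matrix per relation; random and dense here purely
# for illustration (Cohen et al., 2020 use a sparse reified KB instead).
M_rel = (torch.rand(num_relations, num_entities, num_entities) < 0.01).float()

question_encoder = torch.nn.GRU(hidden, hidden, batch_first=True)
entity_scorer = torch.nn.Linear(hidden, num_entities)     # soft entity-resolution head
relation_scorer = torch.nn.Linear(hidden, num_relations)  # soft relation-selection head

def answer_distribution(question_embeddings: torch.Tensor) -> torch.Tensor:
    """Map encoded question tokens to a differentiable score vector over entities."""
    _, h = question_encoder(question_embeddings)   # h: (1, batch, hidden)
    h = h.squeeze(0)                               # (batch, hidden)

    # Soft ER: a distribution over KG entities, learned from QA supervision
    # alone, instead of hand-annotated question entities.
    x = torch.softmax(entity_scorer(h), dim=-1)    # (batch, |E|)

    # Soft relation choice, then one differentiable "follow" step:
    # answers = sum_r p(r) * (x @ M_r)
    p_rel = torch.softmax(relation_scorer(h), dim=-1)    # (batch, |R|)
    followed = torch.einsum('be,ref->brf', x, M_rel)     # (batch, |R|, |E|)
    return torch.einsum('br,brf->bf', p_rel, followed)   # (batch, |E|)

# Training needs only question text and gold answer entities:
q = torch.randn(1, 6, hidden)            # toy encoding of a 6-token question
scores = answer_distribution(q)
loss = -torch.log(scores[0, 42] + 1e-9)  # pretend entity 42 is the gold answer
loss.backward()                          # gradients reach ER, relation choice, and the KG follow
```

Because the entity-resolution step is a soft distribution rather than a hard lookup, the same gradient signal that trains relation following also trains ER, which is the boundary the abstract says E2E learning is extended across.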