Recent QA with logical reasoning questions requires passage-level relations among the sentences. However, current approaches still focus on sentence-level relations interacting among tokens. In this work, we explore aggregating passage-level clues for solving logical reasoning QA by using discourse-based information. We propose a discourse-aware graph network (DAGN) that reasons relying on the discourse structure of the texts. The model encodes discourse information as a graph with elementary discourse units (EDUs) and discourse relations, and learns the discourse-aware features via a graph network for downstream QA tasks. Experiments are conducted on two logical reasoning QA datasets, ReClor and LogiQA, and our proposed DAGN achieves competitive results. The source code is available at https://github.com/Eleanor-H/DAGN.
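To make the described pipeline concrete, here is a minimal sketch (not the authors' implementation) of the idea the abstract outlines: segment a passage into elementary discourse units (EDUs) at discourse connectives, connect EDUs into a graph standing in for discourse relations, and update EDU features with one graph-network message-passing step. The connective list, the adjacency rule, and the feature update below are illustrative assumptions; DAGN's actual construction is in the linked repository.

```python
# Sketch of a discourse-graph pipeline: EDU segmentation -> graph -> message passing.
# All names and rules here are illustrative assumptions, not DAGN's exact code.

import re
import numpy as np

# Hypothetical subset of discourse connectives used as EDU delimiters.
CONNECTIVES = ["because", "therefore", "however", "if", "although", "so"]

def split_into_edus(passage: str) -> list[str]:
    """Split a passage into EDU-like spans at punctuation and connectives."""
    pattern = r"(?:[.;,]|\b(?:" + "|".join(CONNECTIVES) + r")\b)"
    spans = [s.strip() for s in re.split(pattern, passage, flags=re.IGNORECASE)]
    return [s for s in spans if s]

def build_edu_graph(edus: list[str]) -> np.ndarray:
    """Adjacency matrix linking consecutive EDUs (a stand-in for discourse
    relations); self-loops keep each node's own features. Row-normalised."""
    n = len(edus)
    adj = np.eye(n)
    for i in range(n - 1):
        adj[i, i + 1] = adj[i + 1, i] = 1.0
    return adj / adj.sum(axis=1, keepdims=True)

def graph_layer(features: np.ndarray, adj: np.ndarray, weight: np.ndarray) -> np.ndarray:
    """One message-passing step: aggregate neighbour features, project, ReLU."""
    return np.maximum(adj @ features @ weight, 0.0)

if __name__ == "__main__":
    passage = ("The committee rejected the proposal because it lacked funding. "
               "However, a revised version may succeed if sponsors are found.")
    edus = split_into_edus(passage)
    adj = build_edu_graph(edus)
    rng = np.random.default_rng(0)
    feats = rng.normal(size=(len(edus), 16))   # placeholder EDU encodings
    weight = rng.normal(size=(16, 16))
    updated = graph_layer(feats, adj, weight)  # discourse-aware EDU features
    print(len(edus), "EDUs ->", updated.shape)
```

In DAGN the EDU encodings would come from a pretrained encoder and the graph edges from actual discourse relations; the sketch only shows how passage-level structure can be aggregated before a downstream QA head.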