The last several years have seen intensive interest in exploring neural-network-based models for machine comprehension (MC) and question answering (QA). In this paper, we approach these problems by closely modelling questions within a neural network framework. We first introduce syntactic information to help encode questions. We then treat the different types of questions and the information shared among them as an adaptation task and propose adaptation models for them. On the Stanford Question Answering Dataset (SQuAD), we show that these approaches can help attain better results over a competitive baseline.
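A minimal sketch of the general idea of injecting syntactic information into a question encoder: part-of-speech tag embeddings are concatenated with word embeddings before a BiLSTM. This is not the paper's exact architecture; the vocabulary sizes, dimensions, and class name are illustrative assumptions.

```python
import torch
import torch.nn as nn

class SyntaxAwareQuestionEncoder(nn.Module):
    """Encodes a question from word ids plus POS-tag ids (hypothetical setup)."""
    def __init__(self, vocab_size=10000, pos_size=50, word_dim=100, pos_dim=20, hidden=64):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, word_dim)
        self.pos_emb = nn.Embedding(pos_size, pos_dim)   # syntactic feature
        self.encoder = nn.LSTM(word_dim + pos_dim, hidden,
                               batch_first=True, bidirectional=True)

    def forward(self, word_ids, pos_ids):
        # Concatenate lexical and syntactic embeddings token by token.
        x = torch.cat([self.word_emb(word_ids), self.pos_emb(pos_ids)], dim=-1)
        outputs, _ = self.encoder(x)   # contextualized question representation
        return outputs

# Toy usage: a batch of one question with 5 tokens.
enc = SyntaxAwareQuestionEncoder()
words = torch.randint(0, 10000, (1, 5))
tags = torch.randint(0, 50, (1, 5))
print(enc(words, tags).shape)   # torch.Size([1, 5, 128])
```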
Question answering has emerged as an intuitive way of querying structured data sources and has seen significant advances over the years. In this article, we provide an overview of these recent advances, focusing on neural network base
Natural language understanding (NLU) of text is a fundamental challenge in AI, and it has received significant attention throughout the history of NLP research. This primary goal has been studied under different tasks, such as Question Answering (QA)
Knowledge-dependent tasks typically use two sources of knowledge: parametric, learned at training time, and contextual, given as a passage at inference time. To understand how models use these sources together, we formalize the problem of knowledge c
Knowledge graph question answering is an important technology in intelligent human-robot interaction, which aims to automatically answer natural language questions posed by humans over a given knowledge graph. For the multi-relation question with hi
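A small sketch of what answering a multi-relation (multi-hop) question over a knowledge graph amounts to: the graph is a set of (subject, relation, object) triples, and the answer is found by following a chain of relations. The toy graph and the hand-written relation path stand in for what a real KGQA system would predict from the question text.

```python
# Toy knowledge graph as (subject, relation, object) triples.
triples = [
    ("Alice", "born_in", "Paris"),
    ("Paris", "capital_of", "France"),
    ("Bob", "born_in", "Berlin"),
    ("Berlin", "capital_of", "Germany"),
]

def follow(entities, relation):
    """Return all objects reachable from `entities` via `relation`."""
    return {o for (s, r, o) in triples if s in entities and r == relation}

def answer(start_entity, relation_path):
    """Traverse the relation path hop by hop, starting from one entity."""
    frontier = {start_entity}
    for rel in relation_path:
        frontier = follow(frontier, rel)
    return frontier

# "Which country is the city Alice was born in the capital of?"
# -> two hops: born_in, then capital_of.
print(answer("Alice", ["born_in", "capital_of"]))   # {'France'}
```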
Distributional compositional (DisCo) models are functors that compute the meaning of a sentence from the meaning of its words. We show that DisCo models in the category of sets and relations correspond precisely to relational databases. As a conseque
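A brief sketch, under made-up vocabulary, of the sets-and-relations reading hinted at here: the meaning of a word is a relation (a set of tuples, i.e. exactly a database table), and the meaning of a simple subject-verb-object sentence is obtained by composing those relations into a truth value.

```python
# Noun meanings are sets of entities; a transitive verb denotes a binary
# relation -- literally a two-column table.
nouns = {"alice": {"alice"}, "bob": {"bob"}, "chess": {"chess"}}
plays = {("alice", "chess"), ("bob", "chess")}

def sentence_truth(subject, verb_relation, obj):
    """Compose subject set, verb relation, and object set into a truth value."""
    return any((s, o) in verb_relation
               for s in nouns[subject] for o in nouns[obj])

print(sentence_truth("alice", plays, "chess"))  # True
print(sentence_truth("bob", plays, "bob"))      # False
```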