Despite great advances in machine reading comprehension (RC), existing RC models remain vulnerable to different types of adversarial examples. Neural models over-confidently predict wrong answers on semantically different adversarial examples, while over-sensitively predicting wrong answers on semantically equivalent adversarial examples. Existing methods that improve the robustness of such neural models mitigate only one of these two issues while ignoring the other. In this paper, we simultaneously address both the over-confidence and the over-sensitivity issues in current RC models with the help of external linguistic knowledge. We first incorporate external knowledge to impose different linguistic constraints (an entity constraint, a lexical constraint, and a predicate constraint), and then regularize RC models through posterior regularization. The linguistic constraints induce more reasonable predictions for both semantically different and semantically equivalent adversarial examples, and posterior regularization provides an effective mechanism for incorporating these constraints. Our method can be applied to any existing neural RC model, including state-of-the-art BERT models. Extensive experiments show that our method remarkably improves the robustness of base RC models and is better able to cope with the two issues simultaneously.
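To make the constraint-based regularization concrete, below is a minimal Python sketch of a posterior-regularization projection over an RC model's answer candidates, assuming a single soft constraint per question. All names (`regularized_posterior`, `span_logits`, `violations`) are illustrative, not from the paper, and the toy entity constraint only hints at the richer constraints the abstract describes.

```python
# Minimal sketch of posterior regularization over an RC model's answer
# candidates. All names are hypothetical; the paper's actual constraints
# and implementation are richer than this toy.
import numpy as np

def softmax(logits):
    z = logits - logits.max()
    e = np.exp(z)
    return e / e.sum()

def regularized_posterior(span_logits, violations, lam=2.0):
    """Project the model posterior toward the constraint set.

    With one expectation constraint, the projection has the closed form
    q(a) ~ p(a) * exp(-lam * violation(a)), shifting probability mass
    away from candidates that violate the linguistic constraint.
    """
    p = softmax(np.asarray(span_logits, dtype=float))
    q = p * np.exp(-lam * np.asarray(violations, dtype=float))
    return q / q.sum()

# Toy entity constraint: a "who" question should be answered with a
# PERSON entity, so non-person candidates incur a violation of 1.
candidates  = ["Marie Curie", "1903", "Paris"]
span_logits = [1.2, 1.5, 0.3]   # base model is over-confident in "1903"
violations  = [0.0, 1.0, 1.0]

q = regularized_posterior(span_logits, violations)
print(dict(zip(candidates, q.round(3))))  # most mass moves to "Marie Curie"
```

Run as written, the base posterior puts about 49% of its mass on the wrong answer "1903"; after the projection, roughly 81% of the mass lands on "Marie Curie", illustrating how a constraint can override an over-confident base prediction without retraining the model.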
Reading comprehension is an important ability of human intelligence. Literacy and numeracy are the two most essential foundations for people to succeed in study, at work, and in life. Reading comprehension ability is a core component of literacy. In most o
Extractive Reading Comprehension (ERC) has made tremendous advances, enabled by the availability of large-scale, high-quality ERC training data. Despite such rapid progress and widespread application, the datasets in languages other than high-resource
This study tackles generative reading comprehension (RC), which consists of answering questions based on textual evidence and natural language generation (NLG). We propose a multi-style abstractive summarization model for question answering, called Masque.
Current reading comprehension models generalise well to in-distribution test sets, yet perform poorly on adversarially selected inputs. Most prior work on adversarial inputs studies oversensitivity: semantically invariant text perturbations that cause a model's prediction to change.
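As a concrete illustration of oversensitivity, the toy "reader" below flips its answer under a meaning-preserving rewrite of the question. The word-overlap heuristic is a hypothetical stand-in, not any model from the paper; it merely shows the failure mode the abstract names.

```python
# Toy demonstration of oversensitivity: a semantically invariant rewrite
# of the question changes this simple reader's answer. The overlap
# heuristic is a stand-in; neural readers fail analogously.
import re

def tokens(text):
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def overlap_reader(question, sentences):
    """Pick the sentence sharing the most word types with the question."""
    q = tokens(question)
    return max(sentences, key=lambda s: len(q & tokens(s)))

passage = [
    "Marie Curie discovered radium in 1898.",
    "The discovery of polonium was announced the same year.",
]

print(overlap_reader("Who discovered radium?", passage))
# -> "Marie Curie discovered radium in 1898."  (correct)

print(overlap_reader("The discovery of radium was made by whom?", passage))
# -> "The discovery of polonium was announced the same year."  (wrong:
#    the paraphrase shifted lexical overlap toward the distractor)
```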
Reading comprehension (RC), in contrast to information retrieval, requires integrating information and reasoning about events, entities, and their relations across a full document. Question answering is conventionally used to assess RC ability, in both artificial agents and children learning to read.