On Robustness of Neural Semantic Parsers

Added by Zhuang Li
Publication date: 2021
Language: English





Semantic parsing maps natural language (NL) utterances into logical forms (LFs), and underpins many advanced NLP problems. Semantic parsers gain performance boosts from deep neural networks, but also inherit their vulnerability to adversarial examples. In this paper, we provide an empirical study of the robustness of semantic parsers in the presence of adversarial attacks. Formally, adversaries of semantic parsing are defined as perturbed utterance-LF pairs whose utterances have exactly the same meaning as the original ones. A scalable methodology is proposed to construct robustness test sets based on existing benchmark corpora. Our results answer five research questions concerning the performance of state-of-the-art parsers on the robustness test sets and the effect of data augmentation.
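A minimal sketch of how such a robustness test set could be assembled from a benchmark corpus, keeping each logical form fixed while the utterance is perturbed. The abstract does not give the construction details, so `perturb_utterance` and `is_meaning_preserving` are hypothetical stand-ins (e.g. a paraphraser and a semantic-equivalence filter):

```python
# Sketch only: the perturbation generator and the meaning-preservation filter
# are hypothetical placeholders, not the method from the paper.
from typing import Callable, Iterable, List, Tuple

def build_robustness_test_set(
    benchmark: Iterable[Tuple[str, str]],               # (utterance, logical form) pairs
    perturb_utterance: Callable[[str], List[str]],      # hypothetical: candidate rewrites of an utterance
    is_meaning_preserving: Callable[[str, str], bool],  # hypothetical: semantic-equivalence check
) -> List[Tuple[str, str]]:
    """Pair every accepted perturbed utterance with the ORIGINAL logical form,
    since adversaries are defined as rewrites with exactly the same meaning."""
    test_set = []
    for utterance, lf in benchmark:
        for candidate in perturb_utterance(utterance):
            if is_meaning_preserving(utterance, candidate):
                test_set.append((candidate, lf))
    return test_set
```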




Related research

Ziyu Yao, Yiqi Tang, Wen-tau Yih (2020)
Despite their widely successful applications, bootstrapping and fine-tuning semantic parsers remains a tedious process, with challenges such as costly data annotation and privacy risks. In this paper, we suggest an alternative, human-in-the-loop methodology for learning semantic parsers directly from users. A semantic parser should be introspective of its uncertainties and prompt for user demonstrations when uncertain. In doing so, it also gets to imitate user behavior and continues to improve itself autonomously, with the hope that it may eventually become as good as the user at interpreting their questions. To combat the sparsity of demonstrations, we propose a novel annotation-efficient imitation learning algorithm, which iteratively collects new datasets by mixing demonstrated states and confident predictions and re-trains the semantic parser in a Dataset Aggregation fashion (Ross et al., 2011). We provide a theoretical analysis of its cost bound and also empirically demonstrate its promising performance on the text-to-SQL problem. Code will be available at https://github.com/sunlab-osu/MISP.
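A minimal sketch of the uncertainty-triggered, Dataset Aggregation-style loop described above. The parser interface (`predict_with_confidence`, `retrain`), the `ask_user` callback, and the confidence threshold are assumptions made for illustration; they are not the MISP API.

```python
# Sketch only: the parser interface and the user-demonstration callback are
# illustrative assumptions, not the actual MISP implementation.
def train_with_dataset_aggregation(parser, utterance_batches, ask_user, threshold=0.8):
    """Collect examples by mixing confident predictions with user demonstrations,
    then re-train on the aggregated dataset (Dataset Aggregation / DAgger style)."""
    aggregate = []
    for batch in utterance_batches:
        for utterance in batch:
            lf, confidence = parser.predict_with_confidence(utterance)
            if confidence < threshold:
                lf = ask_user(utterance)        # prompt for a demonstration when uncertain
            aggregate.append((utterance, lf))   # confident predictions are kept as-is
        parser.retrain(aggregate)               # re-train on everything collected so far
    return parser
```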
This paper investigates continual learning for semantic parsing. In this setting, a neural semantic parser learns tasks sequentially without accessing full training data from previous tasks. Directly applying SOTA continual learning algorithms to this problem fails to achieve performance comparable to re-training the model on all seen tasks, because these algorithms do not consider the special properties of the structured outputs yielded by semantic parsers. Therefore, we propose TotalRecall, a continual learning method designed for neural semantic parsers from two aspects: i) a sampling method for memory replay that diversifies logical form templates and balances the distribution of parse actions in the memory; ii) a two-stage training method that significantly improves the generalization capability of the parsers across tasks. We conduct extensive experiments to study the research problems involved in continual semantic parsing and demonstrate that a neural semantic parser trained with TotalRecall achieves superior performance to one trained directly with SOTA continual learning algorithms, and achieves a 3-6 times speedup compared to re-training from scratch. Code and datasets are available at: https://github.com/zhuang-li/cl_nsp.
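A minimal sketch of the template-diversifying replay sampling idea, assuming a hypothetical `lf_template` function that abstracts a logical form (for example by replacing entities and values with placeholders). It is not the TotalRecall implementation, and the balancing of parse-action distributions is omitted.

```python
# Sketch only: `lf_template` is a hypothetical abstraction over logical forms.
import random
from collections import defaultdict

def sample_replay_memory(examples, lf_template, memory_size):
    """Fill the replay memory round-robin across logical-form templates so that
    rare templates are not crowded out by frequent ones."""
    by_template = defaultdict(list)
    for utterance, lf in examples:
        by_template[lf_template(lf)].append((utterance, lf))
    for bucket in by_template.values():
        random.shuffle(bucket)
    memory, buckets = [], list(by_template.values())
    while len(memory) < memory_size and any(buckets):
        for bucket in buckets:                       # take one example per template per pass
            if bucket and len(memory) < memory_size:
                memory.append(bucket.pop())
    return memory
```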
We explore the use of large pretrained language models as few-shot semantic parsers. The goal in semantic parsing is to generate a structured meaning representation given a natural language input. However, language models are trained to generate natural language. To bridge the gap, we use language models to paraphrase inputs into a controlled sublanguage resembling English that can be automatically mapped to a target meaning representation. With a small amount of data and very little code to convert into English-like representations, we provide a blueprint for rapidly bootstrapping semantic parsers and demonstrate good performance on multiple tasks.
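A minimal sketch of the paraphrase-then-map idea, assuming a hypothetical `complete` wrapper around a pretrained language model and a `canonical_to_lf` mapping from the controlled sublanguage to the target meaning representation. The prompt and names are illustrative, not taken from the paper.

```python
# Sketch only: `complete` and `canonical_to_lf` are hypothetical components.
FEW_SHOT_PROMPT = (
    "Rewrite the request in canonical form.\n"
    "Request: wake me at 7 tomorrow\n"
    "Canonical: create an alarm for 7 am tomorrow\n"
    "Request: {utterance}\n"
    "Canonical:"
)

def parse_via_paraphrase(utterance, complete, canonical_to_lf):
    """Use a language model to rewrite the input into the controlled sublanguage,
    then map that canonical form deterministically to a meaning representation."""
    canonical = complete(FEW_SHOT_PROMPT.format(utterance=utterance)).strip()
    return canonical_to_lf(canonical)
```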
Junjie Cao, Zi Lin, Weiwei Sun (2019)
We present a phenomenon-oriented comparative analysis of the two dominant approaches in task-independent semantic parsing: classic, knowledge-intensive and neural, data-intensive models. To reflect state-of-the-art neural NLP technologies, we introduce a new target structure-centric parser that can produce semantic graphs much more accurately than previous data-driven parsers. We then show that, in spite of comparable performance overall, knowledge- and data-intensive models produce different types of errors, in a way that can be explained by their theoretical properties. This analysis leads to new directions for parser development.
The traditional dialogue state tracking (DST) task tracks the dialogue state given the past history of user and agent utterances. This paper proposes to replace the utterances before the current turn with a formal representation, which is used as the context in a semantic parser mapping the current user utterance to its formal meaning. In addition, we propose TOC (Task-Oriented Context), a formal dialogue state representation. This approach eliminates the need to parse a long history of natural language utterances; however, it adds complexity to the dialogue annotations. We propose Skim, a contextual semantic parser, trained with a sample-efficient training strategy: (1) a novel abstract dialogue state machine to synthesize training sets with TOC annotations; (2) data augmentation with automatic paraphrasing; (3) few-shot training; and (4) self-training. This paper also presents MultiWOZ 2.4, which consists of the full test set and a partial validation set of MultiWOZ 2.1, reannotated with the TOC representation. Skim achieves 78% turn-by-turn exact match accuracy and 85% slot accuracy, while our annotation effort amounts to only 2% of the training data used in MultiWOZ 2.1. The MultiWOZ 2.4 dataset will be released upon publication.
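A minimal sketch of parsing against a compact formal dialogue state instead of the raw utterance history. The state fields and the `parser.parse` interface are assumptions made for illustration; they are not the Skim/TOC specification.

```python
# Sketch only: the state fields and parser interface are illustrative assumptions.
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class DialogueState:            # stand-in for a TOC-style formal representation
    domain: str
    constraints: Dict[str, str] = field(default_factory=dict)

def parse_turn(parser, state: DialogueState, user_utterance: str):
    """Condition the parser on the compact formal state plus only the current
    user utterance, rather than on the full natural-language history."""
    context = state.domain + " " + " ".join(f"{k}={v}" for k, v in state.constraints.items())
    return parser.parse(context=context, utterance=user_utterance)
```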
