In a dialogue system pipeline, a natural language generation (NLG) unit converts the dialogue direction and content into a corresponding natural language realization. A recent trend for dialogue systems is to first pre-train on large datasets and then fine-tune in a supervised manner on datasets annotated with application-specific features. Although novel behaviours can be learned from custom annotation, the annotation effort severely limits the size of the training set, and the application-specific nature of the annotations limits reuse. In light of the recent success of data-driven approaches, we propose the novel future bridging NLG (FBNLG) concept for dialogue systems and simulators. The critical step is for an FBNLG to accept a future user or system utterance and bridge the present dialogue context towards it. Future bridging enables self-supervised training on annotation-free datasets and decouples the training of the NLG from the rest of the system. An FBNLG pre-trained on massive datasets is expected to apply to both classical and new dialogue scenarios with minimal adaptation effort. We evaluate a prototype FBNLG to show that future bridging can be a viable approach to a universal few-shot NLG for task-oriented and chit-chat dialogues.
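To make the future bridging idea more concrete, the sketch below shows one way an FBNLG interface could be wired on top of a generic pre-trained seq2seq model. The `t5-small` checkpoint, the `context:`/`future:` prompt layout, and the choice of the immediately following turn as the bridging target are illustrative assumptions, not the paper's actual prototype; an off-the-shelf model would still need the self-supervised training described above before it bridges usefully.

```python
# Minimal sketch of a future-bridging NLG interface (assumptions: the
# "t5-small" checkpoint and the "context:" / "future:" prompt layout are
# illustrative choices, not the paper's actual prototype).
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

def bridge(context_turns, future_utterance, max_new_tokens=64):
    """Generate a turn that bridges the present context towards the given
    future user or system utterance."""
    prompt = ("context: " + " | ".join(context_turns)
              + " future: " + future_utterance)
    inputs = tokenizer(prompt, return_tensors="pt", truncation=True)
    output_ids = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)

def make_self_supervised_example(dialogue_turns, i):
    """Build a training pair from an unannotated dialogue: condition on the
    turns before position i plus a later turn, and learn to produce turn i.
    Using the immediately following turn as the future is an assumption."""
    src = ("context: " + " | ".join(dialogue_turns[:i])
           + " future: " + dialogue_turns[i + 1])
    return src, dialogue_turns[i]

# Toy usage: steer the generated turn towards a desired future user turn.
print(bridge(["User: I need a cheap hotel in the city centre."],
             "User: Great, please book it for two nights."))
```

Because the bridging signal is simply a later turn drawn from the same dialogue, training pairs can be mined from any unannotated corpus, which is what decouples the NLG training from the rest of the system.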
One of the difficulties in training dialogue systems is the lack of training data. We explore the possibility of creating dialogue data through the interaction between a dialogue system and a user simulator. Our goal is to develop a modelling framework…
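As a rough illustration of the data-creation loop mentioned in the abstract above, the following sketch rolls out a dialogue between a system and a simulator and records the exchanged turns. The `UserSimulator` and `DialogueSystem` classes are hypothetical stand-ins, not components from that work.

```python
import random

# Hypothetical stand-ins: neither class reflects an actual simulator or
# system; they only illustrate the turn-taking loop that yields dialogue data.
class UserSimulator:
    def __init__(self, goal):
        self.goal = goal
        self.done = False

    def respond(self, system_utterance):
        # A real simulator would follow an agenda or a learned model;
        # here the conversation simply ends at random.
        if random.random() < 0.3:
            self.done = True
            return "Thanks, that is all I needed."
        return f"I am still looking for a {self.goal}."

class DialogueSystem:
    def respond(self, user_utterance):
        return "Sure, let me check that for you."

def generate_dialogue(goal="cheap hotel", max_turns=10):
    """Roll out one simulated dialogue and return it as a list of turns."""
    simulator, system = UserSimulator(goal), DialogueSystem()
    turns = [f"User: I am looking for a {goal}."]
    for _ in range(max_turns):
        system_turn = system.respond(turns[-1])
        turns.append(f"System: {system_turn}")
        user_turn = simulator.respond(system_turn)
        turns.append(f"User: {user_turn}")
        if simulator.done:
            break
    return turns

print("\n".join(generate_dialogue()))
```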
Dialogue policy plays an important role in task-oriented spoken dialogue systems. It determines how to respond to users. The recently proposed deep reinforcement learning (DRL) approaches have been used for policy optimization. However, these deep models…
End-to-end neural networks have achieved promising performance in natural language generation (NLG). However, they are treated as black boxes and lack interpretability. To address this problem, we propose a novel framework, heterogeneous rendering machines…
Machine learning approaches for building task-oriented dialogue systems require large conversational datasets with labels to train on. We are interested in building task-oriented dialogue systems from human-human conversations, which may be available…
Generating fluent natural language responses from structured semantic representations is a critical step in task-oriented conversational systems. Avenues like the E2E NLG Challenge have encouraged the development of neural approaches, particularly sequence-to-sequence models…