The task of dialogue rewriting aims to reconstruct the latest dialogue utterance by copying the missing content from the dialogue context. To date, existing models for this task suffer from a robustness issue: performance drops dramatically when testing on a different dataset. We address this robustness issue by proposing a novel sequence-tagging-based model, so that the search space is significantly reduced while the core of the task is still well covered. As with most tagging models for text generation, the model's outputs may lack fluency. To alleviate this issue, we inject a loss signal from BLEU or GPT-2 under a REINFORCE framework. Experiments show substantial improvements of our model over the current state-of-the-art systems when transferring to another dataset.
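To illustrate the fluency signal mentioned above, below is a minimal sketch of a REINFORCE-style loss with a sentence-level BLEU reward. The function name, tensor shapes, and the use of sacrebleu here are assumptions for illustration only, not the authors' implementation; a GPT-2 perplexity-based reward could be plugged in the same way.

```python
# Minimal sketch of a REINFORCE-style fluency loss with a BLEU reward.
# All names and shapes here are hypothetical illustrations, not the paper's code.
import torch
import sacrebleu


def reinforce_bleu_loss(token_log_probs: torch.Tensor,
                        sampled_text: str,
                        reference_text: str) -> torch.Tensor:
    """REINFORCE loss using sentence-level BLEU as the reward.

    token_log_probs: 1-D tensor of log-probabilities of the tokens
                     in the sampled rewrite (one entry per generated token).
    sampled_text / reference_text: detokenized candidate and reference strings.
    """
    # Sentence BLEU is in [0, 100]; rescale to [0, 1] to use as a scalar reward.
    reward = sacrebleu.sentence_bleu(sampled_text, [reference_text]).score / 100.0
    # REINFORCE: maximizing the expected reward corresponds to minimizing
    # -reward * sum(log p(token)) for the sampled sequence.
    return -reward * token_log_probs.sum()
```

In practice a baseline (e.g., the reward of a greedily decoded rewrite, as in self-critical training) is often subtracted from the reward to reduce the variance of this gradient estimate.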