Exemplar-Guided Paraphrase Generation (EGPG) aims to generate a target sentence which conforms to the style of the given exemplar while encapsulating the content information of the source sentence. In this paper, we propose a new method with the goal of learning a better representation of the style and the content. This method is mainly motivated by the recent success of contrastive learning, which has demonstrated its power in unsupervised feature extraction tasks. The idea is to design two contrastive losses with respect to the content and the style by considering two problem characteristics during training. One characteristic is that the target sentence shares the same content with the source sentence, and the second characteristic is that the target sentence shares the same style with the exemplar. These two contrastive losses are incorporated into the general encoder-decoder paradigm. Experiments on two datasets, namely QQP-Pos and ParaNMT, demonstrate the effectiveness of our proposed contrastive losses.
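To make the idea of the two contrastive losses concrete, the sketch below shows one plausible InfoNCE-style formulation: the target sentence's content embedding is pulled toward its source sentence (with other in-batch sources as negatives), and its style embedding is pulled toward its exemplar. This is a minimal illustration, not the paper's exact implementation; the encoders, the temperature tau, and the loss weights are assumptions.

```python
# Minimal sketch of the two contrastive losses (content and style).
# Names and hyperparameters here are illustrative, not the paper's API.
import torch
import torch.nn.functional as F


def info_nce(anchor: torch.Tensor, positive: torch.Tensor, tau: float = 0.1) -> torch.Tensor:
    """InfoNCE loss: each anchor's positive is the same-index row of `positive`;
    all other rows in the batch act as negatives."""
    anchor = F.normalize(anchor, dim=-1)
    positive = F.normalize(positive, dim=-1)
    logits = anchor @ positive.t() / tau                      # (batch, batch) similarities
    labels = torch.arange(anchor.size(0), device=anchor.device)
    return F.cross_entropy(logits, labels)


# Suppose content/style encoders map sentences to fixed-size embeddings
# (e.g. pooled encoder states). For a batch of (source, exemplar, target) triples:
batch, dim = 8, 256
src_content = torch.randn(batch, dim)   # content embedding of the source sentence
tgt_content = torch.randn(batch, dim)   # content embedding of the target sentence
exm_style   = torch.randn(batch, dim)   # style embedding of the exemplar
tgt_style   = torch.randn(batch, dim)   # style embedding of the target sentence

# Content loss: the target shares content with its source (positive pair).
l_content = info_nce(tgt_content, src_content)
# Style loss: the target shares style with its exemplar (positive pair).
l_style = info_nce(tgt_style, exm_style)

# Combined with the usual sequence-to-sequence reconstruction loss;
# the weights 0.5 are placeholders for tunable hyperparameters.
l_recon = torch.tensor(0.0)             # stand-in for the decoder NLL term
loss = l_recon + 0.5 * l_content + 0.5 * l_style
```

In this reading, the contrastive terms act as auxiliary objectives alongside the standard encoder-decoder training loss, encouraging the learned spaces to separate content (tied to the source) from style (tied to the exemplar).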