Deep-learning models for language generation tasks tend to produce repetitive output. Various methods have been proposed to encourage lexical diversity during decoding, but this often comes at a cost to the perceived fluency and adequacy of the output. In this work, we propose to ameliorate this cost by using an Imitation Learning approach to explore the level of diversity that a language generation model can reliably produce. Specifically, we augment the decoding process with a meta-classifier trained to distinguish which words at any given timestep will lead to high-quality output. We focus our experiments on concept-to-text generation where models are sensitive to the inclusion of irrelevant words due to the strict relation between input and output. Our analysis shows that previous methods for diversity underperform in this setting, while human evaluation suggests that our proposed method achieves a high level of diversity with minimal effect on the output's fluency and adequacy.
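The abstract does not detail the decoding interface, so the following is a minimal illustrative sketch, not the authors' implementation. It assumes a language model `lm` that returns per-token logits and a trained meta-classifier `meta_classifier` that scores, for each candidate token at the current timestep, the probability that choosing it still leads to high-quality output; all names, shapes, and the threshold are hypothetical. Tokens the classifier rejects are masked out before sampling, so diversity is encouraged only among tokens judged safe.

```python
# Sketch of classifier-augmented decoding (assumed interfaces, not the paper's code).
import torch


def classifier_augmented_decode(lm, meta_classifier, input_ids,
                                max_len=50, threshold=0.5, temperature=1.0):
    """Sample a sequence, restricting each step to classifier-approved tokens."""
    generated = input_ids
    for _ in range(max_len):
        logits = lm(generated)[:, -1, :]              # (batch, vocab) next-token logits
        quality = meta_classifier(generated, logits)  # (batch, vocab) scores in [0, 1]
        # Mask tokens the meta-classifier predicts will degrade output quality.
        masked = logits.masked_fill(quality < threshold, float("-inf"))
        # Fallback: if every token is rejected, sample from the raw logits instead.
        all_masked = torch.isinf(masked).all(dim=-1, keepdim=True)
        masked = torch.where(all_masked, logits, masked)
        probs = torch.softmax(masked / temperature, dim=-1)
        next_token = torch.multinomial(probs, num_samples=1)
        generated = torch.cat([generated, next_token], dim=-1)
    return generated
```

Sampling (rather than greedy argmax) over the surviving tokens is what yields lexical diversity here; the classifier's mask is what keeps that diversity from admitting irrelevant words, which matters in concept-to-text generation where input and output are strictly related.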