Narrative generation is an open-ended NLP task in which a model generates a story given a prompt. The task is similar to neural response generation for chatbots; however, innovations in response generation are often not applied to narrative generation, despite the similarity between these tasks. We aim to bridge this gap by applying and evaluating advances in decoding methods for neural response generation to neural narrative generation. In particular, we employ GPT-2 and perform ablations across nucleus sampling thresholds and diverse decoding hyperparameters---specifically, maximum mutual information---analyzing results over multiple criteria with automatic and human evaluation. We find that (1) nucleus sampling is generally best with thresholds between 0.7 and 0.9; (2) a maximum mutual information objective can improve the quality of generated stories; and (3) established automatic metrics do not correlate well with human judgments of narrative quality on any qualitative metric.
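Nucleus (top-p) sampling, the decoding method ablated above, restricts sampling to the smallest set of tokens whose cumulative probability mass exceeds a threshold p. A minimal illustrative sketch over a toy next-token distribution (the function name `nucleus_sample` is ours, not from the paper):

```python
import numpy as np

def nucleus_sample(probs, p=0.9, rng=None):
    """Sample a token index from the smallest prefix of tokens,
    ordered by descending probability, whose cumulative mass >= p."""
    rng = rng or np.random.default_rng()
    order = np.argsort(probs)[::-1]        # token ids, most probable first
    cum = np.cumsum(probs[order])
    cutoff = np.searchsorted(cum, p) + 1   # smallest prefix reaching mass p
    keep = order[:cutoff]
    kept = probs[keep] / probs[keep].sum() # renormalize over the nucleus
    return int(rng.choice(keep, p=kept))

# With p=0.7, only the two most probable tokens (mass 0.5 + 0.3) survive.
probs = np.array([0.5, 0.3, 0.15, 0.05])
token = nucleus_sample(probs, p=0.7, rng=np.random.default_rng(0))
```

Lower thresholds truncate the tail more aggressively (safer but blander text); the paper's finding is that p between 0.7 and 0.9 works best for narrative generation with GPT-2.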