We describe a Plug-and-Play controllable language generation framework, Plug-and-Blend, that allows a human user to input multiple control codes (topics). In the context of automated story generation, this allows a human user loose or fine-grained control over the topics that will appear in the generated story, and can even allow for overlapping, blended topics. We show that our framework, working with different generation models, steers generation toward given continuously weighted control codes while keeping the generated sentences fluent, demonstrating strong blending capability.
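The core idea of steering generation with continuously weighted control codes can be illustrated by a minimal sketch: at each decoding step, per-topic bias vectors are added to the base language model's next-token logits, each scaled by a user-chosen weight, before sampling. This is a toy illustration under assumed names (`blend_logits`, a four-token vocabulary), not the paper's actual implementation.

```python
import math

def softmax(logits):
    """Convert raw logits to a probability distribution."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def blend_logits(base_logits, topic_biases, weights):
    """Combine a base LM's next-token logits with per-topic bias
    vectors, each scaled by a continuous user-chosen weight.
    (Hypothetical helper for illustration only.)"""
    assert len(topic_biases) == len(weights)
    blended = list(base_logits)
    for bias, w in zip(topic_biases, weights):
        for i, b in enumerate(bias):
            blended[i] += w * b
    return softmax(blended)

# Toy vocabulary of 4 tokens; two topic "guides" push probability
# mass toward different tokens, mixed with weights 0.7 and 0.3.
base = [1.0, 1.0, 1.0, 1.0]
topic_a = [2.0, 0.0, 0.0, 0.0]   # favors token 0
topic_b = [0.0, 0.0, 2.0, 0.0]   # favors token 2
probs = blend_logits(base, [topic_a, topic_b], [0.7, 0.3])
```

Because the weights are continuous, a user can dial topics up or down smoothly (e.g. 0.7 of one topic and 0.3 of another), which is what enables overlapping, blended topics rather than a hard switch between them.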