We propose Future Discriminators for Generation (FUDGE), a flexible and modular method for controlled text generation. Given a pre-existing model G for generating text from a distribution of interest, FUDGE enables conditioning on a desired attribute a (for example, formality) while requiring access only to G's output logits. FUDGE learns an attribute predictor operating on a partial sequence, and uses this predictor's outputs to adjust G's original probabilities. We show that FUDGE models terms corresponding to a Bayesian decomposition of the conditional distribution of G given attribute a. Moreover, FUDGE can easily compose predictors for multiple desired attributes. We evaluate FUDGE on three tasks --- couplet completion in poetry, topic control in language generation, and formality change in machine translation --- and observe gains in all three tasks.
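Concretely, the Bayesian decomposition means P(x_i | x_{1:i-1}, a) ∝ P(x_i | x_{1:i-1}) · P(a | x_{1:i}): the base model's next-token probability is reweighted by the attribute predictor's estimate that the attribute will hold for the extended prefix. Below is a minimal PyTorch sketch of one such decoding step, not the authors' released implementation; the names `fudge_decode_step` and `attribute_predictor`, and the top-k rescoring cutoff, are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def fudge_decode_step(g_logits, partial_seq, attribute_predictor, top_k=200):
    """One FUDGE-style decoding step (illustrative sketch).

    g_logits: (vocab_size,) next-token logits from the base model G.
    partial_seq: list[int], tokens generated so far.
    attribute_predictor: callable mapping a candidate token prefix to
        log P(attribute | prefix); assumed to be a learned binary classifier
        trained on partial sequences.
    """
    # For efficiency, rescore only G's top-k candidate next tokens.
    top_logits, top_ids = g_logits.topk(top_k)
    # log P(a | x_{1:i}) for each candidate continuation x_i.
    attr_log_probs = torch.stack([
        attribute_predictor(partial_seq + [tok.item()]) for tok in top_ids
    ])
    # Bayesian combination: log P(x_i | x_{<i}) + log P(a | x_{1:i}).
    combined = F.log_softmax(top_logits, dim=-1) + attr_log_probs
    # Renormalize and sample the next token from the adjusted distribution.
    next_idx = torch.multinomial(combined.softmax(dim=-1), 1)
    return top_ids[next_idx].item()
```

Composing multiple attributes would, under the same factorization, amount to summing the log-probabilities from several predictors before renormalizing.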