We propose a shared task on training instance selection for few-shot neural text generation. Large-scale pretrained language models have led to dramatic improvements in few-shot text generation. Nonetheless, almost all previous work simply applies random sampling to select the few-shot training instances; little to no attention has been paid to selection strategies and how they affect model performance. Studying selection strategies can help us (1) make the most of our annotation budget in downstream tasks and (2) better benchmark few-shot text generation models. We welcome submissions that present their selection strategies and their effects on generation quality.
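Submissions are free to define their own strategies. As a purely illustrative sketch (not prescribed by the task), the random-sampling baseline and one hypothetical alternative, diversity-based selection over precomputed sentence embeddings, might look like the following. The function names, the scikit-learn dependency, and the assumption that embeddings are available as a NumPy array are all illustrative choices, not part of the task description:

```python
import numpy as np
from sklearn.cluster import KMeans


def random_selection(n_instances: int, k: int, seed: int = 0) -> np.ndarray:
    """Baseline used in most prior work: uniformly sample k
    training-instance indices from the full training pool."""
    rng = np.random.default_rng(seed)
    return rng.choice(n_instances, size=k, replace=False)


def diversity_selection(embeddings: np.ndarray, k: int, seed: int = 0) -> np.ndarray:
    """One possible alternative strategy (an assumption for
    illustration): cluster instance embeddings into k groups and
    pick the instance closest to each centroid, so the few-shot
    set covers distinct regions of the training data."""
    km = KMeans(n_clusters=k, random_state=seed, n_init=10).fit(embeddings)
    chosen = []
    for c in range(k):
        members = np.where(km.labels_ == c)[0]
        dists = np.linalg.norm(embeddings[members] - km.cluster_centers_[c], axis=1)
        chosen.append(members[np.argmin(dists)])
    return np.array(chosen)


if __name__ == "__main__":
    # Stand-in for sentence embeddings of 200 training instances.
    X = np.random.RandomState(42).randn(200, 32)
    print(random_selection(len(X), k=8))   # baseline few-shot set
    print(diversity_selection(X, k=8))     # diversity-based few-shot set
```

A submission would then fine-tune the same pretrained generation model on each selected set and compare generation quality, holding the annotation budget k fixed.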