We propose a structured extension to bidirectional-context conditional language generation, or "infilling," inspired by Frame Semantic theory. Guidance is provided through one of two approaches: (1) model fine-tuning, conditioning directly on observed symbolic frames, and (2) a novel extension to disjunctive lexically constrained decoding that leverages frame semantic lexical units. Automatic and human evaluations confirm that frame-guided generation allows for explicit manipulation of intended infill semantics, with minimal loss in distinguishability from human-generated text. Our methods flexibly apply to a variety of use scenarios, and we provide an interactive web demo.
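The abstract does not spell out how disjunctive lexically constrained decoding is wired up, so the following is only a minimal sketch of the general idea, not the authors' implementation: it uses Hugging Face transformers' `force_words_ids` disjunctive constraints so that the infill must contain at least one lexical unit evoking a chosen frame. The model name, the sentinel-style infilling prompt, and the lexical units (here, units one might associate with a frame like Commerce_buy) are all illustrative assumptions.

```python
# Sketch: frame-guided infilling via disjunctive lexically constrained decoding.
# Assumes a seq2seq infilling model; "t5-base" and the prompt are placeholders.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("t5-base")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-base")

# Hypothetical lexical units for the target frame (e.g., Commerce_buy).
lexical_units = ["buy", "purchase", "acquire"]

# One disjunctive constraint: the output must contain at least one of these.
force_words_ids = [
    [tokenizer(lu, add_special_tokens=False).input_ids for lu in lexical_units]
]

# Bidirectional context with a masked span to be infilled.
inputs = tokenizer(
    "She went to the market and <extra_id_0> some fruit for dinner.",
    return_tensors="pt",
)

# Constrained decoding requires beam search (num_beams > 1).
outputs = model.generate(
    **inputs,
    force_words_ids=force_words_ids,
    num_beams=5,
    max_new_tokens=20,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Swapping in a different frame's lexical units changes the intended semantics of the infill without retraining, which is the appeal of the decoding-time approach over fine-tuning on observed frames.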