Recent work has adopted models of pragmatic reasoning for the generation of informative language in, e.g., image captioning. We propose a simple but highly effective relaxation of fully rational decoding, based on an existing incremental and character-level approach to pragmatically informative neural image captioning. We implement a mixed, 'fast' and 'slow', speaker that applies pragmatic reasoning occasionally (only word-initially), while unrolling the language model. In our evaluation, we find that increased informativeness through pragmatic decoding generally lowers quality and, somewhat counter-intuitively, increases repetitiveness in captions. Our mixed speaker, however, achieves a good balance between quality and informativeness.
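The mixed-speaker idea can be sketched as follows. This is a minimal toy illustration, not the paper's implementation: the character LMs, the alphabet, the rationality parameter `alpha`, and the greedy decoding loop are all simplifying assumptions. It shows the core mechanism, namely RSA-style pragmatic rescoring (base speaker reweighted by a literal listener) applied only at word-initial positions, with plain language-model decoding everywhere else.

```python
# Toy sketch of a mixed "fast and slow" pragmatic speaker.
# Assumption: context-independent character distributions stand in for
# real character-level LMs conditioned on a target and a distractor image.

ALPHABET = list("abc #")  # ' ' separates words, '#' ends the caption

# Hypothetical P(next char | image) for two images.
CHAR_LM = {
    "target":     {"a": 0.5, "b": 0.3, "c": 0.1, " ": 0.05, "#": 0.05},
    "distractor": {"a": 0.5, "b": 0.1, "c": 0.3, " ": 0.05, "#": 0.05},
}

def literal_listener(char):
    """L0: probability that `char` was produced to describe the target."""
    scores = {img: lm[char] for img, lm in CHAR_LM.items()}
    return scores["target"] / sum(scores.values())

def next_char_dist(prefix, alpha=3.0):
    """Mixed speaker: pragmatic ('slow') rescoring only word-initially."""
    s0 = CHAR_LM["target"]
    word_initial = prefix == "" or prefix.endswith(" ")
    if not word_initial:
        return s0  # 'fast': plain language-model distribution
    # 'slow': S1(c) proportional to S0(c) * L0(target | c)^alpha
    scores = {c: p * literal_listener(c) ** alpha for c, p in s0.items()}
    z = sum(scores.values())
    return {c: v / z for c, v in scores.items()}

def decode(max_len=12):
    """Greedy character-level decoding with the mixed speaker."""
    prefix = ""
    while len(prefix) < max_len:
        dist = next_char_dist(prefix)
        char = max(dist, key=dist.get)  # greedy choice
        if char == "#":
            break
        prefix += char
    return prefix
```

In this toy setup the base speaker's most likely first character is "a", but "a" is equally probable under both images and so does not discriminate; the pragmatic rescoring at the word-initial position instead selects the informative "b", after which decoding proceeds with the fast, unreweighted LM.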