Many NLG tasks such as summarization, dialogue response, or open domain question answering, focus primarily on a source text in order to generate a target response. This standard approach falls short, however, when a user's intent or context of work is not easily recoverable based solely on that source text -- a scenario that we argue is more of the rule than the exception. In this work, we argue that NLG systems in general should place a much higher level of emphasis on making use of additional context, and suggest that relevance (as used in Information Retrieval) be thought of as a crucial tool for designing user-oriented text-generating tasks. We further discuss possible harms and hazards around such personalization, and argue that value-sensitive design represents a crucial path forward through these challenges.