This paper describes an attempt to reproduce an earlier experiment, previously conducted by the author, that compares hedged and non-hedged NLG texts as part of the ReproGen shared challenge. This reproduction effort was only able to partially replicate results from the original study. The analysis from this reproduction effort suggests that whilst it is possible to replicate the procedural aspects of a previous study, replicating the results can prove more challenging, as differences in participant type can have a potential impact.