We ask subjects whether they perceive a set of texts as human-produced; some of these texts are actually human-written, while others are automatically generated. We use this data to fine-tune a GPT-2 model, pushing it to generate more human-like texts, and observe that the fine-tuned model indeed produces texts that are perceived as more human-like than those of the original model. Contextually, we show that our automatic evaluation strategy correlates well with human judgements. We also run a linguistic analysis to unveil the characteristics of human- vs machine-perceived language.
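A minimal sketch of the fine-tuning step described above, assuming the Hugging Face transformers library is used (the abstract does not specify the toolkit); the training texts, hyperparameters, and output directory name are placeholders, not the authors' actual data or settings.

```python
# Minimal sketch: fine-tune GPT-2 on texts that annotators judged to be
# human-produced, so the model drifts toward more "human-perceived" language.
# The texts below are placeholders standing in for the crowdsourced data.
from torch.utils.data import Dataset
from transformers import (GPT2LMHeadModel, GPT2TokenizerFast, Trainer,
                          TrainingArguments, DataCollatorForLanguageModeling)

# Hypothetical data: texts that subjects perceived as human-written.
perceived_human_texts = [
    "Example passage that raters judged to be written by a person.",
    "Another text perceived as human-like by the annotators.",
]

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default

class PerceivedHumanDataset(Dataset):
    """Tokenizes the perceived-human texts for causal language modeling."""
    def __init__(self, texts, tokenizer, max_length=128):
        self.enc = tokenizer(texts, truncation=True, max_length=max_length,
                             padding="max_length", return_tensors="pt")
    def __len__(self):
        return self.enc["input_ids"].size(0)
    def __getitem__(self, idx):
        return {k: v[idx] for k, v in self.enc.items()}

model = GPT2LMHeadModel.from_pretrained("gpt2")
train_dataset = PerceivedHumanDataset(perceived_human_texts, tokenizer)

training_args = TrainingArguments(
    output_dir="gpt2-human-perceived",   # placeholder name
    num_train_epochs=3,                  # illustrative hyperparameters
    per_device_train_batch_size=2,
    logging_steps=10,
)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
    # Causal LM objective: the collator copies input ids into the labels.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()

# Generate with the fine-tuned model to inspect how human-like it sounds.
prompt = tokenizer("The weather today", return_tensors="pt")
out = model.generate(**prompt, max_new_tokens=40, do_sample=True, top_p=0.9)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```

In the paper's setting, the generated samples would then be judged again (by humans and by the automatic evaluation strategy) to check whether they are perceived as more human-like than the base model's output.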