This paper describes SimpleNER, a model developed for the sentence simplification task at GEM-2021. Our system is a monolingual Seq2Seq Transformer architecture that uses control tokens prepended to the data, allowing the model to shape the generated simplifications according to user-desired attributes. Additionally, we show that NER-tagging the training data before use helps stabilize the effect of the control tokens and significantly improves the overall performance of the system. We also employ pretrained embeddings to reduce data sparsity and allow the model to produce more generalizable outputs.
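The preprocessing the abstract describes can be illustrated with a minimal sketch: named entities in each source sentence are replaced with typed placeholders, and control tokens are prepended to the result before it is fed to the Seq2Seq model. The sketch below assumes ACCESS-style control tokens (e.g. NbChars, LevSim) and spaCy as the NER tagger; the exact token set, values, and placeholder format used by SimpleNER are illustrative assumptions, not the system's actual implementation.

```python
import spacy

# Assumption: any off-the-shelf NER tagger works; spaCy is used here for illustration.
nlp = spacy.load("en_core_web_sm")


def ner_tag(sentence: str) -> str:
    """Replace named entities with typed placeholders (e.g. PERSON@1),
    so rare entity strings do not distort the control-token statistics."""
    doc = nlp(sentence)
    counts, replacements = {}, []
    for ent in doc.ents:  # left-to-right pass assigns stable indices per entity type
        counts[ent.label_] = counts.get(ent.label_, 0) + 1
        replacements.append(
            (ent.start_char, ent.end_char, f"{ent.label_}@{counts[ent.label_]}")
        )
    for start, end, placeholder in reversed(replacements):  # right-to-left keeps offsets valid
        sentence = sentence[:start] + placeholder + sentence[end:]
    return sentence


def add_control_tokens(source: str, char_ratio: float = 0.8, lev_sim: float = 0.75) -> str:
    """Prepend hypothetical ACCESS-style control tokens specifying the desired
    compression ratio and Levenshtein similarity of the simplification."""
    return f"<NbChars_{char_ratio}> <LevSim_{lev_sim}> {source}"


complex_sent = "Wolfgang Amadeus Mozart was born in Salzburg in 1756."
model_input = add_control_tokens(ner_tag(complex_sent))
print(model_input)
# e.g. "<NbChars_0.8> <LevSim_0.75> PERSON@1 was born in GPE@1 in DATE@1."
```

At inference time the same placeholders would be restored in the generated output, and varying the control-token values lets the user trade off length and faithfulness of the simplification.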