Parallel sentence simplification (SS) corpora are scarce for neural SS modeling. We propose an unsupervised method to build SS corpora from large-scale bilingual translation corpora, alleviating the need for supervised SS corpora. Our method is motivated by two findings: a neural machine translation model tends to generate more high-frequency tokens, and the source and target languages of a translation corpus differ in text complexity level. By pairing the source sentences of a translation corpus with the translations of their references through a bridge language, we can construct large-scale pseudo-parallel SS data. We then keep the sentence pairs with a higher complexity difference as SS sentence pairs. SS corpora built with this unsupervised approach satisfy the expectations that aligned sentences preserve the same meaning and differ in text complexity level. Experimental results show that SS methods trained on our corpora achieve state-of-the-art results and significantly outperform previous results on the English benchmark WikiLarge.
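The pipeline described above reduces to a pairing step followed by a complexity-based filter. The sketch below is a minimal, hypothetical Python illustration of that filter, not the paper's implementation: it assumes the round-trip translations through the bridge language have already been produced offline by an external MT system, and it substitutes a crude word-length-based readability proxy for whatever complexity measure the method actually uses. Names such as `build_pseudo_ss_pairs` and the `min_diff` threshold are illustrative only.

```python
import re


def complexity(sentence: str) -> float:
    """Rough Flesch-Kincaid-style complexity estimate for one sentence.

    This proxy uses average word length in place of syllable counts; it is
    only meant to illustrate the filtering step, not to reproduce the
    complexity measure used in the paper.
    """
    words = re.findall(r"[A-Za-z]+", sentence)
    if not words:
        return 0.0
    avg_word_len = sum(len(w) for w in words) / len(words)
    return 0.39 * len(words) + 11.8 * avg_word_len - 15.59


def build_pseudo_ss_pairs(sources, back_translations, min_diff=2.0):
    """Pair each source sentence with its back-translation through the
    bridge language, keep only pairs whose complexity difference exceeds
    `min_diff`, and orient them so the more complex side is the input and
    the less complex side the simplification target."""
    pairs = []
    for src, bt in zip(sources, back_translations):
        c_src, c_bt = complexity(src), complexity(bt)
        if abs(c_src - c_bt) < min_diff:
            continue  # drop pairs whose complexity levels are too similar
        complex_sent, simple_sent = (src, bt) if c_src > c_bt else (bt, src)
        pairs.append((complex_sent, simple_sent))
    return pairs


if __name__ == "__main__":
    # Toy example: one source sentence and its (assumed) back-translation.
    sources = ["The committee endeavoured to ascertain the veracity of the allegations."]
    back_translations = ["The committee tried to find out if the claims were true."]
    print(build_pseudo_ss_pairs(sources, back_translations, min_diff=1.0))
```

In this toy run the back-translated side scores lower on the readability proxy, so the pair is kept with the original source as the complex sentence and the back-translation as its pseudo-simplification.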