In this paper, we present a new method for training a writing improvement model adapted to the writer's first language (L1) that goes beyond grammatical error correction (GEC). Without using annotated training data, we rely solely on pre-trained language models fine-tuned on parallel corpora of reference translations aligned with machine translations. We evaluate our model on corpora of academic papers written in English by L1 Portuguese and L1 Spanish scholars and on a reference corpus of expert academic English. We show that our model addresses L1-influenced writing and more complex linguistic phenomena than existing methods, outperforming what a state-of-the-art GEC system can achieve in this regard. Our code and data are open to other researchers.
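To make the training setup concrete, the sketch below shows one plausible way to fine-tune a pre-trained sequence-to-sequence model on aligned pairs in which the machine translation of a sentence serves as the input and the human reference translation serves as the target. The checkpoint name, hyperparameters, and example pair are illustrative assumptions, not the authors' actual configuration.

```python
# Minimal sketch: fine-tune a pre-trained seq2seq LM on (machine translation,
# reference translation) pairs so it learns to rewrite MT-like English into
# fluent English. Checkpoint, data, and hyperparameters are placeholders.
from datasets import Dataset
from transformers import (AutoTokenizer, AutoModelForSeq2SeqLM,
                          DataCollatorForSeq2Seq,
                          Seq2SeqTrainingArguments, Seq2SeqTrainer)

MODEL_NAME = "t5-small"  # assumed checkpoint; the abstract does not name one

# Hypothetical aligned pair: source = MT output of an L1 Portuguese/Spanish
# sentence, target = the human reference translation of the same sentence.
pairs = {
    "source": ["The results obtained shows that the method have good performance."],
    "target": ["The results show that the method performs well."],
}

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_NAME)

def preprocess(batch):
    # Encode the MT output as model input and the reference as the label.
    inputs = tokenizer(batch["source"], truncation=True, max_length=128)
    labels = tokenizer(text_target=batch["target"], truncation=True, max_length=128)
    inputs["labels"] = labels["input_ids"]
    return inputs

train_set = Dataset.from_dict(pairs).map(
    preprocess, batched=True, remove_columns=["source", "target"]
)

args = Seq2SeqTrainingArguments(
    output_dir="l1-writing-improver",
    per_device_train_batch_size=8,
    num_train_epochs=3,
    learning_rate=3e-5,
)

trainer = Seq2SeqTrainer(
    model=model,
    args=args,
    train_dataset=train_set,
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()
```

At inference time, the fine-tuned model would be applied to sentences written by L1 Portuguese or L1 Spanish scholars, producing rewrites rather than only correcting isolated grammatical errors.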