Historical corpora are known to contain errors introduced by the OCR (optical character recognition) methods used in the digitization process, which are often said to degrade the performance of NLP systems. Correcting these errors manually is time-consuming, and a large share of the automatic approaches have relied on rules or supervised machine learning. We build on previous work on fully automatic unsupervised extraction of parallel data to train a character-based sequence-to-sequence NMT (neural machine translation) model that performs OCR error correction designed for English, and we adapt it to Finnish by proposing solutions that take the language's rich morphology into account. Our new method shows increased performance while remaining fully unsupervised, with the added benefit of spelling normalisation. The source code and models are available on GitHub and Zenodo.
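To illustrate the character-based setup the abstract describes, the sketch below shows one common way of preparing parallel OCR/ground-truth lines for a character-level NMT toolkit: each line is split into space-separated characters, with an underscore standing in for the original space so that word boundaries survive tokenization. This is a minimal, hypothetical example, not the authors' released code; the file names, the helper functions, and the toy Finnish pair are all assumptions for demonstration.

```python
# Minimal sketch (illustrative, not the paper's released code) of preparing
# (noisy OCR, corrected) line pairs for character-level seq2seq NMT training.

def to_char_tokens(line: str) -> str:
    """Split a line into space-separated characters, mapping ' ' to '_'
    so that whitespace is preserved through character tokenization."""
    return " ".join("_" if ch == " " else ch for ch in line.strip())

def write_char_corpus(pairs, src_path: str, tgt_path: str) -> None:
    """Write aligned source (noisy OCR) and target (corrected) files,
    one char-tokenized line per example, as expected by NMT toolkits."""
    with open(src_path, "w", encoding="utf-8") as src, \
         open(tgt_path, "w", encoding="utf-8") as tgt:
        for noisy, clean in pairs:
            src.write(to_char_tokens(noisy) + "\n")
            tgt.write(to_char_tokens(clean) + "\n")

if __name__ == "__main__":
    # Toy Finnish pair (hypothetical): the OCR output drops the umlauts,
    # confusing 'ä' with 'a' — a typical error class in digitized Finnish.
    pairs = [("talvella jarvi jaatyy", "talvella järvi jäätyy")]
    write_char_corpus(pairs, "train.src", "train.tgt")
```

Operating at the character level keeps the vocabulary tiny and lets the model generalize over Finnish's rich inflectional morphology, since corrections are learned as character transformations rather than word substitutions.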