We describe our two NMT systems submitted to the WMT2021 shared task in English-Czech news translation: CUNI-DocTransformer (document-level CUBBITT) and CUNI-Marian-Baselines. We improve the former with better sentence-segmentation pre-processing and with post-processing that fixes errors in numbers and units. We use the latter for experiments with various backtranslation techniques.