Document machine translation aims to translate the source sentence into the target language in the presence of additional contextual information. However, it typically suffers from a lack of document-level bilingual data. To remedy this, we propose a simple yet effective context-interactive pre-training approach, which takes advantage of external large-scale corpora. The proposed model performs inter-sentence generation to capture cross-sentence dependencies within the target document, and cross-sentence translation to make better use of valuable contextual information. Comprehensive experiments demonstrate that our approach achieves state-of-the-art performance on three benchmark datasets, significantly outperforming a variety of baselines.
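The abstract names two pre-training objectives but does not show how training examples for them would be built. The sketch below is a minimal, hypothetical illustration of how such examples could be derived from document-level data; the function names, the `<sep>` marker, and the context sizes are illustrative assumptions, not the authors' released implementation.

```python
# Illustrative sketch only: constructing examples for the two pre-training
# objectives described above. All names here are hypothetical.

from typing import List, Tuple

SEP = " <sep> "  # hypothetical sentence-boundary marker


def make_inter_sentence_examples(target_doc: List[str],
                                 context_size: int = 2) -> List[Tuple[str, str]]:
    """Inter-sentence generation: predict a target sentence from the
    preceding target-side sentences, so the model learns cross-sentence
    dependencies from target-language documents alone."""
    examples = []
    for i in range(context_size, len(target_doc)):
        context = SEP.join(target_doc[i - context_size:i])
        examples.append((context, target_doc[i]))
    return examples


def make_cross_sentence_examples(src_doc: List[str],
                                 tgt_doc: List[str],
                                 context_size: int = 1) -> List[Tuple[str, str]]:
    """Cross-sentence translation: translate the current source sentence
    with its preceding source sentences concatenated as extra context."""
    examples = []
    for i, (src, tgt) in enumerate(zip(src_doc, tgt_doc)):
        left_context = src_doc[max(0, i - context_size):i]
        model_input = SEP.join(left_context + [src])
        examples.append((model_input, tgt))
    return examples


if __name__ == "__main__":
    tgt = ["The cat sat on the mat.",
           "It looked very comfortable.",
           "Then it fell asleep."]
    src = ["Die Katze saß auf der Matte.",
           "Sie sah sehr bequem aus.",
           "Dann schlief sie ein."]
    print(make_inter_sentence_examples(tgt, context_size=1))
    print(make_cross_sentence_examples(src, tgt, context_size=1))
```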
References used
https://aclanthology.org/