In most neural machine translation distillation or stealing scenarios, the highest-scoring hypothesis of the target model (the teacher) is used to train a new model (the student). If reference translations are also available, better hypotheses (with respect to the references) can be oversampled, and poor ones either removed or undersampled. This paper explores the sampling-method landscape (pruning, hypothesis oversampling and undersampling, deduplication, and their combinations) with English-to-Czech and English-to-German MT models, using standard MT evaluation metrics. We show that careful oversampling, combined with the original data, leads to better performance than training only on the original data, only on the synthesized data, or on their direct combination.
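As a concrete illustration of the pruning, oversampling, and deduplication steps named above, the sketch below turns teacher n-best lists into a student training set. It is a minimal sketch under stated assumptions, not the paper's implementation: the names `build_student_data`, `score_fn`, `keep_ratio`, and `max_copies` are hypothetical, and `score_fn` is a toy stand-in for a real sentence-level MT metric such as BLEU or chrF (e.g. from sacrebleu).

```python
import random


def score_fn(hypothesis: str, reference: str) -> float:
    """Toy stand-in for a sentence-level MT metric.

    Computes token-overlap F1 against the reference; a real setup
    would call a metric library (e.g. sacrebleu) here instead.
    """
    hyp, ref = hypothesis.split(), reference.split()
    overlap = len(set(hyp) & set(ref))
    if not hyp or not ref or not overlap:
        return 0.0
    precision, recall = overlap / len(hyp), overlap / len(ref)
    return 2 * precision * recall / (precision + recall)


def build_student_data(nbest, references, keep_ratio=0.5, max_copies=3):
    """Build student training pairs from teacher n-best hypotheses.

    For each source sentence: deduplicate hypotheses, score them
    against the reference, prune the low-scoring tail, and oversample
    the survivors with a replication count proportional to their score.
    """
    training_pairs = []
    for src, hyps in nbest.items():
        ref = references[src]
        # Deduplication: identical hypotheses are scored only once.
        scored = sorted(((score_fn(h, ref), h) for h in set(hyps)),
                        reverse=True)
        # Pruning: keep only the best-scoring fraction of hypotheses.
        kept = scored[: max(1, int(len(scored) * keep_ratio))]
        top = kept[0][0] or 1.0  # avoid division by zero if all scores are 0
        for score, hyp in kept:
            # Oversampling: better hypotheses get more training copies.
            copies = max(1, round(max_copies * score / top))
            training_pairs.extend([(src, hyp)] * copies)
    random.shuffle(training_pairs)
    return training_pairs


if __name__ == "__main__":
    # Tiny demo with a single source sentence and three teacher hypotheses.
    nbest = {"the cat sat": ["die Katze saß", "der Katze saß", "die Katze saß"]}
    refs = {"the cat sat": "die Katze saß"}
    print(build_student_data(nbest, refs))
```

Score-proportional replication is only one possible oversampling policy; the "sampling-method landscape" the abstract refers to presumably covers several such weightings and their combinations with the original parallel data.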