The paper presents experiments in neural machine translation with lexical constraints into a morphologically rich language. In particular, we introduce a method based on constrained decoding which handles the inflected forms of lexical entries and does not require any modification to the training data or to the model architecture. To evaluate its effectiveness, we carry out experiments in two different scenarios: general and domain-specific. We compare our method with baseline translation, i.e. translation without lexical constraints, in terms of translation speed and translation quality. To evaluate how well the method handles the constraints, we propose new evaluation metrics which take into account the presence, placement, duplication, and inflectional correctness of lexical terms in the output sentence.
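As a rough illustration of the kind of constraint-level evaluation described above, the following is a minimal Python sketch. It is not the paper's implementation: the names (Constraint, evaluate_constraints), the token-level matching, and the use of a pre-listed set of acceptable surface forms to approximate inflectional correctness are all assumptions made here for clarity; a real system would consult a morphological analyser and align placement against a reference.

```python
# Hypothetical sketch of constraint-level evaluation: presence, duplication,
# placement, and an approximation of inflectional correctness.
from dataclasses import dataclass, field


@dataclass
class Constraint:
    lemma: str                                        # base (nominal) form of the lexical entry
    surface_forms: set = field(default_factory=set)   # acceptable inflected forms (assumed given)


def evaluate_constraints(output_tokens, constraints):
    """Report, for each constraint, whether it appears in the output,
    whether it is duplicated, where it appears, and whether the matched
    token is one of the pre-listed acceptable inflected forms."""
    report = []
    for c in constraints:
        forms = c.surface_forms | {c.lemma}
        positions = [i for i, tok in enumerate(output_tokens) if tok in forms]
        report.append({
            "lemma": c.lemma,
            "present": bool(positions),           # constraint appears at all
            "duplicated": len(positions) > 1,     # appears more than once
            "positions": positions,               # token indices (placement)
            "inflection_ok": any(output_tokens[i] in c.surface_forms
                                 for i in positions),
        })
    return report


if __name__ == "__main__":
    # Illustrative example only; the target language and terms are invented.
    hyp = "vláda schválila nové nařízení o ochraně údajů".split()
    constraints = [Constraint("nařízení", {"nařízení", "nařízením"})]
    print(evaluate_constraints(hyp, constraints))
```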