For updating the translations of Japanese statutes based on their amendments, we need to consider the translation "focality"; that is, we should only modify expressions that are relevant to the amendment and retain the others to avoid misconstruing its contents. In this paper, we introduce an evaluation metric and a corpus to improve focality evaluations. Our metric is called the Inclusive Score for DIfferential Translation (ISDIT). ISDIT consists of two factors: (1) the n-gram recall of expressions unaffected by the amendment and (2) the n-gram precision of the output compared to the reference. This metric supersedes an existing focality metric by simultaneously calculating the translation quality of the changed expressions in addition to that of the unchanged expressions. We also compile a new corpus for Japanese partial-amendment translation that secures the focality of the post-amendment translations, which an existing evaluation corpus does not. With the metric and the corpus, we examine the performance of existing translation methods on Japanese partial-amendment translations.
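The two factors of the metric can be illustrated with a small sketch. The abstract does not give the exact formula for combining the factors, so the combination below (a geometric mean) and all function names are assumptions for illustration only; the paper's actual definition of ISDIT may differ.

```python
from collections import Counter

def ngrams(tokens, n):
    """Multiset of n-grams of a token sequence."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def ngram_recall(output, retained, n=2):
    """Factor (1): fraction of n-grams from the retained (unchanged)
    expressions that survive in the system output."""
    ret_counts, out_counts = ngrams(retained, n), ngrams(output, n)
    total = sum(ret_counts.values())
    if total == 0:
        return 1.0
    matched = sum(min(c, out_counts[g]) for g, c in ret_counts.items())
    return matched / total

def ngram_precision(output, reference, n=2):
    """Factor (2): fraction of the output's n-grams that appear
    in the reference translation."""
    out_counts, ref_counts = ngrams(output, n), ngrams(reference, n)
    total = sum(out_counts.values())
    if total == 0:
        return 0.0
    matched = sum(min(c, ref_counts[g]) for g, c in out_counts.items())
    return matched / total

def isdit_like_score(output, reference, retained, n=2):
    """Toy combination of the two factors (geometric mean here;
    an assumption, not the paper's actual ISDIT formula)."""
    r = ngram_recall(output, retained, n)
    p = ngram_precision(output, reference, n)
    return (r * p) ** 0.5
```

An output that reproduces the unchanged expressions verbatim and matches the reference scores 1.0, while an output that needlessly rewrites retained wording is penalized through the recall factor, capturing the focality requirement.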