Towards Document-Level Human MT Evaluation: On the Issues of Annotator Agreement, Effort and Misevaluation


Abstract

Document-level human evaluation of machine translation (MT) has been attracting increasing interest in the community. However, little is known about the issues that arise when document-level methodologies are used to assess MT quality. In this article, we compare the inter-annotator agreement (IAA) scores and the effort required to assess quality across different document-level methodologies, and we examine the issue of misevaluation when sentences are evaluated out of context.
