
Exploiting objective annotations for measuring translation post-editing effort

Proceedings of the European Association for Machine Translation, May 2011.

Abstract

With the noticeable improvement in the overall quality of Machine Translation (MT) systems in recent years, post-editing of MT output is becoming a common practice among human translators. However, it is well known that the quality of a given MT system can vary significantly across translation segments, and that post-editing bad-quality translations is a tedious task that may require more effort than translating texts from scratch. Previous research dedicated to learning quality estimation models to flag such segments has shown that models based on human annotation achieve more promising results. However, it is not yet clear which form of human annotation is most appropriate for building such models. We experiment with models based on three annotation types (post-editing time, post-editing distance and post-editing effort scores) and show that estimations based on post-editing time, a simple and objective annotation, can minimise translation post-editing effort in a practical, task-based scenario. We also discuss some perspectives on the effectiveness, reliability and cost of each type of annotation.
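As an illustration of one of the annotation types mentioned above, the sketch below computes a simple post-editing distance as a token-level edit distance between the MT output and its post-edited version, normalised by the post-edited length (in the spirit of HTER). The function names and the exact normalisation are illustrative assumptions, not the metric defined in the paper.

```python
# Illustrative sketch of a "post-editing distance": token-level Levenshtein
# distance between an MT segment and its post-edited version, normalised by
# the post-edited length. This is an assumed, HTER-style formulation, not
# necessarily the exact metric used in the paper.

def edit_distance(source_tokens, target_tokens):
    """Levenshtein distance over tokens (insertions, deletions, substitutions)."""
    m, n = len(source_tokens), len(target_tokens)
    prev = list(range(n + 1))  # distances for the empty source prefix
    for i in range(1, m + 1):
        curr = [i] + [0] * n
        for j in range(1, n + 1):
            cost = 0 if source_tokens[i - 1] == target_tokens[j - 1] else 1
            curr[j] = min(prev[j] + 1,         # deletion
                          curr[j - 1] + 1,     # insertion
                          prev[j - 1] + cost)  # substitution
        prev = curr
    return prev[n]

def post_editing_distance(mt_output: str, post_edited: str) -> float:
    """Edit distance normalised by the length of the post-edited segment."""
    mt_tokens = mt_output.split()
    pe_tokens = post_edited.split()
    if not pe_tokens:
        return 0.0
    return edit_distance(mt_tokens, pe_tokens) / len(pe_tokens)

if __name__ == "__main__":
    mt = "the house blue is very pretty"
    pe = "the blue house is very pretty"
    print(f"post-editing distance: {post_editing_distance(mt, pe):.3f}")
```

Scores produced this way, like post-editing time, can serve as target labels when training segment-level quality estimation models such as those compared in the paper.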
