Data Ordering Patterns for Neural Machine Translation: An Empirical Study


Abstract

Recent work shows that the order in which training data is presented affects model performance in Neural Machine Translation (NMT). Several approaches involving dynamic data ordering and data sharding based on curriculum learning have been analysed for their performance gains and faster convergence. In this work, we empirically study several orderings of the training data based on different metrics and evaluate their impact on model performance. Our results show that fixing the order of the training data in advance, using perplexity scores from a pre-trained model, performs best and outperforms the default approach of randomly shuffling the training data every epoch.
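As a rough illustration of the best-performing strategy, the sketch below orders a parallel corpus once by perplexity under a scoring model and then keeps that order fixed across epochs. This is a minimal sketch under our own assumptions: the helper order_by_perplexity, the score_fn interface, and the easiest-first (ascending perplexity) direction are illustrative choices, not an interface prescribed by the study.

import math
from typing import Callable, List, Tuple

def order_by_perplexity(
    pairs: List[Tuple[str, str]],
    score_fn: Callable[[str, str], float],
) -> List[Tuple[str, str]]:
    """Return the corpus in a fixed order, lowest perplexity first.

    score_fn is assumed to return the average per-token cross-entropy
    of the target given the source under a pre-trained model;
    perplexity is then exp(loss).
    """
    scored = [(math.exp(score_fn(src, tgt)), src, tgt) for src, tgt in pairs]
    scored.sort(key=lambda item: item[0])  # ascending perplexity
    return [(src, tgt) for _, src, tgt in scored]

if __name__ == "__main__":
    corpus = [("guten Morgen", "good morning"), ("ein Haus", "a house")]
    # Toy stand-in scorer for demonstration; a real setup would query
    # a pre-trained NMT or language model for per-token loss.
    toy_score = lambda src, tgt: 0.5 * len(tgt.split())
    ordered = order_by_perplexity(corpus, toy_score)
    # Training then iterates over `ordered` in this fixed sequence every
    # epoch, rather than reshuffling the corpus at each epoch.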