Recurrent neural networks are increasingly applied to machine translation in natural language processing. Like many other languages, Bangla has a large vocabulary, and improving English-to-Bangla machine translation would be a significant contribution to Bangla language processing. This paper describes an architecture for an English-to-Bangla machine translation system implemented with an encoder-decoder recurrent neural network. The model uses a knowledge-based context vector to map English and Bangla words. The performance of the model is measured for different activation functions; the best results are achieved with a linear activation in the encoder layer and a tanh activation in the decoder layer. Comparing GRU and LSTM layers, the GRU performs better. The attention layers are implemented with softmax and sigmoid activation functions. The model outperforms previous state-of-the-art systems in terms of cross-entropy loss. The paper thus lays out the structure of an English-to-Bangla machine translation system and identifies effective activation functions for it.
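As a rough illustration of the architecture this abstract describes (the abstract itself gives no code), the sketch below wires up an encoder-decoder GRU with softmax attention in Keras. The vocabulary sizes, embedding width, hidden size, and the choice of Keras are placeholder assumptions, not details from the paper.

```python
# Hypothetical sketch of the described encoder-decoder GRU with attention.
# Vocabulary sizes, dimensions, and the Keras framework are assumptions.
import tensorflow as tf
from tensorflow.keras import layers, Model

SRC_VOCAB, TGT_VOCAB, EMB, HID = 30_000, 30_000, 256, 512  # assumed sizes

# Encoder: GRU with linear activation, per the reported best setting.
src = layers.Input(shape=(None,), dtype="int32")
enc_emb = layers.Embedding(SRC_VOCAB, EMB)(src)
enc_out, enc_state = layers.GRU(
    HID, activation="linear", return_sequences=True, return_state=True
)(enc_emb)

# Decoder: GRU with tanh activation, initialized from the encoder state.
tgt = layers.Input(shape=(None,), dtype="int32")
dec_emb = layers.Embedding(TGT_VOCAB, EMB)(tgt)
dec_out = layers.GRU(HID, activation="tanh", return_sequences=True)(
    dec_emb, initial_state=enc_state
)

# Attention over encoder outputs; Keras Attention applies a softmax
# internally over the alignment scores (the softmax-attention case).
ctx = layers.Attention()([dec_out, enc_out])
merged = layers.Concatenate()([dec_out, ctx])
logits = layers.Dense(TGT_VOCAB)(merged)

model = Model([src, tgt], logits)
# Cross-entropy loss, matching the metric the abstract reports.
model.compile(
    optimizer="adam",
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
)
```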
Differences in word order between source and target languages significantly influence translation quality in machine translation. Preordering can effectively address this problem. Previous preordering methods require manual feature design, making languag
Existing work in translation has demonstrated the potential of massively multilingual machine translation by training a single model able to translate between any pair of languages. However, much of this work is English-Centric, training only on data w
Neural Machine Translation (NMT) has become a popular technology in recent years, and the encoder-decoder framework is the mainstream approach. It is clear that the quality of the semantic representations produced by the encoder is crucial an
In this paper, we propose Neural Phrase-to-Phrase Machine Translation (NP$^2$MT). Our model uses a phrase attention mechanism to discover relevant input (source) segments that are used by a decoder to generate output (target) phrases. We also design
Encoder-decoder based neural machine translation usually generates the target sequence token by token from left to right. Due to error propagation, tokens on the right side of the generated sequence are usually of poorer quality than those in t
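To make the left-to-right generation (and the resulting error propagation) concrete, here is a minimal greedy decoding loop; the `next_token_logits` function is a hypothetical stand-in for one step of a trained decoder, not part of any of the papers above.

```python
import numpy as np

def next_token_logits(prefix: list[int]) -> np.ndarray:
    """Hypothetical stand-in for one decoder step of a trained NMT model."""
    rng = np.random.default_rng(seed=sum(prefix))  # deterministic dummy scores
    return rng.normal(size=8)  # toy 8-token target vocabulary

BOS, EOS = 0, 1

def greedy_decode(max_len: int = 20) -> list[int]:
    # Tokens are emitted strictly left to right; each step conditions on
    # everything generated so far, so an early mistake is fed back into
    # every later step -- the error propagation the abstract describes.
    prefix = [BOS]
    for _ in range(max_len):
        tok = int(np.argmax(next_token_logits(prefix)))
        prefix.append(tok)
        if tok == EOS:
            break
    return prefix[1:]

print(greedy_decode())
```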