Baseline Transliteration Corpus for Improved English-Amharic Machine Translation
Abstract
Machine translation (MT) between English and Amharic is one of the least studied
and, performance-wise, least successful topics in the MT field. We therefore propose
to apply corpus transliteration and augmentation techniques to address
this issue and improve MT performance for this language pair. This paper presents
the creation, augmentation, and use of an Amharic-to-English transliteration
corpus for NMT experiments. The created corpus has a total of 450,608 parallel
sentences before preprocessing and is used to train three different NMT architectures
after preprocessing. The models are built using Recurrent Neural Networks
with an attention mechanism (RNNs), Gated Recurrent Units (GRUs), and Transformers.
For the Transformer-based experiments, three Transformer models with different
hyperparameter settings are trained. The BLEU scores of all NMT models in this
study improve on previous work, and one of the three Transformer models achieves
the highest BLEU score yet reported for this language pair.
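The corpus-creation step relies on romanizing Ge'ez-script Amharic text into Latin characters. A minimal character-level sketch is shown below; the tiny mapping table and function name are illustrative assumptions, not the authors' actual implementation, which would cover the full Ethiopic syllabary (e.g. following a convention such as SERA).

```python
# Minimal sketch of character-level Amharic-to-Latin transliteration.
# FIDEL_TO_LATIN covers only a handful of fidel characters for
# illustration; a complete system would enumerate the whole syllabary.
FIDEL_TO_LATIN = {
    "ሰ": "se",
    "ላ": "la",
    "ም": "m",
    "አ": "a",
    "በ": "be",
}

def transliterate(text: str) -> str:
    """Replace each known fidel with its Latin rendering; pass
    unknown characters (spaces, punctuation, etc.) through unchanged."""
    return "".join(FIDEL_TO_LATIN.get(ch, ch) for ch in text)
```

For example, `transliterate("ሰላም")` yields `"selam"`; applying such a function to every Amharic sentence produces the Latin-script side of the transliteration corpus.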
DOI: https://doi.org/10.31449/inf.v47i6.4395
This work is licensed under a Creative Commons Attribution 3.0 License.