A Lightweight Translation Architecture for Embedded Devices via Multilingual BERT Distillation and Quantization

Wang Yiqing

Abstract


To address the high computational overhead of multilingual BERT (mBERT) models and the difficulty of deploying them on embedded devices, this paper proposes a lightweight translation model based on knowledge distillation. The teacher-student knowledge transfer mechanism combines layer-wise and attention-based distillation strategies with pruning and 8-bit quantization, jointly improving model compression and inference speed. Translation quality was evaluated with BLEU scores, comparing the student model against the teacher model and baseline systems, and the student remains competitive. Experimental results show that the model achieves a BLEU score of 28.7 on the WMT-14 English-German task, only 1.4 points below the teacher model, retaining about 95.3% of its translation quality; accuracy on the XNLI cross-lingual inference dataset reaches 78.3%, only 3.1% below the teacher model. On the embedded Jetson Nano device, the inference latency of the distilled student model drops from the teacher model's 1280 ms to 195 ms with the aid of hardware acceleration, a speedup of approximately 6.56x. The model size is reduced from the original 4.2 GB to 650 MB, a reduction of 86.4%, with a BLEU drop of no more than 1.8, so the compressed model retains most of the performance of the original model. It has been successfully deployed on edge platforms, including the Raspberry Pi 4B, making it well suited to resource-constrained environments. In terms of parameter count, the original mBERT has about 1100M parameters and the distilled model 350M; after combining pruning and 8-bit quantization, only 137.5M remain, and inference speed increases to 8 times that of the original model. In addition, introducing the attention distillation mechanism in low-resource scenarios improves the model's BLEU score by 4.2%, demonstrating its effectiveness in strengthening semantic alignment for languages with limited resources. Power measurements show the student model averages 4-6 W, about 35% lower than the original model, and its memory footprint on the Raspberry Pi 4B during inference is 320 MB, a significant reduction from the 1.5 GB required by the original mBERT. These optimizations improve both translation efficiency and energy efficiency, and provide a practical solution for deploying multilingual translation on future smart devices.
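The following is a minimal PyTorch sketch of the combined layer-wise plus attention-based distillation objective described above. It assumes the teacher's and student's hidden states and attention maps are exposed (e.g., via `output_hidden_states=True` and `output_attentions=True` in a Transformer implementation) and that the two models use the same number of attention heads; the layer mapping, temperature, and loss weights are illustrative placeholders, not the paper's reported settings.

```python
# Sketch of a combined soft-label + layer-wise + attention distillation loss.
# Hyperparameters (T, alpha, beta, gamma) and the layer mapping are illustrative.
import torch
import torch.nn.functional as F


def distillation_loss(student_logits, teacher_logits,
                      student_hidden, teacher_hidden,   # lists of (B, L, H) tensors
                      student_attn, teacher_attn,       # lists of (B, heads, L, L) tensors
                      layer_map,                        # student layer i -> teacher layer layer_map[i]
                      proj=None,                        # optional nn.Linear if hidden sizes differ
                      T=2.0, alpha=0.5, beta=0.3, gamma=0.2):
    # 1) Soft-label distillation on output logits with temperature scaling.
    kd = F.kl_div(F.log_softmax(student_logits / T, dim=-1),
                  F.softmax(teacher_logits / T, dim=-1),
                  reduction="batchmean") * (T * T)

    # 2) Layer-wise hidden-state distillation on the mapped layers.
    hid = 0.0
    for s_idx, t_idx in enumerate(layer_map):
        s_h = student_hidden[s_idx]
        s_h = proj(s_h) if proj is not None else s_h
        hid = hid + F.mse_loss(s_h, teacher_hidden[t_idx])

    # 3) Attention-map distillation on the same mapped layers
    #    (assumes equal head counts in teacher and student).
    att = 0.0
    for s_idx, t_idx in enumerate(layer_map):
        att = att + F.mse_loss(student_attn[s_idx], teacher_attn[t_idx])

    n = len(layer_map)
    return alpha * kd + beta * hid / n + gamma * att / n
```

The post-training compression step can likewise be sketched with standard PyTorch utilities, assuming a student model whose Linear layers dominate the parameter count; the 30% pruning ratio below is a placeholder rather than the configuration used in the paper.

```python
# Sketch of magnitude-based pruning followed by dynamic 8-bit quantization.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune


def compress(student: nn.Module, prune_amount: float = 0.3) -> nn.Module:
    # L1 (magnitude) unstructured pruning of Linear weights.
    for module in student.modules():
        if isinstance(module, nn.Linear):
            prune.l1_unstructured(module, name="weight", amount=prune_amount)
            prune.remove(module, "weight")  # make the pruning permanent

    # Post-training dynamic quantization of Linear layers to 8-bit integers.
    return torch.quantization.quantize_dynamic(
        student, {nn.Linear}, dtype=torch.qint8)
```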




DOI: https://doi.org/10.31449/inf.v49i28.9992
