Integration of Depthwise Separable CNN and Seq2Seq for Enhanced Chinese TTS Systems

Huanxin Dou, Zhenhua Zhao

Abstract


Speech synthesis technology has become increasingly common in real-time broadcasting systems, smartphone voice assistants, and other applications thanks to advances in artificial intelligence. However, Chinese speech synthesis still faces challenges such as the influence of Chinese tones, the impact of polyphonic characters on synthesis results, and the naturalness of the synthesized tones. This study therefore designed a novel Chinese text-to-speech system that integrates a sequence-to-sequence model with convolutional neural networks. Multiple modules were designed within the system to address challenges such as Chinese polyphonic characters and tones. The convolutional framework of the sequence-to-sequence model adopts a hybrid architecture of depthwise separable convolution and a highway network, optimized in three respects: depthwise separable convolution to reduce the parameter count, a highway network to maintain gradient flow, and causal convolution to constrain temporal dependencies. The results show that, compared with mainstream text-to-speech models, the proposed model achieves a Mel-frequency cepstral distortion of 4.5288 dB, a mean opinion score of 4.15, a parameter count of 68.4487 × 10⁶, and a training time of 16.57 hours. The system preserves the naturalness of synthesized speech while improving synthesis efficiency. This study provides an efficient solution for the practical application of Chinese text-to-speech, suited to scenarios with limited computing resources, and its lightweight design offers guidance for the development of low-power speech synthesis technology.
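As a rough illustration of the hybrid block the abstract describes, the sketch below combines a depthwise separable, causal 1-D convolution with a highway connection in PyTorch. The module name, hyperparameters, and gating arrangement are illustrative assumptions for clarity, not the authors' implementation.

```python
# A minimal sketch (not the paper's code) of a depthwise separable,
# causal convolution wrapped in a highway connection.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DepthwiseSeparableCausalHighway(nn.Module):
    def __init__(self, channels: int, kernel_size: int = 3, dilation: int = 1):
        super().__init__()
        # Left-pad only, so each output frame depends on past frames
        # alone (causal convolution).
        self.causal_pad = (kernel_size - 1) * dilation
        # Depthwise convolution: one filter per channel (groups=channels)
        # cuts the parameter count versus a standard convolution.
        self.depthwise = nn.Conv1d(channels, channels, kernel_size,
                                   dilation=dilation, groups=channels)
        # Pointwise 1x1 convolution mixes information across channels;
        # it emits two banks: the candidate activation H(x) and the
        # highway transform gate T(x).
        self.pointwise = nn.Conv1d(channels, 2 * channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, time)
        y = F.pad(x, (self.causal_pad, 0))   # causal left padding
        y = self.pointwise(self.depthwise(y))
        h, t = y.chunk(2, dim=1)
        gate = torch.sigmoid(t)
        # Highway connection: gate * H(x) + (1 - gate) * x keeps a
        # direct gradient path through a deep stack of such blocks.
        return gate * torch.relu(h) + (1.0 - gate) * x

if __name__ == "__main__":
    block = DepthwiseSeparableCausalHighway(channels=64, kernel_size=3)
    mel = torch.randn(8, 64, 100)            # (batch, channels, frames)
    print(block(mel).shape)                  # torch.Size([8, 64, 100])
```

For C channels and kernel size k, the depthwise–pointwise pair above costs roughly C·k + 2C² weights, versus 2C²·k for an equivalent standard convolution, which is where the parameter savings reported in the abstract come from.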




DOI: https://doi.org/10.31449/inf.v49i21.9743

This work is licensed under a Creative Commons Attribution 3.0 License.