Human-Computer Interaction Based on ASGCN Displacement Graph Neural Networks

Yiping Yang, Jijun Liu, Liang Zhao, Yuchen Yin

Abstract


Intelligent terminal devices have become a popular research topic in recent years, but their development depends on high-quality human-computer interaction models. Behavioral action recognition is one of the main ways to realize human-computer interaction, yet current action recognition models still suffer from noticeable time delay and low recognition accuracy. In light of this, the study builds an intelligent human action capture and recognition model that combines an action-structured graph convolutional network with an encoder-decoder architecture and long short-term memory networks, and assesses the model's performance through controlled experiments. The results show that the loss of the proposed model after convergence on the test dataset was 0.56% and the average accuracy was 95.39%, both outperforming the control experiments. Meanwhile, the proposed model's average F1 score was 89.79%, which was 11.13% and 3.82% higher than those of the control models. According to the experimental findings, the proposed model improves the accuracy and F1 score of action recognition, so the proposed behavior recognition model has practical value. In addition, behavior recognition experiments in real scenes confirm the feasibility of the model, with higher accuracy and reduced delay.
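
The abstract describes a model that pairs a graph convolution over the skeleton with an LSTM encoder-decoder; the sketch below illustrates that general combination in PyTorch. It is not the authors' ASGCN implementation: the joint count, adjacency matrix, layer sizes, and class count are placeholder assumptions.

```python
# Minimal sketch (not the authors' ASGCN implementation): skeleton-based action
# recognition combining a graph convolution over joints with an LSTM
# encoder-decoder over time. All sizes and the adjacency are assumptions.
import torch
import torch.nn as nn

class GraphConv(nn.Module):
    """Single graph convolution over the skeleton joints: X' = relu(A X W)."""
    def __init__(self, in_feats, out_feats, adjacency):
        super().__init__()
        self.register_buffer("A", adjacency)           # (J, J) normalized adjacency
        self.linear = nn.Linear(in_feats, out_feats)

    def forward(self, x):                               # x: (B, T, J, C)
        x = torch.einsum("ij,btjc->btic", self.A, x)    # aggregate neighbouring joints
        return torch.relu(self.linear(x))

class SkeletonActionRecognizer(nn.Module):
    """GCN feature extractor followed by an LSTM encoder-decoder and classifier."""
    def __init__(self, num_joints, in_feats, hidden, num_classes, adjacency):
        super().__init__()
        self.gcn = GraphConv(in_feats, hidden, adjacency)
        self.encoder = nn.LSTM(num_joints * hidden, hidden, batch_first=True)
        self.decoder = nn.LSTM(hidden, hidden, batch_first=True)
        self.classifier = nn.Linear(hidden, num_classes)

    def forward(self, x):                               # x: (B, T, J, C) joint coordinates
        b, t, j, _ = x.shape
        feats = self.gcn(x).reshape(b, t, -1)           # per-frame graph features
        enc_out, state = self.encoder(feats)            # temporal encoding
        dec_out, _ = self.decoder(enc_out, state)       # decode the temporal context
        return self.classifier(dec_out.mean(dim=1))     # pool over time, then classify

if __name__ == "__main__":
    J = 18                                              # assumed joint count
    A = torch.eye(J)                                    # identity adjacency as a stand-in
    model = SkeletonActionRecognizer(J, in_feats=3, hidden=64, num_classes=10, adjacency=A)
    clip = torch.randn(2, 30, J, 3)                     # 2 clips, 30 frames, 3D joints
    print(model(clip).shape)                            # -> torch.Size([2, 10])
```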



DOI: https://doi.org/10.31449/inf.v48i10.5961

This work is licensed under a Creative Commons Attribution 3.0 License.