Mining Key Influencing Factors of Explainable AI in a Teaching Digital Decision Support System
Abstract
With the accelerating digital transformation of education, teaching management systems face higher demands for intelligence and interpretability. This study focuses on identifying the key factors in teaching decision-making, with the aim of building a decision support model that balances prediction accuracy with transparent explanation. An interpretable machine learning framework combining XGBoost and SHAP was adopted, and the model was trained on 916,000 teaching-behavior records. The research covers model structure design, feature contribution analysis, variable interaction modeling, and evaluation of how explanations change across teaching stages. Experimental results show that the XGBoost model achieves an RMSE of 3.12, an MAE of 2.48, and an R of 0.89, outperforming the baseline models. The SHAP explanations indicate that teacher interaction frequency and homework completion rate are the most influential variables, with mean SHAP values of 0.213 and 0.195, respectively. The influence of variables is distributed differently across course types, and the teaching stage has a significant moderating effect on the explanation structure. The enhanced model raises the user satisfaction score from 6.8 to 8.9 and the decision visualization score from 6.2 to 9.1. The study concludes that a teaching prediction model with an integrated explanation mechanism offers both good accuracy and high transparency. The results provide data support and a methodological path for teaching behavior optimization, personalized intervention, and system-level decision support.
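The abstract describes a pipeline of fitting an XGBoost regressor and then ranking features by their mean absolute SHAP values. The following is a minimal sketch of that pattern; the synthetic data, feature count, and hyperparameters are illustrative assumptions, not the authors' actual configuration.

```python
# Sketch of an XGBoost + SHAP workflow: train a gradient-boosted regressor,
# report RMSE/MAE, and rank features by mean |SHAP| value.
# All data and parameters here are hypothetical placeholders.
import numpy as np
import xgboost as xgb
import shap
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, mean_absolute_error

# Synthetic stand-in for teaching-behavior features (e.g. teacher
# interaction frequency, homework completion rate) and a performance target.
rng = np.random.default_rng(0)
X = rng.random((1000, 5))
y = 2.0 * X[:, 0] + X[:, 1] + rng.normal(0, 0.1, 1000)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Gradient-boosted regression model (hyperparameters are assumptions).
model = xgb.XGBRegressor(n_estimators=200, max_depth=4, learning_rate=0.1)
model.fit(X_train, y_train)

# Accuracy metrics of the kind reported in the abstract.
pred = model.predict(X_test)
rmse = mean_squared_error(y_test, pred) ** 0.5
mae = mean_absolute_error(y_test, pred)
print(f"RMSE={rmse:.3f}  MAE={mae:.3f}")

# SHAP values attribute each prediction to individual features; averaging
# their absolute values over the test set yields a global importance ranking.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)
global_importance = np.abs(shap_values).mean(axis=0)
print("mean |SHAP| per feature:", np.round(global_importance, 3))
```

In this setup the first two features dominate the mean |SHAP| ranking by construction, mirroring how the study identifies teacher interaction frequency and homework completion rate as the most influential variables.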
DOI: https://doi.org/10.31449/inf.v49i26.10890
This work is licensed under a Creative Commons Attribution 3.0 License.