A Probabilistic Neural Network-Based Dynamic Perception Framework for Rehabilitation Robots Using Multi-Modal Sensor Fusion

Lin Gui, Zhenlai Chen

Abstract


Current mechanical dynamic perception methods for rehabilitation-assisted robots suffer from low accuracy and low efficiency. To address this problem, this study proposes a robot mechanical dynamic perception model based on multi-modal sensor fusion. An interaction-force sensor and a weight sensor collect the interaction force and weight-loss values during patient motion, yielding 5,378 samples. The Kalman filtering algorithm is then applied to process the data. Finally, the sensor information is combined by weighted fusion, and a probabilistic neural network determines the patient's motion, achieving intelligent dynamic perception in rehabilitation training. The probabilistic neural network architecture comprises an input layer, a hidden layer, and an output layer. The experiments were simulated and analyzed in MATLAB, and five volunteers were recruited for practical trials. The results showed that the proposed mechanical dynamic perception model can accurately perceive the user's motion intention through multi-modal sensors and make accurate judgments, with a judgment accuracy of 95.52% and a response time of only 0.85 s. Compared with traditional mechanical dynamic perception methods for rehabilitation-assisted robots, the model significantly improves both accuracy and efficiency. The proposed method can further enhance the dynamic perception ability of robots through multi-modal sensor fusion, thereby improving the intelligence of rehabilitation training.
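The pipeline described in the abstract — Kalman filtering of each raw sensor stream, weighted fusion of the filtered signals, and probabilistic neural network (PNN) classification of the patient's motion — can be sketched as follows. This is a minimal illustrative Python implementation under stated assumptions, not the authors' MATLAB code: the noise variances `q` and `r`, the kernel width `sigma`, and the fusion weights are placeholder values, and a scalar (one-dimensional) Kalman filter stands in for whatever state model the paper uses.

```python
import numpy as np

def kalman_filter_1d(z, q=1e-3, r=1e-1):
    """Smooth a noisy scalar sensor stream z with a 1-D Kalman filter.
    q = process noise variance, r = measurement noise variance (assumed)."""
    x, p = float(z[0]), 1.0          # initial state estimate and covariance
    out = np.empty(len(z))
    out[0] = x
    for k in range(1, len(z)):
        p = p + q                    # predict: covariance grows by process noise
        g = p / (p + r)              # Kalman gain
        x = x + g * (z[k] - x)       # update with the new measurement
        p = (1.0 - g) * p
        out[k] = x
    return out

def weighted_fuse(features, weights):
    """Weighted fusion: combine per-sensor feature vectors into one vector."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                  # normalize weights to sum to 1
    return sum(wi * np.asarray(f, float) for wi, f in zip(w, features))

class PNN:
    """Probabilistic neural network: input layer -> pattern (hidden) layer of
    Gaussian kernels centered on training samples -> per-class summation ->
    argmax decision. sigma is the Parzen-window smoothing parameter."""
    def __init__(self, sigma=0.5):
        self.sigma = sigma
    def fit(self, X, y):
        self.X = np.asarray(X, float)
        self.y = np.asarray(y)
        self.classes = np.unique(self.y)
        return self
    def predict(self, X):
        X = np.atleast_2d(np.asarray(X, float))
        d2 = ((X[:, None, :] - self.X[None, :, :]) ** 2).sum(-1)
        k = np.exp(-d2 / (2.0 * self.sigma ** 2))      # pattern-layer outputs
        scores = np.stack([k[:, self.y == c].mean(1) for c in self.classes], 1)
        return self.classes[scores.argmax(1)]          # output-layer decision
```

A typical use would filter the force and weight streams separately, extract a feature vector from each, fuse them with `weighted_fuse`, and feed the fused vector to a `PNN` trained on labeled motion samples.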




DOI: https://doi.org/10.31449/inf.v49i27.8684

This work is licensed under a Creative Commons Attribution 3.0 License.