Deep Reinforcement Learning Framework for Real-Time Personalized Travel Route Recommendation via LSTM-CNN and Multi-Head Attention Fusion
Abstract
With the development of smart tourism, traditional static recommendation methods struggle to cope with dynamically changing real-time contexts (RTCs), such as traffic and weather, in urban environments, and they cannot integrate user preferences (UPs) with real-time contextual awareness, which results in poor recommendation adaptability. This paper designs a highly adaptable, personalized, and dynamic travel route (TR) recommendation model based on deep reinforcement learning (DRL). The model captures differences in UPs through a Long Short-Term Memory-Convolutional Neural Network (LSTM-CNN), achieving personalization, and applies a multi-head attention mechanism (MHAM) to deeply fuse UPs with real-time contextual states such as traffic and weather. A composite reward function (CRF) is designed by jointly modeling preferences and context, and end-to-end training is performed within an Actor-Critic (AC) framework. Performance is assessed with metrics including HR@5, HR@10, coverage, and median response latency (MRL). Experiments on the Foursquare New York City (FS-NYC) and Tokyo Check-ins (TCI) datasets show that the proposed model achieves a Top-5 hit rate of 53% and a Top-10 hit rate of 84%, with an MRL of 1.07 seconds, and that it significantly improves adaptability to dynamic scenarios compared with baseline methods. This research provides a personalized recommendation paradigm that combines high accuracy with real-time responsiveness for dynamic travel scenarios, effectively improving user experience and service quality.
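To make the described architecture concrete, the following is a minimal PyTorch sketch of the pipeline the abstract outlines: an LSTM-CNN encoder for user visit histories, multi-head attention fusing preference features with real-time context (traffic, weather), and actor/critic heads for AC training. All class names, layer sizes, and the single-query fusion layout are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of the described architecture (assumed shapes and
# hyperparameters are illustrative, not taken from the paper).
import torch
import torch.nn as nn


class PreferenceEncoder(nn.Module):
    """LSTM-CNN feature extractor over a user's POI visit-history sequence."""

    def __init__(self, vocab_size, embed_dim=64, hidden_dim=128, n_filters=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        # 1-D convolution over LSTM outputs captures local visit patterns.
        self.conv = nn.Conv1d(hidden_dim, n_filters, kernel_size=3, padding=1)
        self.pool = nn.AdaptiveMaxPool1d(1)

    def forward(self, poi_ids):                       # (batch, seq_len)
        x = self.embed(poi_ids)                       # (batch, seq_len, embed_dim)
        h, _ = self.lstm(x)                           # (batch, seq_len, hidden_dim)
        h = self.conv(h.transpose(1, 2))              # (batch, n_filters, seq_len)
        return self.pool(h).squeeze(-1)               # (batch, n_filters)


class RoutePolicy(nn.Module):
    """Actor-critic heads on top of MHAM fusion of user-preference features
    and real-time context features (e.g. traffic, weather)."""

    def __init__(self, vocab_size, context_dim=16, d_model=64, n_heads=4, n_actions=200):
        super().__init__()
        self.encoder = PreferenceEncoder(vocab_size, n_filters=d_model)
        self.context_proj = nn.Linear(context_dim, d_model)
        self.fusion = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.actor = nn.Linear(d_model, n_actions)    # scores over candidate next POIs
        self.critic = nn.Linear(d_model, 1)           # state-value estimate

    def forward(self, poi_ids, context):
        pref = self.encoder(poi_ids).unsqueeze(1)      # (batch, 1, d_model)
        ctx = self.context_proj(context).unsqueeze(1)  # (batch, 1, d_model)
        # Preference representation attends over the real-time context.
        fused, _ = self.fusion(pref, ctx, ctx)         # (batch, 1, d_model)
        fused = fused.squeeze(1)
        return self.actor(fused), self.critic(fused)
```

Under this sketch, an AC update would combine a preference-match term and a context-feasibility term into the composite reward (e.g. r = alpha * preference_score + beta * context_score), and use the advantage r + gamma * V(s') - V(s) to weight the policy-gradient loss on the actor head; the exact reward weighting is specified in the paper, not here.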
Full Text: PDF
DOI: https://doi.org/10.31449/inf.v49i22.10944
This work is licensed under a Creative Commons Attribution 3.0 License.