High-Precision Photogrammetric 3D Modeling Technology Based on Multi-Source Data Fusion and Deep Learning-Enhanced Feature Learning Using Internet of Things Big Data
Abstract
As technology advances and application demands grow, high-precision three-dimensional (3D) modeling is increasingly essential for urban planning, disaster management, and cultural heritage protection. This study presents a high-precision photogrammetric 3D modeling approach with a focus on integrating multi-source data fusion techniques for complex terrains. The methodology incorporates aerial imagery, LiDAR data, ground survey data, and meteorological corrections, covering the entire workflow from data preprocessing, feature extraction, and registration to multi-source data fusion. Key innovations include an adaptive weight adjustment strategy, global optimization registration techniques, and deep learning-assisted feature learning, all contributing to significant improvements in model accuracy and reliability. Experimental results show an X% improvement in spatial accuracy and a Y% reduction in mean squared error (MSE), along with enhanced recovery of morphological structure and improved visual quality. These improvements have been validated through practical applications and have received positive feedback from users. The detailed technical implementation of the data fusion algorithms, together with the quantitative performance metrics, further demonstrates the efficacy of the proposed methodology in real-world scenarios.
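The abstract does not detail the adaptive weight adjustment strategy used in the fusion step. Purely as an illustrative sketch, and not the authors' implementation, the snippet below shows one common way such a step can be realized: inverse-variance weighting of co-registered elevation estimates from aerial photogrammetry, LiDAR, and ground survey, where per-point weights adapt to source-specific uncertainty. The function and variable names (fuse_elevations, sigmas, etc.) are hypothetical.

```python
import numpy as np

def fuse_elevations(estimates, sigmas):
    """
    Fuse per-point elevation estimates from multiple co-registered sources
    using inverse-variance (adaptive) weighting.

    estimates : (n_sources, n_points) array of elevation values
    sigmas    : (n_sources, n_points) array of per-point standard deviations
                (e.g. from sensor specifications or local residual analysis)
    Returns the fused elevation and fused standard deviation per point.
    """
    inv_var = 1.0 / np.square(sigmas)                 # higher confidence -> larger weight
    weights = inv_var / inv_var.sum(axis=0, keepdims=True)  # normalize across sources
    fused = (weights * estimates).sum(axis=0)
    fused_sigma = np.sqrt(1.0 / inv_var.sum(axis=0))  # combined uncertainty
    return fused, fused_sigma

if __name__ == "__main__":
    # Toy example: 3 sources (photogrammetry, LiDAR, ground survey), 4 terrain points
    z = np.array([[101.2, 98.7, 105.1, 99.9],    # aerial photogrammetry
                  [101.0, 98.5, 104.8, 100.1],   # LiDAR
                  [100.9, 98.6, 104.9, 100.0]])  # ground survey (interpolated where sparse)
    s = np.array([[0.30, 0.30, 0.35, 0.30],
                  [0.10, 0.10, 0.12, 0.10],
                  [0.05, 0.05, 0.50, 0.05]])     # larger sigma where survey coverage is poor
    fused, fused_sigma = fuse_elevations(z, s)
    print("fused elevations:", np.round(fused, 3))
    print("fused sigma:     ", np.round(fused_sigma, 3))
```

In practice the per-point sigmas themselves would be adapted from local registration residuals or learned confidence scores, which is where the paper's adaptive weighting and deep learning-assisted feature learning would plug in.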
Full Text: PDF
DOI: https://doi.org/10.31449/inf.v49i11.7137

This work is licensed under a Creative Commons Attribution 3.0 License.