Network Spectrum Resource Allocation and Optimization Based on Deep Learning and TRDM
Abstract
This study proposes TRDM, a Transformer-based real-time adaptive scheduling method that improves performance under dynamic network loads and low-latency requirements. Compared with traditional scheduling methods, TRDM shows significant advantages in throughput, latency, QoS satisfaction, and computational efficiency. To verify the effectiveness of the method, we use a dataset of 10,000 samples covering a variety of network environments and load conditions, evaluate TRDM on standard deep learning benchmarks, and compare it with both state-of-the-art (SOTA) and traditional methods under identical experimental settings. The advantages in throughput and latency are most pronounced in scenarios with high load and strict real-time requirements. The model is trained with the Soft Actor-Critic (SAC) reinforcement learning algorithm and tested in multiple network environments, confirming its superiority in real-time adaptability and computational efficiency. Compared with the SOTA method, TRDM offers better real-time performance and adaptability, especially under low-latency and dynamic-load conditions, and it remains computationally efficient in complex network environments, making it suitable for practical scenarios such as large-scale IoT deployments and urban cellular networks. The experimental settings include hyperparameters such as a 6-layer self-attention Transformer architecture, a learning rate of 0.001, and a batch size of 128. In comparative experiments, TRDM outperforms existing methods on multiple key performance indicators, and the differences are statistically significant. This study provides an effective solution for real-time network scheduling and an important reference for future deployment in applications such as the IoT and remote urban networks.
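The paper does not publish source code, so the sketch below is only a minimal, hypothetical PyTorch rendering of the configuration stated in the abstract: a 6-layer self-attention encoder optimized with a learning rate of 0.001 and a batch size of 128. The state dimensionality, number of channels, and size of the scheduling action space are assumptions introduced purely for illustration, not values from the paper.

```python
# Illustrative sketch only: module names, feature dimensions, and the
# action space are assumptions; the paper does not release code.
import torch
import torch.nn as nn

class TRDMScheduler(nn.Module):
    """Hypothetical Transformer encoder that scores scheduling actions
    from a sequence of per-channel network-state features."""
    def __init__(self, state_dim=16, d_model=128, n_heads=8,
                 n_layers=6, n_actions=32):
        super().__init__()
        self.embed = nn.Linear(state_dim, d_model)
        layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, batch_first=True)
        # Six self-attention layers, matching the abstract's hyperparameters.
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Linear(d_model, n_actions)

    def forward(self, states):
        # states: (batch, n_channels, state_dim)
        h = self.encoder(self.embed(states))
        # Pool over channels, then emit logits over scheduling actions.
        return self.head(h.mean(dim=1))

model = TRDMScheduler()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # lr from abstract
batch = torch.randn(128, 10, 16)  # batch size 128; 10 channels x 16 features (assumed)
logits = model(batch)
```

In a full pipeline matching the abstract, this encoder would serve as the shared trunk of the actor and critic networks of a Soft Actor-Critic agent rather than being trained on supervised labels; the optimizer line above only shows where the reported learning rate plugs in.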
DOI: https://doi.org/10.31449/inf.v49i13.7374

This work is licensed under a Creative Commons Attribution 3.0 License.