Reinforcement Learning Algorithms for Adaptive Load Balancing in Publish/Subscribe Systems: PPO, UCB, and Epsilon-Greedy Approaches

Rana Zuhair Al-Shaikh, Muna M. Jawad Al-Nayar, Ahmed M. Hasan

Abstract


This research addresses load-balancing challenges in publish/subscribe (Pub/Sub) systems through a comprehensive exploration of reinforcement learning (RL) techniques. Four algorithms (epsilon-greedy, Upper Confidence Bound (UCB), round-robin, and least connections) are first evaluated to establish baseline performance metrics. Building on this foundation, we develop enhanced versions of the epsilon-greedy and UCB algorithms tailored to the Pub/Sub context. Additionally, we introduce a custom approach that uses Proximal Policy Optimization (PPO) to learn adaptive load-balancing policies. Our work provides a thorough comparative analysis of these RL methods, offering insights into their strengths and weaknesses in optimizing Pub/Sub system performance. Experimental results demonstrate the potential of RL, particularly our developed algorithms, to significantly reduce latency and improve throughput and overall system efficiency compared with traditional load-balancing strategies. Notably, the PPO-based approach exhibits superior performance under burst traffic and failure scenarios, highlighting its resilience and adaptability in dynamic environments.
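
To illustrate how bandit-style selection of the kind named in the abstract (epsilon-greedy and UCB) can drive broker assignment in a Pub/Sub setting, the sketch below shows a minimal selector that picks a broker for each incoming message and updates its value estimates from an observed reward such as the negative of the measured latency. This is an illustrative assumption of the general technique, not the paper's implementation; the class name BanditBalancer, the reward signal, and the broker latencies are hypothetical.

import math
import random

class BanditBalancer:
    """Illustrative bandit-style load balancer: each arm is one broker."""

    def __init__(self, n_brokers, epsilon=0.1):
        self.n = n_brokers
        self.epsilon = epsilon          # exploration rate for epsilon-greedy
        self.counts = [0] * n_brokers   # times each broker was chosen
        self.values = [0.0] * n_brokers # running mean reward per broker
        self.t = 0                      # total selections (used by UCB)

    def select_epsilon_greedy(self):
        # Explore a random broker with probability epsilon, otherwise exploit.
        if random.random() < self.epsilon:
            return random.randrange(self.n)
        return max(range(self.n), key=lambda i: self.values[i])

    def select_ucb(self, c=2.0):
        # Try each broker once, then use the UCB1 score:
        # mean reward + c * sqrt(ln(t) / n_i).
        for i in range(self.n):
            if self.counts[i] == 0:
                return i
        return max(
            range(self.n),
            key=lambda i: self.values[i]
            + c * math.sqrt(math.log(self.t) / self.counts[i]),
        )

    def update(self, broker, reward):
        # Incremental mean update; reward could be, e.g., negative latency.
        self.t += 1
        self.counts[broker] += 1
        self.values[broker] += (reward - self.values[broker]) / self.counts[broker]

# Toy usage: route 1000 messages to brokers with different (hypothetical) latencies.
if __name__ == "__main__":
    latencies = [0.05, 0.02, 0.08]
    balancer = BanditBalancer(n_brokers=3)
    for _ in range(1000):
        b = balancer.select_ucb()             # or balancer.select_epsilon_greedy()
        observed = random.gauss(latencies[b], 0.01)
        balancer.update(b, reward=-observed)  # lower latency => higher reward
    print("selections per broker:", balancer.counts)

In such a sketch the selector gradually concentrates traffic on the fastest broker while still probing the others, which is the adaptive behaviour the evaluated baselines (round-robin, least connections) lack.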

 



DOI: https://doi.org/10.31449/inf.v49i7.6895

This work is licensed under a Creative Commons Attribution 3.0 License.