MADDPG-Deep-QNet: Multi-Agent Deep Reinforcement Learning for Day-Ahead Power Balance Optimization
Abstract
Reliable power supply on the grid depends on day-ahead planning as electricity demand fluctuates. Traditional optimization methods struggle to account for the dynamic nature and complexity of power supply systems. This research offers a multi-agent deep reinforcement learning (MADRL) approach for optimizing day-ahead power balance strategies that maintains steady supply capacity while addressing the dynamics and complexity of modern energy grids. The dataset includes historical power use and generation data, as well as real-time demand, renewable energy outputs, and system stability indices. Data are cleaned and normalized to handle missing values and outliers, ensuring consistency and accuracy. The Fast Fourier Transform (FFT) converts the time-series power data into frequency components, enabling identification of demand and generation patterns and the extraction of features relevant to day-ahead power balance optimization. The proposed Multi-Agent Deep Deterministic Policy Gradient Driven Deep Q-Network (MADDPG-Deep-QNet) model combines Multi-Agent Deep Deterministic Policy Gradient with Deep Q-Network principles, enabling multiple agents to coordinate and optimize power-source allocation for stable day-ahead power supply, reduced costs, and improved grid reliability. The MADDPG-Deep-QNet strategy outperforms existing optimization techniques, yielding significant energy cost savings and improved grid stability, with a load-forecasting MAPE of 11.05 along with better MAE, MSE, RMSE, and R². In terms of power supply capacity, the model likewise outperforms existing methods. This research highlights MADRL's potential for optimizing day-ahead power balance strategies, offering a scalable solution to improve grid stability and ensure continuous power delivery.
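The FFT preprocessing step described above can be sketched as follows. This is a minimal illustration on synthetic hourly load data (an assumption for demonstration, not the paper's dataset): the transform exposes the dominant periodicity in demand, which can then serve as a feature for the planning model.

```python
import numpy as np

# Synthetic four weeks of hourly load with a daily (24-hour) cycle
# plus noise -- a stand-in for the historical demand series.
hours = np.arange(24 * 28)
rng = np.random.default_rng(0)
load = 100 + 20 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 2, hours.size)

# FFT: convert the time series into frequency components
# (mean removed so the DC term does not dominate the spectrum).
spectrum = np.fft.rfft(load - load.mean())
freqs = np.fft.rfftfreq(load.size, d=1.0)  # cycles per hour

# The strongest frequency component reveals the demand pattern's period.
dominant = freqs[np.argmax(np.abs(spectrum))]
period_hours = 1.0 / dominant  # recovers the 24-hour daily cycle
```

On real demand and generation data, the largest spectral peaks (daily, weekly, seasonal) would be kept as features for the day-ahead optimization stage.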
DOI: https://doi.org/10.31449/inf.v49i19.9482
This work is licensed under a Creative Commons Attribution 3.0 License.