Deep Q-Network-Based Reinforcement Learning for Medium and Short-Term Reserve Capacity Classification in Power Systems
Abstract
Modern power systems encounter significant challenges in maintaining reliability and operational balance due to the intermittent nature of renewable energy sources and variable demand. Accurate prediction and optimization of reserve capacity are essential to ensure grid stability, especially within medium- and short-term regulatory timeframes. Traditional reserve estimation methods often lack the adaptability required for dynamic operational data, leading to inefficient reserve allocation. This study introduces a Deep Reinforcement Learning (DRL) framework aimed at enhancing reserve capacity classification and regulation. A Deep Q-Network (DQN)-based agent is developed and trained on a Reserve Capacity Prediction (RCP) dataset consisting of 2,000 time steps and ten critical system features. The data were preprocessed through categorical encoding, normalization, and environment modeling. The DQN receives a 9-dimensional input vector and passes it through two ReLU-activated hidden layers (64 and 32 units) to predict the reserve capacity classes Low, Optimal, and High. Training employs a reward mechanism and experience replay. Experimental results show that the DQN outperforms Logistic Regression, Random Forest, and SVM, achieving 90% accuracy, 92% precision, 88% recall, an 89.8% F1-score, and an MCC of 0.86. This approach shows promise for intelligent and adaptive reserve management in power systems.
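To make the described setup concrete, the following is a minimal sketch of how such a DQN classifier could be structured in PyTorch, assuming a standard epsilon-greedy agent with a replay buffer. Only the input dimension (9), the hidden layers (64 and 32 ReLU units), the three output classes, and the use of experience replay come from the abstract; all other names, hyperparameters, and design choices (target network, Adam optimizer, smooth L1 loss) are illustrative assumptions, not details from the paper.

```python
import random
import torch
import torch.nn as nn
import torch.optim as optim
from collections import deque

# Illustrative hyperparameters; the paper does not report these values.
STATE_DIM = 9      # 9-dimensional input vector (as stated in the abstract)
N_ACTIONS = 3      # reserve capacity classes: Low, Optimal, High
GAMMA = 0.99
LR = 1e-3
BATCH_SIZE = 64
BUFFER_SIZE = 10_000


class DQN(nn.Module):
    """Two ReLU-activated hidden layers (64 and 32 units), as described in the abstract."""

    def __init__(self, state_dim: int = STATE_DIM, n_actions: int = N_ACTIONS):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, 32), nn.ReLU(),
            nn.Linear(32, n_actions),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


policy_net = DQN()
target_net = DQN()
target_net.load_state_dict(policy_net.state_dict())
optimizer = optim.Adam(policy_net.parameters(), lr=LR)
replay_buffer = deque(maxlen=BUFFER_SIZE)  # experience replay memory


def select_action(state: torch.Tensor, epsilon: float) -> int:
    """Epsilon-greedy choice over the three reserve capacity classes."""
    if random.random() < epsilon:
        return random.randrange(N_ACTIONS)
    with torch.no_grad():
        return int(policy_net(state.unsqueeze(0)).argmax(dim=1).item())


def train_step() -> None:
    """One gradient update on a minibatch sampled from the replay buffer."""
    if len(replay_buffer) < BATCH_SIZE:
        return
    batch = random.sample(replay_buffer, BATCH_SIZE)
    states = torch.stack([b[0] for b in batch])
    actions = torch.tensor([b[1] for b in batch], dtype=torch.long)
    rewards = torch.tensor([b[2] for b in batch], dtype=torch.float32)
    next_states = torch.stack([b[3] for b in batch])
    dones = torch.tensor([b[4] for b in batch], dtype=torch.float32)

    # Q(s, a) for the actions actually taken
    q_values = policy_net(states).gather(1, actions.unsqueeze(1)).squeeze(1)
    # Bootstrapped target from a frozen target network (an assumed, common DQN choice)
    with torch.no_grad():
        next_q = target_net(next_states).max(dim=1).values
        targets = rewards + GAMMA * next_q * (1.0 - dones)

    loss = nn.functional.smooth_l1_loss(q_values, targets)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

In such a setup, each time step's feature vector would be treated as the state, the predicted class (Low, Optimal, High) as the action, and the reward mechanism mentioned in the abstract would score how well that class matches the actual reserve requirement; the exact reward design is not specified in the abstract.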
Full Text: PDF
DOI: https://doi.org/10.31449/inf.v49i34.9288

This work is licensed under a Creative Commons Attribution 3.0 License.