A Critical Analysis and Performance Benchmarking of Intrusion Detection Using the OD-IDS2022 Dataset and Machine Learning Techniques
Abstract
Over the past decade, numerous Intrusion Detection Systems (IDS) have been developed to address the growing complexity of cybersecurity threats. To support evaluation of such systems, the Center for Excellence in Cyber Security (CoECS) at IDRBT released the OD-IDS2022 dataset [4], which integrates contemporary attack vectors and updated feature sets. While the dataset has gained attention for its relevance, our analysis highlights critical shortcomings, including severe class imbalance, redundancy in records, and inconsistencies across feature distributions, which collectively bias IDS performance evaluation. To systematically investigate these issues, we conducted a comprehensive statistical and empirical study, employing dimensionality reduction techniques (PCA, t-SNE) and multiple supervised classifiers (Random Forest, SVM, XGBoost). Experimental results reveal that classification accuracy is overstated by up to 12% due to imbalance, while precision and recall for minority attack classes drop below 65%, yielding an overall F1-score of 0.91 and an AUC of 0.95. After applying balanced sampling strategies and refined preprocessing, we observed consistent performance improvements, with average precision increasing by 9%, recall by 11%, and F1-score reaching 0.92, alongside an AUC of 0.96. ROC curve behavior was also analyzed to assess discrimination capability across classes. These findings emphasize that the dataset's inherent limitations significantly affect IDS benchmarking, and we provide concrete recommendations for curating a more balanced and representative version of OD-IDS2022 to strengthen the robustness and generalizability of IDS evaluation frameworks.
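The imbalance-aware evaluation described above can be illustrated with a minimal sketch. This is not the authors' pipeline: it assumes scikit-learn, substitutes a synthetic 95/5 imbalanced dataset for OD-IDS2022, and uses Random Forest class weighting as one example of a balancing strategy; macro-averaged F1 and ROC-AUC expose minority-class degradation that plain accuracy hides.

```python
# Hypothetical sketch: comparing an unweighted vs. class-balanced Random Forest
# on a synthetic imbalanced dataset standing in for OD-IDS2022.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score, roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in: ~95% "benign" vs ~5% "attack" records.
X, y = make_classification(n_samples=4000, n_features=20,
                           weights=[0.95, 0.05], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

def evaluate(class_weight=None):
    """Fit a Random Forest and report macro-F1 and ROC-AUC on the test split."""
    clf = RandomForestClassifier(n_estimators=100,
                                 class_weight=class_weight,
                                 random_state=0).fit(X_tr, y_tr)
    proba = clf.predict_proba(X_te)[:, 1]
    return (f1_score(y_te, clf.predict(X_te), average="macro"),
            roc_auc_score(y_te, proba))

f1_plain, auc_plain = evaluate()                        # no rebalancing
f1_bal, auc_bal = evaluate(class_weight="balanced")     # reweight minority class
print(f"plain:    macro-F1={f1_plain:.3f}  AUC={auc_plain:.3f}")
print(f"balanced: macro-F1={f1_bal:.3f}  AUC={auc_bal:.3f}")
```

Macro-averaging gives each class equal weight, so the minority attack class cannot be masked by the majority; the same comparison could instead use resampling (e.g., over- or under-sampling) as the balancing strategy.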
Full Text: PDF
DOI: https://doi.org/10.31449/inf.v49i4.5651
This work is licensed under a Creative Commons Attribution 3.0 License.