Intrusion Detection Systems (IDS) are crucial components of network security, yet traditional IDS models often fail to cope with rapidly evolving adversarial attacks that exploit their static nature. This study proposes a novel approach, Evolving Adversarial Training (EAT), to enhance the adaptability and robustness of AI-powered IDS against dynamic threats. The EAT framework integrates continuous model evolution with advanced adversarial training techniques, enabling the IDS to adjust dynamically to new attack patterns. Experimental results demonstrate that the EAT framework significantly improves IDS performance, yielding higher detection accuracy and lower false positive rates than conventional methods. These findings highlight the potential of EAT to fortify network defenses against evolving cyber threats and point to a promising direction for future research on scalable, adaptive IDS solutions capable of confronting modern cyber adversaries. The research pursues three key objectives: dynamic adaptation and adversarial training, continuous learning and enhanced threat detection, and robustness and generalization. By focusing on these objectives, the study aims to develop AI-powered IDS that can navigate the ever-changing cyber threat landscape. The research methodology comprises data collection, model architecture design, training and evaluation, continuous learning, simulation, and real-world testing, all directed at strengthening the resilience of AI-powered IDS against adversarial attacks. By systematically following this framework, the study intends to strengthen IDS security through the effective implementation of EAT.
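The abstract describes EAT only at a high level. As a concrete illustration of the core idea (retraining the detector each round on adversarial examples generated against its current parameters, so the training data evolves with the model), the minimal Python sketch below uses a toy logistic-regression classifier, FGSM-style feature perturbations, and synthetic flow features; all of these are illustrative assumptions, not the paper's EAT implementation.

# Minimal sketch of an evolving adversarial-training loop for a feature-based IDS.
# The classifier, perturbation method, and data below are illustrative assumptions,
# not the implementation described in the paper.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class LogisticIDS:
    """Tiny logistic-regression stand-in for an AI-powered IDS classifier."""

    def __init__(self, n_features, lr=0.1):
        self.w = np.zeros(n_features)
        self.b = 0.0
        self.lr = lr

    def predict_proba(self, X):
        return sigmoid(X @ self.w + self.b)

    def fit_epoch(self, X, y):
        # One epoch of gradient descent on the binary cross-entropy loss.
        p = self.predict_proba(X)
        self.w -= self.lr * (X.T @ (p - y)) / len(y)
        self.b -= self.lr * np.mean(p - y)

    def input_gradient(self, X, y):
        # Gradient of the loss w.r.t. the input features (used for FGSM-style crafting).
        return np.outer(self.predict_proba(X) - y, self.w)

def fgsm_perturb(model, X, y, epsilon=0.05):
    # Nudge each feature in the direction that increases the loss most.
    return X + epsilon * np.sign(model.input_gradient(X, y))

def evolving_adversarial_training(model, X, y, n_rounds=10, epochs_per_round=20):
    # Each round: train on the current pool, then craft adversarial variants of the
    # attack flows against the updated model and fold them into the next round's pool,
    # so the training data evolves together with the model.
    X_pool, y_pool = X.copy(), y.copy()
    for _ in range(n_rounds):
        for _ in range(epochs_per_round):
            model.fit_epoch(X_pool, y_pool)
        attacks = y == 1                              # perturb only malicious flows
        X_adv = fgsm_perturb(model, X[attacks], y[attacks])
        X_pool = np.vstack([X, X_adv])                # clean data + fresh adversarial samples
        y_pool = np.concatenate([y, y[attacks]])
    return model

# Synthetic stand-in for labelled network-flow features (1 = attack, 0 = benign).
X = rng.normal(size=(400, 8))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)
model = evolving_adversarial_training(LogisticIDS(n_features=8), X, y)
print("training accuracy:", np.mean((model.predict_proba(X) > 0.5) == y))

Perturbing only the malicious flows mirrors a threat model in which the adversary modifies attack traffic to evade detection; in the paper's setting the same loop would presumably wrap a deep neural network trained on realistic traffic features rather than this toy classifier.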
Published in | American Journal of Computer Science and Technology (Volume 7, Issue 3) |
DOI | 10.11648/j.ajcst.20240703.16 |
Page(s) | 115-121 |
Creative Commons | This is an Open Access article, distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution and reproduction in any medium or format, provided the original work is properly cited. |
Copyright | Copyright © The Author(s), 2024. Published by Science Publishing Group |
Keywords | Intrusion Detection Systems (IDS), Machine Learning (ML), Artificial Intelligence (AI), Evolving Adversarial Training (EAT), Deep Learning, Cybersecurity, Deep Neural Networks (DNN) |
APA Style
Affan, A. M. (2024). Evolving Adversarial Training (EAT) for AI-Powered Intrusion Detection Systems (IDS). American Journal of Computer Science and Technology, 7(3), 115-121. https://doi.org/10.11648/j.ajcst.20240703.16
ACS Style
Affan, A. M. Evolving Adversarial Training (EAT) for AI-Powered Intrusion Detection Systems (IDS). Am. J. Comput. Sci. Technol. 2024, 7(3), 115-121. doi: 10.11648/j.ajcst.20240703.16
BibTeX
@article{10.11648/j.ajcst.20240703.16,
  author   = {Ahmed Muktadir Affan},
  title    = {Evolving Adversarial Training (EAT) for AI-Powered Intrusion Detection Systems (IDS)},
  journal  = {American Journal of Computer Science and Technology},
  volume   = {7},
  number   = {3},
  pages    = {115-121},
  year     = {2024},
  doi      = {10.11648/j.ajcst.20240703.16},
  url      = {https://doi.org/10.11648/j.ajcst.20240703.16},
  eprint   = {https://article.sciencepublishinggroup.com/pdf/10.11648.j.ajcst.20240703.16},
  abstract = {Intrusion Detection Systems (IDS) are crucial components of network security, yet traditional IDS models often fail to cope with rapidly evolving adversarial attacks that exploit their static nature. This study proposes a novel approach, Evolving Adversarial Training (EAT), to enhance the adaptability and robustness of AI-powered IDS against dynamic threats. The EAT framework integrates continuous model evolution with advanced adversarial training techniques, enabling the IDS to dynamically adjust to new attack patterns. Experimental results demonstrate that the EAT framework significantly enhances IDS performance, leading to increased detection accuracy and reduced false positive rates compared to conventional methods. These findings emphasize the potential of EAT in fortifying network defenses against evolving cyber threats, offering a promising avenue for future research in scalable and adaptive IDS solutions that can effectively combat the complexities of modern cyber adversaries. The research explores three key objectives: dynamic adaptation and adversarial training, continuous learning and enhanced threat detection, and robustness and generalization. By focusing on these objectives, the study aims to develop AI-powered IDS that can effectively navigate the ever-changing cyber threat landscape. The research methodology includes data collection, model architecture design, training and evaluation, continuous learning, simulation, and real-world testing, all aimed at enhancing the resilience of AI-powered IDS against adversarial attacks. By systematically following this framework, the study intends to enhance the security system of IDS through the effective implementation of EAT.}
}
RIS
TY - JOUR
T1 - Evolving Adversarial Training (EAT) for AI-Powered Intrusion Detection Systems (IDS)
AU - Ahmed Muktadir Affan
Y1 - 2024/09/29
PY - 2024
N1 - https://doi.org/10.11648/j.ajcst.20240703.16
DO - 10.11648/j.ajcst.20240703.16
T2 - American Journal of Computer Science and Technology
JF - American Journal of Computer Science and Technology
JO - American Journal of Computer Science and Technology
SP - 115
EP - 121
PB - Science Publishing Group
SN - 2640-012X
UR - https://doi.org/10.11648/j.ajcst.20240703.16
AB - Intrusion Detection Systems (IDS) are crucial components of network security, yet traditional IDS models often fail to cope with rapidly evolving adversarial attacks that exploit their static nature. This study proposes a novel approach, Evolving Adversarial Training (EAT), to enhance the adaptability and robustness of AI-powered IDS against dynamic threats. The EAT framework integrates continuous model evolution with advanced adversarial training techniques, enabling the IDS to dynamically adjust to new attack patterns. Experimental results demonstrate that the EAT framework significantly enhances IDS performance, leading to increased detection accuracy and reduced false positive rates compared to conventional methods. These findings emphasize the potential of EAT in fortifying network defenses against evolving cyber threats, offering a promising avenue for future research in scalable and adaptive IDS solutions that can effectively combat the complexities of modern cyber adversaries. The research explores three key objectives: dynamic adaptation and adversarial training, continuous learning and enhanced threat detection, and robustness and generalization. By focusing on these objectives, the study aims to develop AI-powered IDS that can effectively navigate the ever-changing cyber threat landscape. The research methodology includes data collection, model architecture design, training and evaluation, continuous learning, simulation, and real-world testing, all aimed at enhancing the resilience of AI-powered IDS against adversarial attacks. By systematically following this framework, the study intends to enhance the security system of IDS through the effective implementation of EAT.
VL - 7
IS - 3
ER -