DAI-2513 · 1 December 2025 · Preprint · Programme V: Computational Cognition

Autonomous Red Team AI: LLM-Guided Adversarial Security Testing

Murad Farzulla

Abstract

This technical report describes an architecture for autonomous penetration testing using LLM-guided agents that operate within Kubernetes-isolated environments. The system combines retrieval-augmented generation (RAG) knowledge bases with OODA-loop (observe, orient, decide, act) decision cycles, enabling systematic vulnerability discovery while strict Kubernetes NetworkPolicy rules confine all agent traffic to the sandboxed targets.
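
The report's implementation is not reproduced on this page; as a rough illustration only, the Python sketch below shows how one OODA-loop iteration could be wired around an LLM call, a RAG retrieval step, and a sandboxed executor. Every name here (`llm`, `retrieve`, `execute`, `Finding`, `AgentState`) is a hypothetical placeholder standing in for the paper's components, not its actual API.

    from dataclasses import dataclass, field
    from typing import Callable

    @dataclass
    class Finding:
        """One executed command and its observed output (illustrative)."""
        command: str
        output: str

    @dataclass
    class AgentState:
        """Evidence accumulated across OODA iterations."""
        findings: list[Finding] = field(default_factory=list)

    def ooda_iteration(
        state: AgentState,
        llm: Callable[[str], str],       # text-in, text-out model call
        retrieve: Callable[[str], str],  # RAG lookup over technique notes
        execute: Callable[[str], str],   # runs a command inside the isolated sandbox
    ) -> AgentState:
        # Observe: the latest evidence is the output of the previous action.
        evidence = state.findings[-1].output if state.findings else "no actions yet"

        # Orient: ground the model in retrieved knowledge-base context.
        context = retrieve(evidence)
        assessment = llm(
            f"Evidence:\n{evidence}\n\nReference notes:\n{context}\n\n"
            "What does this imply about the target?"
        )

        # Decide: have the model propose exactly one next command.
        command = llm(f"Assessment:\n{assessment}\n\nPropose one next command.")

        # Act: execution happens only inside the NetworkPolicy-isolated sandbox,
        # so the agent cannot reach anything outside the test environment.
        output = execute(command)
        state.findings.append(Finding(command=command, output=output))
        return state

In this framing, the isolation guarantee lives entirely in the `execute` boundary: the model never touches the network directly, which is consistent with the abstract's emphasis on strict NetworkPolicy confinement.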

Suggested Citation

Murad Farzulla (2025). Autonomous Red Team AI: LLM-Guided Adversarial Security Testing. ASCRI Working Paper DAI-2513. DOI: 10.5281/zenodo.17614726

BibTeX

@misc{farzulla2025_autonomous_red_team,
  author       = {Farzulla, Murad},
  title        = {Autonomous Red Team AI: LLM-Guided Adversarial Security Testing},
  year         = {2025},
  howpublished = {ASCRI Working Paper DAI-2513},
  doi          = {10.5281/zenodo.17614726},
  url          = {https://systems.ac/5/DAI-2513}
}

Tags

AI Safety, Security Research