Generative AI applications have gone from experimental to essential overnight with global investment in GenAI doubling in one year to $56 billion globally, according to PitchBook. With growth so have come an onslaught of new security concerns.Prompt injection, where attackers bypass LLM guardrails to unearth confidential data, now rate among OWASP Gen AI Security Project's top risks. OWASP has also flagged nearly a dozen more LLM security concerns ranging from data and model poisoning, improper output handling and excessive agency.As part of an editorial collaboration with the OWASP Gen AI Security Project, SC Media and OWASP will raise the awareness bar around secure GenAI application development, threat awareness, and risks and mitigation best practices for generative artificial intelligence application developers and security stakeholders.
Together, we’ll explore the 2025 OWASP Top 10 for LLM Applications and Generative AI through a 10-part article series. Each installment focuses on one of the risks and how to mitigate it when building or deploying AI models.
A bit about the GenAI Security Project
What began in 2023 as a Open Web Application Security Project (OWASP) community-led initiative to document LLM-specific threats has evolved into OWASP’s flagship GenAI Security Project. With contributors ranging from security engineers to policy leaders, the 2025 list reflects real-world experience and a broader global perspective. Each risk, from data poisoning and misinformation to embedding-based threats and system prompt leakage, is accompanied by practical guidance for prevention, detection, and governance.The OWASP Generative AI Security Project and SC Media equip CISOs, developers, security teams, and policymakers with practical guidance and open-source tools to navigate the rapidly evolving risks of large language models and generative AI systems. Whether you're building, deploying, or governing AI, the project is designed to support those on the front lines of secure innovation.SC Media’s coverage kicks off this week with a closer look at Prompt Injection, a now-notorious vulnerability that attackers use to hijack LLM model behavior and output. Future installments will walk through emerging issues such as System Prompt Leakage, Excessive Agency, and Vector and Embedding Weaknesses. All these risks could spell major problems for AI projects if left unaddressed.
SC Media and OWASP
This partnership is designed to magnifying OWASP's reach and simultaneously fortifying the SC Media community with actionable insights, advice and perspectives. As generative AI adoption outpaces understanding, we are stepping up to help break down the research, expand the conversation, and empower our audience of cybersecurity professionals to take action.The full OWASP Top 10 for LLM Applications 2025 report is available now at genai.owasp.org. But stay with SC Media as we unpack each of the ten risks and offer new thinking, case studies, and mitigation strategies that could help shape a safer GenAI future.
Tom Spring is Editorial Director for SC Media and is based in Boston, MA. For two decades he has worked at national publications in the leadership roles of publisher at Threatpost, executive news editor PCWorld/Macworld and technical editor at CRN. He is a seasoned cybersecurity reporter, editor and storyteller that aims always for truth and clarity.