COMMENTARY: AI has been deployed faster than the industry can secure it. Whether it’s LLM-based assistants,
GenAI-powered workflows, or agentic AI automating decisions, traditional security tooling was never designed for this.
Firewalls,
EDR, SIEM,
DLP — none were built for models that hallucinate, systems that evolve, or prompts that function as covert execution environments.
[
SC Media Perspectives columns are written by a trusted community of SC Media cybersecurity subject matter experts. Read more Perspectives here.]
In most cases, they can’t even see the model, let alone secure it.
Yet adversaries can. From data poisoning and prompt injection to model theft and agentic subversion, attackers exploit blind spots that conventional tools can’t defend. The AI attack surface isn’t just broader — it’s fundamentally different.
Why traditional tools fall short
Most legacy security tools were built to protect deterministic systems — environments where the software follows predictable logic and outcomes. The inputs get defined, and teams can reasonably expect the outputs. AI systems, especially generative and agentic, break that mold.
AI models learn from data that’s often dynamic, proprietary, or drawn from external sources, allowing attackers to tamper with the learning process. Techniques like data poisoning let malicious actors subtly manipulate training data to produce harmful outcomes later — like tampering with the ingredients of a recipe rather than sabotaging the dish after it’s made.
Even after training, attackers can exploit AI models through prompt injection. These attacks embed malicious instructions in seemingly innocent inputs, redirecting the model’s behavior without any system-level compromise. Agentic AI, which can act autonomously, introduces even greater risk. Imagine an AI assistant that reads a website embedded with covert commands — it could alter purchases, leak information, or take unauthorized actions without detection.
And these are just a few examples. Traditional web app scanners, antivirus tools, and SIEM platforms weren’t built for this reality.
Secure by Design for AI
The security community has long embraced the concept of
“Secure by Design” which focuses on embedding security from the start rather than bolting it on later. For the AI world, it’s not just a best practice — it’s a necessity.
For AI, Secure by Design means integrating protections at every stage of the machine learning security operations (MLSecOps) lifecycle: from initial scoping, model selection, and data preparation to training, testing, deployment, and monitoring. It also means adapting the classic security principles of confidentiality, integrity, and availability (CIA) to fit AI-specific contexts:
Confidentiality: Protect training datasets and model parameters from leakage or reverse engineering.
Integrity: Guard against manipulation of training data, model files, and adversarial inputs that skew outputs.
Availability: Prevent denial-of-service-style prompt attacks that exhaust system resources. A new toolset for AI security
A robust security posture requires a layered defense, one that accounts for each phase of the AI pipeline and anticipates how AI systems are manipulated both directly and indirectly. Here are a few categories to prioritize:
1. Model scanners and red teaming.
Static scanners look for backdoors, embedded biases, and unsafe outputs in the model code or architecture. Dynamic tools simulate adversarial attacks to test runtime behavior. Complement these with red teaming for AI — testing for injection vulnerabilities, model extraction risks, or harmful emergent behavior.
2. AI-specific vulnerability feeds.
Traditional CVEs don’t capture the rapidly evolving threats in AI. Organizations need real-time feeds that track vulnerabilities in model architectures, emerging prompt injection patterns, and data supply chain risks. This information helps prioritize patching and mitigation strategies unique to AI.
3. Access controls for AI.
AI models often interact with vector databases, embeddings (numerical representations of meaning used to compare concepts in high-dimensional space), and unstructured data, making it difficult to enforce traditional row- or field-level access control. AI-aware access can help regulate what content gets used during inference and ensure proper isolation between models, datasets, and users.
4. Monitoring and drift detection.
AI is dynamic — it learns, it adapts, and sometimes it drifts. Organizations need monitoring capabilities that track changes in inference patterns, detect behavioral anomalies, and log full input-output exchanges for forensics and compliance. For agentic AI, that includes tracking decision paths and mapping activity across multiple systems.
5. Policy enforcement and response automation.
Real-time safeguards that act like “AI firewalls” can intercept prompts or outputs that violate content policies, such as generating malware or leaking confidential information. Automated response mechanisms can quarantine models, revoke credentials, or roll back deployments within milliseconds—faster than a human could possibly intervene.
Frameworks to guide implementation
Fortunately, security teams don’t need to start from scratch. Several frameworks offer solid blueprints for building security into AI workflows:
OWASP Top 10 for LLMs (2025) highlights specific risks like prompt injection, data poisoning, and insecure output handling.
MITRE ATLAS maps out the AI attack kill chain, offering tactics and mitigations from reconnaissance through exfiltration.
NIST AI-RMF offers a governance-driven approach that encompasses Map, Measure, Manage, and Govern phases to align security with risk and compliance efforts. Integrating these frameworks with MLSecOps practices ensures an organization secures the right layers, at the right time, with the right controls. Start by ensuring security teams have visibility into AI development pipelines. Build bridges between data science and engineering peers. Invest in training staff on emerging threats and specialized tooling.
Securing AI isn’t just a tooling challenge — it’s a strategic shift. As AI systems evolve, so must our approach to risk, accountability, and visibility. The real priority isn’t just protecting infrastructure — it’s enabling secure innovation at scale.
Diana Kelley, chief information security officer, Protect AISC Media Perspectives columns are written by a trusted community of SC Media cybersecurity subject matter experts. Each contribution has a goal of bringing a unique voice to important cybersecurity topics. Content strives to be of the highest quality, objective and non-commercial.