Four ways to build a zero-trust program for the AI world

COMMENTARY: Let me open with a confession: I don’t trust AI. And frankly, nobody really should. I don’t care how many tokens it’s trained on, how many parameters it boasts, or how often a team calls it “our copilot.” If AI systems are running wild with full access to a network, files, APIs, or production data — without oversight, identity verification, or enforcement controls — the team has effectively built a cyber threat with the keys to the kingdom.

That’s why it’s time we had a serious talk about zero-trust for AI. Not just as a catchy idea, but as a necessary evolution of access control in an era where the actors on a network are no longer just humans with passwords — they’re machines that don’t sleep, don’t forget, and don’t know when they’ve been manipulated. Welcome to the age of machine-to-machine trust, in which AI needs the same rigorous controls teams apply to their people — possibly more.

Verify every AI identity: Every AI agent — whether it’s a chatbot, a summarization tool, a code copilot, or an RPA bot — should have its own unique, verifiable identity. Machine identities should be onboarded and offboarded just like employees. They should authenticate, register, and rotate credentials like any other digital actor, and their behavior should be traceable back to them with cryptographic certainty. No shared service accounts. No anonymous endpoints. No “we just whitelisted the tool to make it work faster.” If the AI has no identity, it’s a ghost in the system — and a breach waiting to happen. (A minimal sketch of such an identity registry appears after the next section.)

Least privilege isn’t optional: What does AI actually need access to? If it’s writing marketing copy, does it need access to internal financials? If it’s analyzing help desk tickets, does it really need read/write access to the production environment? Of course not. But AI systems are often over-permissioned by default, either because nobody knows exactly what they’ll need yet — or because granting blanket access is easier than building scoped policies. Here’s the problem: when AI apps get compromised, those privileges become weapons. Zero-trust demands least privilege — and that applies to machines, too. No more AI agents with “God Mode” access. They should get no more rights than the minimum required to do their jobs, and those rights should be continuously re-evaluated based on context.
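To make the identity point concrete, here is a minimal sketch in Python of what a machine-identity registry could look like. Everything in it is an assumption made for illustration, not any particular product’s API: the AgentIdentity and AgentRegistry names, the 24-hour rotation window, and the HMAC signing of actions.

```python
import hmac
import hashlib
import secrets
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

# Hypothetical sketch: each AI agent gets its own identity and rotating
# credential, so every action traces back to exactly one actor.

ROTATION_WINDOW = timedelta(hours=24)  # assumed policy: rotate daily


@dataclass
class AgentIdentity:
    agent_id: str  # unique per agent: no shared service accounts
    secret: bytes = field(default_factory=lambda: secrets.token_bytes(32))
    issued_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    def needs_rotation(self) -> bool:
        return datetime.now(timezone.utc) - self.issued_at > ROTATION_WINDOW

    def rotate(self) -> None:
        self.secret = secrets.token_bytes(32)
        self.issued_at = datetime.now(timezone.utc)

    def sign_action(self, action: str) -> str:
        # Every action is signed, so the audit trail ties behavior back
        # to this identity with cryptographic certainty.
        return hmac.new(self.secret, action.encode(), hashlib.sha256).hexdigest()


class AgentRegistry:
    """Onboard and offboard machine identities like employees."""

    def __init__(self) -> None:
        self._agents: dict[str, AgentIdentity] = {}

    def onboard(self, agent_id: str) -> AgentIdentity:
        if agent_id in self._agents:
            raise ValueError(f"{agent_id} already registered")
        self._agents[agent_id] = AgentIdentity(agent_id)
        return self._agents[agent_id]

    def offboard(self, agent_id: str) -> None:
        self._agents.pop(agent_id, None)  # credentials die with the identity

    def authenticate(self, agent_id: str, action: str, signature: str) -> bool:
        agent = self._agents.get(agent_id)
        if agent is None:
            return False  # ghosts in the system are rejected outright
        if agent.needs_rotation():
            return False  # stale credential: force rotation before accepting
        return hmac.compare_digest(agent.sign_action(action), signature)


registry = AgentRegistry()
bot = registry.onboard("helpdesk-analyzer")
signature = bot.sign_action("read:tickets")
assert registry.authenticate("helpdesk-analyzer", "read:tickets", signature)
registry.offboard("helpdesk-analyzer")  # offboarding leaves no ghost behind
```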
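Least privilege can be as unglamorous as a deny-by-default scope table. Here is a toy sketch, with the agent names and scope strings invented for illustration; the point is simply that anything not explicitly granted is refused.

```python
# Hypothetical deny-by-default scope table: an agent holds only the
# scopes its job requires, and everything else fails closed.

SCOPES: dict[str, set[str]] = {
    "marketing-copy-bot": {"read:brand-guidelines", "write:draft-copy"},
    "helpdesk-analyzer": {"read:tickets"},  # read-only; no production write
}


def is_allowed(agent_id: str, scope: str) -> bool:
    # Deny by default: unknown agents and ungranted scopes both fail.
    return scope in SCOPES.get(agent_id, set())


assert is_allowed("helpdesk-analyzer", "read:tickets")
assert not is_allowed("helpdesk-analyzer", "write:production")  # least privilege
assert not is_allowed("marketing-copy-bot", "read:financials")  # scoped out
```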
Log everything, trust nothing: Want to know how an AI system got compromised? You won’t be able to figure it out unless the team logs and audits every action it takes — prompted or unprompted. A zero-trust program for AI means full telemetry: every prompt, every output, every API call, every data interaction. Not to micromanage — though let’s be honest, some AI copilots probably need micromanaging — but to ensure accountability and support forensic investigations when something inevitably goes sideways. And here’s the real kicker: the team needs real-time anomaly detection on those logs, because AI gone rogue doesn’t always look like a hacker — it can look like 3,000 “harmless” queries that accidentally extract the company’s entire customer dataset. (A sketch of this kind of rate check follows the next section.)

Don’t just secure users — secure systems: Most organizations aren’t ready for this shift: a zero-trust program must evolve from user-centric to system-centric. Today, we’re used to enforcing MFA, role-based access, and device posture checks on our human employees. But AI doesn’t carry a phone for push notifications. It doesn’t take lunch breaks. It’s always on — and potentially always leaking. We need policy engines that can enforce security at the point of action — blocking access when behavior deviates, terminating sessions when risk spikes, and quarantining AI agents that exhibit suspicious behavior. This isn’t about stifling innovation. It’s about keeping our most powerful, most tireless systems within guardrails.
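On the logging point above, here is what full telemetry plus a crude rate-based anomaly check might look like in miniature. The five-minute window, the 100-action threshold, and the log fields are all assumptions for illustration; real detection would be far richer.

```python
import time
from collections import deque

# Hypothetical telemetry sink with a rate-based anomaly check: a burst of
# "harmless" queries should trip an alarm even when each one looks benign.

WINDOW_SECONDS = 300         # assumed sliding window: five minutes
MAX_ACTIONS_IN_WINDOW = 100  # assumed per-agent rate baseline


class AuditLog:
    def __init__(self) -> None:
        self.entries: list[dict] = []        # full telemetry, kept for forensics
        self._recent: dict[str, deque] = {}  # per-agent timestamps in the window

    def record(self, agent_id: str, action: str, detail: str) -> bool:
        """Log one action; return True if the agent's rate looks anomalous."""
        now = time.time()
        # Every prompt, output, API call, and data interaction gets logged.
        self.entries.append(
            {"ts": now, "agent": agent_id, "action": action, "detail": detail}
        )
        window = self._recent.setdefault(agent_id, deque())
        window.append(now)
        while window and now - window[0] > WINDOW_SECONDS:
            window.popleft()  # expire events outside the sliding window
        return len(window) > MAX_ACTIONS_IN_WINDOW


log = AuditLog()
flagged = False
for i in range(150):  # simulate a burst of individually harmless queries
    flagged = log.record("helpdesk-analyzer", "query", f"ticket batch {i}") or flagged
print("anomalous" if flagged else "normal")  # -> anomalous
```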
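And tying the earlier sketches together, here is one way a point-of-action policy decision could look. The Decision values and the ordering of the checks are invented for illustration, not a prescribed engine; the design choice is simply that every request is re-evaluated in context and no trust carries over from the last one.

```python
from enum import Enum


class Decision(Enum):
    ALLOW = "allow"
    DENY = "deny"              # block a single out-of-scope request
    TERMINATE = "terminate"    # end the session when identity can't be verified
    QUARANTINE = "quarantine"  # isolate an agent whose behavior deviates


def enforce(identity_ok: bool, scope_ok: bool, anomalous: bool) -> Decision:
    # Point-of-action enforcement: identity first, then behavior, then scope.
    if not identity_ok:
        return Decision.TERMINATE
    if anomalous:
        return Decision.QUARANTINE
    if not scope_ok:
        return Decision.DENY
    return Decision.ALLOW


# A verified, in-scope agent still gets quarantined once its behavior deviates.
print(enforce(identity_ok=True, scope_ok=True, anomalous=True))  # Decision.QUARANTINE
```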
Treat AI like a competent but untrusted intern. Smart? Sure. Helpful? Absolutely. But can we trust it to operate without oversight? Never.

As more organizations lean on AI to code, write, analyze, predict, and even take autonomous actions, we must extend zero-trust principles to include every machine identity, every automated process, and every AI agent in our environment. It’s time to stop assuming AI is here to help — and start verifying that it isn’t here to hurt. Because when something goes wrong — and it will — CISOs and their teams will wish they had put their AI “copilots” on a shorter leash.

Denny LeCompte, chief executive officer, Portnox

SC Media Perspectives columns are written by a trusted community of SC Media cybersecurity subject matter experts. Each contribution has a goal of bringing a unique voice to important cybersecurity topics. Content strives to be of the highest quality, objective and non-commercial.