An identity security crisis looms in the age of agentic AI

COMMENTARY: As agentic AI rapidly enters the enterprise, security teams face a moment of déjà vu. A few years ago, robotic process automation (RPA) bots spread through organizations so quickly that security teams were caught off guard, unable to properly authenticate and monitor them.

When we look at RPA implementations today, virtually all of the bots either use shared credentials or impersonate employees. It’s common to see a group of RPA bots that all use the same employee’s credentials.

This approach creates a security nightmare. When multiple bots share the same identity, it’s nearly impossible to attribute actions or contain breaches. Duplicated credentials also give adversaries who trade in stolen credentials more opportunities to gain access to sensitive systems.

We’re about to see the same pattern with AI agents – but faster and with greater consequences. The business push for AI implementation is even stronger than it was for RPA, and many security teams remain unprepared.

The key difference: AI agents aren’t merely deterministic bots – they possess agency. They make decisions, access sensitive data and execute transactions with minimal human oversight. This establishes them as a genuine third identity type alongside humans and traditional machines, which means they require their own identity framework.

Agents are already demonstrating how the worlds of machine identity and human identity blur, and how both must be secured. Agents are workloads that can scale on demand, communicate and work autonomously at machine speed, and get recycled immediately after completing their work. They require a unique and universal workload identity. An identity framework for AI agents should include the following controls (a brief illustrative sketch follows below):

- Zero standing privileges: AI agents should not maintain persistent access rights, but should receive just-in-time, just-enough access for specific tasks.
- Continuous monitoring: Given their agency, AI agents require ongoing monitoring at the transaction and session levels.
- Step-up challenges: Like humans, AI agents should face additional verification for sensitive actions.
- Behavioral analytics: Detecting anomalous behavior requires understanding normal AI agent patterns.
- Kill switch capability: Every manufacturing floor has an emergency stop button. If we’re entrusting business operations to AI agents, we must retain the ability to halt their actions immediately when necessary. The AI kill switch is identity activated: with every AI agent uniquely identified, we can “disconnect” a misbehaving agent.

Security architects must participate in AI agent initiatives from day one, just as they do for critical infrastructure projects. Too often, security teams join after design decisions are locked in. Architects are needed to address how agents are secured, protected from compromise, and controlled if they become unsafe.
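To make the identity controls above more concrete, here is a minimal, self-contained Python sketch of a just-in-time credential broker with an identity-activated kill switch. It is illustrative only: the class names, scopes, and agent names are hypothetical, and a real deployment would rely on an enterprise identity provider and secrets manager rather than an in-memory object.

```python
import secrets
import time
from dataclasses import dataclass, field


@dataclass
class AgentIdentity:
    """A unique, non-shared identity issued to a single AI agent."""
    agent_id: str
    owner_team: str
    revoked: bool = False


@dataclass
class ScopedCredential:
    """A short-lived credential granting just-enough access for one task."""
    agent_id: str
    scope: str          # e.g. "invoices:read"
    expires_at: float   # epoch seconds
    token: str = field(default_factory=lambda: secrets.token_urlsafe(32))

    def is_valid(self) -> bool:
        return time.time() < self.expires_at


class AgentIdentityBroker:
    """Issues per-agent identities and just-in-time credentials; can 'disconnect' an agent."""

    def __init__(self) -> None:
        self._identities: dict[str, AgentIdentity] = {}

    def register_agent(self, agent_id: str, owner_team: str) -> AgentIdentity:
        identity = AgentIdentity(agent_id=agent_id, owner_team=owner_team)
        self._identities[agent_id] = identity
        return identity

    def issue_credential(self, agent_id: str, scope: str,
                         ttl_seconds: int = 300) -> ScopedCredential:
        identity = self._identities.get(agent_id)
        if identity is None or identity.revoked:
            raise PermissionError(f"agent {agent_id!r} is unknown or disabled")
        # Zero standing privileges: access is granted per task and expires quickly.
        return ScopedCredential(agent_id=agent_id, scope=scope,
                                expires_at=time.time() + ttl_seconds)

    def kill_switch(self, agent_id: str) -> None:
        # Identity-activated kill switch: revoking the identity blocks new
        # credentials, and short TTLs ensure existing ones lapse quickly.
        identity = self._identities.get(agent_id)
        if identity is not None:
            identity.revoked = True


if __name__ == "__main__":
    broker = AgentIdentityBroker()
    broker.register_agent("invoice-agent-7", owner_team="finance-ops")

    cred = broker.issue_credential("invoice-agent-7", scope="invoices:read")
    print("credential valid:", cred.is_valid())

    broker.kill_switch("invoice-agent-7")   # the agent misbehaves: disconnect it
    try:
        broker.issue_credential("invoice-agent-7", scope="invoices:read")
    except PermissionError as err:
        print("blocked:", err)
```

The key design choice in this sketch is that each agent holds its own identity, so revoking one agent never affects another, which is exactly the property shared credentials destroy.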