AI/ML, Identity, IAM Technologies
AI-to-app connections are the new shadow IT: Why we need guardrails for autonomous agents

Adobe Stock Images
It's 10 a.m. Do you know what your AI agents are doing?Odds are you don't. The rapid adoption of fully autonomous AI agents, which can interact with other applications and services, carry out tasks and make decisions on their own, is leaving human supervisors in the dark and creating a shadow IT of potentially rogue AI tools.Unauthorized use of AI based on large language models is spinning out of control. Half of the legal and finance professionals who responded to a recent survey said they had already used unauthorized AI tools in the workplace. Another 23% said they hadn't yet but would.At the same time, organizations are adopting two new protocols for aiding communication between applications and AI agents, aimed at giving the agents smoother access to third-party user accounts and information. Yet the Model Context Protocol has optional security safeguards and can be exploited using common techniques. The Agent-to-Agent protocol has built-in security but can still be abused with malicious results. Neither protocol manages or monitors AI agent activity, and both let AI agents interact without human supervision."Unmanaged AI agents introduce a new class of security risk because they act autonomously, often with broad access to sensitive systems," says Arnab Bose, Chief Product Officer of the Okta Platform. "Without proper controls, these agents can unintentionally leak data, trigger actions across connected applications, or be exploited by threat actors through techniques like prompt injection or token hijacking."All this irrational exuberance makes the AI boom feel like the dot-com boom of the late 1990s, when new networking and file-transfer protocols were deployed with scant consideration for security. The explosion of malware, phishing, ransomware attacks and other types of online cybercrime that followed was the direct result of those poor decisions.Let's not make the same mistakes again. It's time to implement strict management of AI agents so that they can be monitored, logged, restricted and protected. Existing identity-security frameworks like the OAuth protocol can be expanded to do just that.Tool poisoning in which secret commands, invisible to the human user, are hidden in the instructions given to AI agents Command injection due to lack of proper authentication standards and integrity controls Rug pulls, or when tools update their own code after installation and approval Typosquatting, as the MCP permits different tools to have the same name Cross-tool contamination in which different MCP servers connected to the same AI agent interfere with each others' operations Permission reuse when permissions granted for one task are reused for more tasks without reauthorization As for A2A, its flaws lie in the self-presented agent cards, which don't need to be entirely accurate. Rogue AI agents can exaggerate their capabilities on their cards to "win" tasks over other agents that are better qualified.While the MCP nor A2A protocols support the open OAuth authorization standard, neither actively manages or monitors AI agents, and security teams and users may be unaware of what's going on under the hood."Organizations might not know what an agent has access to, what it's doing, or who it's interacting with," Bose says. "That creates serious blind spots for security teams. And because many of these connections bypass traditional identity checks, there's often no easy way to revoke access if something goes wrong."For example, an AI agent acting as an executive's personal assistant might be granted permission to access the executive's Office 365 account, Outlook inbox and calendar, Word documents and OneDrive files. It could act on the executive's behalf by replying to simple email queries or adding appointments to the calendar.Should AI agents have that much autonomy and access without supervision or monitoring, given the inherent security risks? The agent could be tricked into sharing the executive's Office 365 credentials or personal details with an attacker, persuaded to set up phony meetings or send bogus emails to employees, or even become part of a business email compromise scam."If an employee sets up an integration between an AI agent and an app like Slack or Google Drive, it creates an app-to-app connection that lacks IT oversight," Bose explains. "This blind spot is created even if an employee's initial access to an AI tool is through SSO [single sign-on], because today’s identity standards delegate integration control to the user, not the admin."
An In-Depth Guide to AI
Get essential knowledge and practical strategies to use AI to better your security program.
Get daily email updates
SC Media's daily must-read of the most current and pressing daily news
You can skip this ad in 5 seconds