AI/ML, Identity, IAM Technologies

AI-to-app connections are the new shadow IT: Why we need guardrails for autonomous agents

AI agent and generative artificial intelligence concept. Businessman using AI agents on screen, including chatbots, AI assistants, and data analytics tools on a laptop. LLM, Ai agentic workflows.

It's 10 a.m. Do you know what your AI agents are doing?

Odds are you don't. The rapid adoption of fully autonomous AI agents, which can interact with other applications and services, carry out tasks and make decisions on their own, is leaving human supervisors in the dark and creating a shadow IT of potentially rogue AI tools.

Unauthorized use of AI based on large language models is spinning out of control. Half of the legal and finance professionals who responded to a recent survey said they had already used unauthorized AI tools in the workplace. Another 23% said they hadn't yet but would.

At the same time, organizations are adopting two new protocols for aiding communication between applications and AI agents, aimed at giving the agents smoother access to third-party user accounts and information.

Yet the Model Context Protocol has optional security safeguards and can be exploited using common techniques. The Agent-to-Agent protocol has built-in security but can still be abused with malicious results. Neither protocol manages or monitors AI agent activity, and both let AI agents interact without human supervision.

"Unmanaged AI agents introduce a new class of security risk because they act autonomously, often with broad access to sensitive systems," says Arnab Bose, Chief Product Officer of the Okta Platform. "Without proper controls, these agents can unintentionally leak data, trigger actions across connected applications, or be exploited by threat actors through techniques like prompt injection or token hijacking."

All this irrational exuberance makes the AI boom feel like the dot-com boom of the late 1990s, when new networking and file-transfer protocols were deployed with scant consideration for security. The explosion of malware, phishing, ransomware attacks and other types of online cybercrime that followed was the direct result of those poor decisions.

Let's not make the same mistakes again. It's time to implement strict management of AI agents so that they can be monitored, logged, restricted and protected. Existing identity-security frameworks like the OAuth protocol can be expanded to do just that.

Little AIs, running wild

Just before the start of the 2025 RSAC conference, JPMorgan Chase CISO Patrick Opet startled the security industry when he posted "An Open Letter to Third-Party Suppliers" demanding better security from SaaS providers.

Many of Opet's warnings could also apply to AI developers and providers: "Software providers must prioritize security over rushing features. Comprehensive security should be built-in or enabled by default."

He also directly addressed AI security shortfalls with a hypothetical example.

"An AI-driven calendar optimization service integrating directly into corporate email systems through 'read-only roles' and 'authentication tokens' can no doubt boost productivity when functioning correctly," Opet wrote. "Yet, if compromised, this direct integration grants attackers unprecedented access to confidential data and critical internal communications."

Along similar lines, a recent Salesforce study found that nine different large language models developed by Open AI, Google and Meta failed to understand the importance of keeping confidential data private, demonstrating "near-zero confidentiality awareness" and "an inherent lack of prioritization or understanding of confidentiality protocols by LLM agents."

 AI agents have strayed out of bounds in real life. In April 2025, a customer-service bot logged out users of the AI-based code editor Cursor if they moved from one PC to another. The AI agent had hallucinated and implemented a nonexistent company policy that "required" separate accounts for each user machine.

We need to think of AI agents as non-human identities on steroids. They may have high privileges and access to proprietary data, yet they are rarely monitored or limited in the scope of their activities, and they fail to distinguish between public sources and proprietary or secret information.

They also are by nature non-deterministic. That's a fancy way of saying that we can never predict what an AI agent will do.

"Like other non-human identities (NHIs), AI agents often lack clear ownership and human oversight," Okta's Bose wrote in a recent blog post. "But unlike those other NHIs, AI agents behave autonomously, meaning they can act in ways security teams may not expect."

The MCP and A2A are not management tools

Things move so quickly in the AI world that the Model Context Protocol, or MCP, introduced by Anthropic in November 2024, was already being widely used six months later. The other major new open protocol, Agent2Agent or A2A, was unveiled by Google in April 2025.

MCP uses a server-client model to connect AI agents to existing applications, routing commands and requests through MCP servers to AI and application clients. A2A is peer-to-peer, permitting AI agents to talk directly to each other and collaborate on tasks.

In both protocols, AI agents advertise their own capabilities using "agent cards" so that MCP servers and other AI agents can select those best suited to the task at hand. And both protocols have security flaws that have not been completely addressed.

MCP is vulnerable to:

  • Tool poisoning in which secret commands, invisible to the human user, are hidden in the instructions given to AI agents
  • Command injection due to lack of proper authentication standards and integrity controls
  • Rug pulls, or when tools update their own code after installation and approval
  • Typosquatting, as the MCP permits different tools to have the same name
  • Cross-tool contamination in which different MCP servers connected to the same AI agent interfere with each others' operations
  • Permission reuse when permissions granted for one task are reused for more tasks without reauthorization
  • As for A2A, its flaws lie in the self-presented agent cards, which don't need to be entirely accurate. Rogue AI agents can exaggerate their capabilities on their cards to "win" tasks over other agents that are better qualified.

    While the MCP nor A2A protocols support the open OAuth authorization standard, neither actively manages or monitors AI agents, and security teams and users may be unaware of what's going on under the hood.

    "Organizations might not know what an agent has access to, what it's doing, or who it's interacting with," Bose says. "That creates serious blind spots for security teams. And because many of these connections bypass traditional identity checks, there's often no easy way to revoke access if something goes wrong."

    For example, an AI agent acting as an executive's personal assistant might be granted permission to access the executive's Office 365 account, Outlook inbox and calendar, Word documents and OneDrive files. It could act on the executive's behalf by replying to simple email queries or adding appointments to the calendar.

    Should AI agents have that much autonomy and access without supervision or monitoring, given the inherent security risks? The agent could be tricked into sharing the executive's Office 365 credentials or personal details with an attacker, persuaded to set up phony meetings or send bogus emails to employees, or even become part of a business email compromise scam.

    "If an employee sets up an integration between an AI agent and an app like Slack or Google Drive, it creates an app-to-app connection that lacks IT oversight," Bose explains. "This blind spot is created even if an employee's initial access to an AI tool is through SSO [single sign-on], because today’s identity standards delegate integration control to the user, not the admin."

    Cross App Access to the rescue

    To enable effective monitoring and management of AI agents, Okta plans to start using an extension of OAuth that should be ready later this year.

    Called Cross App Access or, more technically, Identity Assertion Authorization Grant, the extension will redirect AI agent requests for application access made through the MCP or A2A protocols to an organization's identity provider, which can then grant (or deny), track and log the request as part of its own systems.

    "With Cross App Access, organizations can define exactly which agents or applications are allowed to connect, what data they can access, and under what conditions," Bose says. "IT can centrally manage these connections, audit them, and revoke them instantly if needed."

    Cross App Access also streamlines the process of authorizing access to multiple applications at once.

    "Just like SSO simplifies and secures how users access multiple applications, Cross App Access brings that same centralized control, consistency, and visibility to AI agents as they interact across tools," says Bose. "It replaces today's risky patterns like embedded credentials, long-lived tokens, and user-managed connections with short-lived, scoped tokens issued and governed by identity policies."

    The extension will not be exclusive to the Okta Platform, and any modern identity-provision system should be able to use it as long as it supports OAuth.

    Even more importantly, Cross App Access and projects like it represent the first steps of properly securing and managing AI agents so that they remain under human control and within limits set by humans.

    To go back to Patrick Opet's open letter, he stated that with regard to SaaS applications, "we stand at a critical juncture."

    Again, his words could be applied to AI development, and to the promise offered by centralized management and monitoring of AI agents.

    "Providers must urgently reprioritize security," he wrote. "Customers should be afforded the benefit of secure-by-default configurations, transparency to risks, and management of the controls they need to operate safely. ... We need sophisticated authorization methods, advanced detection capabilities, and proactive measures to prevent the abuse of interconnected systems."

    An In-Depth Guide to AI

    Get essential knowledge and practical strategies to use AI to better your security program.
    Paul Wagenseil

    Paul Wagenseil is a custom content strategist for CyberRisk Alliance, leading creation of content developed from CRA research and aligned to the most critical topics of interest for the cybersecurity community. He previously held editor roles focused on the security market at Tom’s Guide, Laptop Magazine, TechNewsDaily.com and SecurityNewsDaily.com.

    Get daily email updates

    SC Media's daily must-read of the most current and pressing daily news

    By clicking the Subscribe button below, you agree to SC Media Terms of Use and Privacy Policy.

    You can skip this ad in 5 seconds