AI/ML, Generative AI, Threat Intelligence

AI’s brightest promise may be its biggest risk

Credit: Adobe Stock

A mix of AI hype, agentic AI over-promising and not-ready-for-primetime products are prompting leading voices in cybersecurity to sound alarm bells. They warn buying into today’s agentic AI can feel like investing in a self-driving car that turns out to be a go-kart with cruise control.

While organizations are sold on AI and agentic AI as representing the future of enterprise automation, the reality is tech and cybersecurity teams are left constantly supervising AI projects. Worse, a growing chorus of disparate experts warn that a rush to jump on the agentic-AI bandwagon is exposing companies to cyber risks.  

An SC Media examination of recent cybersecurity and AI reports from Gartner, Cisco Talos, Cobalt, Flashpoint and Kaspersky spotlight a corner of the agentic AI market where vendors are building AI solutions faster than they can precisely define what they do, how to secure them and lack clear governance.

The agentic AI tech hype, sell hard and deploy widely cycle, isn’t new. But AI — unlike cloud, mobile and IoT — is quickly being entrusted inside the SOC's inner-sanctums at breakneck speed. Eighty percent of global companies report adopting AI to improve their business operations, according to Edge Delta.

Consensus is, we are building the next generation of tech without securing the foundation or even defining what that foundation is. Understandably, fueling this rush are the very real productivity gains and security breakthroughs underlying AI and the pot of AI gold for vendors selling solutions.   

All that glitters is not agentic gold

Investors, vendors and enterprise buyers are racing into this market at full throttle. MarketsandMarkets projects that agentic AI will grow from $13.8 billion in 2025 to nearly $141 billion by 2032.

Gartner calls much of what it is seeing “agent-washing,” according to a report released last week. It estimated that out of the thousands of agentic AI vendors, only 130 are "real." 

According to Gartner, an agentic AI system is defined by its ability to operate with goal-directed autonomy. It must plan, act, and adapt in real time without human micromanagement. The problem is that most tools being sold today don’t come close, it argued.

Gartner said some vendors are effectively dressing up scripted bots with a polished interface and marketing them as intelligent agents. The report warns that companies are slapping the “agentic AI” label on everything from old RPA scripts to glorified macros.

“Most agentic AI projects right now are early stage experiments or proof of concepts that are mostly driven by hype and are often misapplied,” said Anushree Verma, senior director analyst, Gartner. “This can blind organizations to the real cost and complexity of deploying AI agents at scale, stalling projects from moving into production. They need to cut through the hype to make careful, strategic decisions about where and how they apply this emerging technology.”

Meanwhile, enterprise buyers are sold the idea that these tools can make intelligent, autonomous decisions, when in reality many AI bots still can’t even shepherd a help desk ticket along without human intervention.

For that reason, Gartner estimated that 40% of agentic AI projects will be canceled by 2027 due to implementation failures and inflated expectations. “Agentic AI projects are being driven by hype, not value,” Verma said.

Big dreams, bigger holes

MarketsandMarkets expected agentic AI adoption to expand fastest in IT service management and incident response because these workflows are high-volume, rules-based, and easy to automate. That makes them low-hanging fruit for AI experimentation and high-risk territory for anything that fails silently, critics warn.

Flashpoint and MarketsandMarkets both reported that enterprises are already integrating agentic AI into customer service workflows, decision support tools, and infrastructure automation.

"Amid rising pressure to 'use AI,' defenders are navigating a maze of assumptions, marketing promises, and misconceptions. The technology is moving fast, but so is the confusion around what it can (and can’t) do," Flashpoint said.

But according to Cobalt, these systems are often deployed without visibility into how decisions are made or validated. As Cobalt put it: “Visibility into how LLMs make decisions — and how those decisions could be exploited — is still largely missing from enterprise deployments.”

Cobalt’s 2025 State of LLM Application Security report found that 32% of tested LLM applications had serious security flaws, and only 21% of the flaws were remediated. The most common issues included prompt injection, model denial-of-service, and data leakage vulnerabilities.

GenAI flaws are fixed much less often than other types of flaws, such as API flaws, which are resolved more than 75% of the time, and cloud vulnerabilities, which are fixed in 68% of cases, cited SC Media reporting on the Cobalt report.

Developers “building in the dark” means without the security tooling or best practices to anticipate emergent behavior, according to Cobalt. One example highlighted by Cobalt is a healthcare chatbot that it said leaked sensitive patient data after being manipulated through prompt injection. This was caught only during manual human testing.

Criminal creativity outpaces enterprise caution

While defenders are still puzzling over governance models and safe deployment, attackers are improvising with jail broken and fine-tuned LLMs to scale fraud, phishing and malware development.

In a report released last week, Cisco Talos found that black-market tools such as WormGPT and FraudGPT are built on stripped-down versions of open-source models including LLaMA and GPT-J. These systems are repackaged to generate malicious code, write persuasive phishing emails and guide attackers in evading security measures.

Repackaging open-source models typically involves removing safeguards, retraining them on malicious data, or bundling them into plug-and-play tools on dark web forums and Telegram.

And the attacks are getting more advanced. Prompt injection attacks, where malicious inputs trick the model into acting outside intended parameters, have gone mainstream. Cisco called these Retrieval Augmented Generation (RAG) pipelines.

LLMs using RAG fetch real-time information from external sources to enhance their responses. For instance, if you ask about the weather on a specific day, the model queries a website to retrieve the latest forecast. However, if an attacker gains access to the data source, they could tamper with the information and alter the weather report or embedding hidden instructions to change the model’s response. Such manipulation could mislead users or even target individuals with customized misinformation.  

Cisco said prompt injection and RAG attacks aren't just a novelty attacks, they have become operationalized. "The threat surface is expanding faster than the defensive playbook,” Cisco said.

While these scenarios are less about agentic AI washing, they do play into the larger AI gold-rush narrative impacting enterprise and shadow AI threats security teams must contend with.

Defenders, meanwhile, are being asked to both adopt and secure these tools. This leads to what many call a “hype fog,” where decision-makers struggle to separate innovation and unsubstantiated buzz from risk. The term is meant to connote a billboard shrouded in dense fog — message visible, but details obscured.

AI manipulation bazaar

In its latest threat intelligence research, Flashpoint chronicled the rise of deepfake-as-a-service marketplaces, fraud-focused LLMs for sale on the dark web, and purpose-built tools to automate identity theft, impersonation, and misinformation. One deepfake-as-a-service kit highlighted by the firm specialized in "custom face generation," voice impersonation and synthetic video.

"These offerings are designed to fool verification systems used by financial institutions and other regulated industries," Flashpoint said.

Flashpoint's approach to integrating AI into its platform decidedly in partnership with "human expertise." Defenders aren’t helpless, only overwhelmed, it noted.

"Transparency, oversight, and expert interpretation aren’t optional; they’re built into our design.
Because in critical missions, AI needs to empower people, not distract them," it maintained. Flashpoint doesn’t promote autonomous AI defenses, rather a fusion of machine-scale monitoring with human analyst insight.

Flashpoint's down-to-earth antidote for fog-hype complimented Gartner’s warning over agent-washing where the current marketplace appears to value buzzwords over functionality. Both suggested the disconnect between promises versus reality makes it easier for bad actors to thrive and harder for CISOs to evaluate real value.

Trust it, install it, regret it

Kaspersky’s threat report showed how the AI buzz is being used as bait and is disproportionately impacting small- and medium-sized businesses (SMB). Often without dedicated security staff, SMBs are most vulnerable to deceptive downloads. Users see the word “AI,” associate it with innovation, and click.

In 2024, researchers detected more than 300,000 malicious installers disguised as popular collaboration tools and AI brands. These files were distributed via phishing campaigns, third-party software repositories and social media ads. While some were named after real tools like Zoom or Teams, many mimicked ChatGPT or AI-enhanced utilities to gain legitimacy.

“The branding of AI is now a vector,” Kaspersky wrote, meaning that the appearance of intelligence in a tool, platform or download is enough to lower user defenses.

For example, one malware campaign disguised a credential-stealing trojan as a “ChatGPT Desktop Assistant.” The installer’s branding and interface looked legitimate, but it quietly exfiltrated browser-stored passwords.

AI security tools have their own problems

Ironically, one of the fastest-growing segments of the AI market are the very tools designed to secure it. The AI in security tools market was worth $25 billion in 2024 and jumps to $94 billion in 2030, according to Grand View Research.

But these tools come with caveats. LLM-based SOC assistants are still prone to hallucinations. Many model-monitoring solutions offer limited explainability. And across the board, there’s minimal consensus on how to audit agentic behavior in high-stakes environments. These insights were drawn from both Cobalt’s and Flashpoint’s research.

A fragile future, branded as progress

Gartner, Flashpoint, Cobalt and Cisco all converge on the same warning: agentic AI is being deployed faster than it is being understood. There’s no standard definition of what qualifies as an agent. No agreed-upon methods to test one. And little transparency about how these systems function under pressure.

Gartner's take: The path to agentic AI is not wrong, but it’s incomplete. Without a foundation in secure, verifiable execution, early efforts are likely to over promise and under deliver.

Cobalt echoed the same sentiment: AI will not collapse under its own weight, but poorly secured deployments will.

As multiple independent and recent reports suggest, the rush to adopt agentic AI is far outpacing our collective understanding of its limitations. Without a shared framework for transparency, accountability, and security, what appears to be intelligent autonomy may in fact be a fragile facade.

Until these gaps are addressed, what we’re really doing is racing in a high-speed go-kart, dressed up as a Tesla, with no clear idea who’s steering.

An In-Depth Guide to AI

Get essential knowledge and practical strategies to use AI to better your security program.
Tom Spring, Editorial Director

Tom Spring is Editorial Director for SC Media and is based in Boston, MA. For two decades he has worked at national publications in the leadership roles of publisher at Threatpost, executive news editor PCWorld/Macworld and technical editor at CRN. He is a seasoned cybersecurity reporter, editor and storyteller that aims always for truth and clarity.

Get daily email updates

SC Media's daily must-read of the most current and pressing daily news

By clicking the Subscribe button below, you agree to SC Media Terms of Use and Privacy Policy.

You can skip this ad in 5 seconds