
Agentic AI used by threat actors to turbocharge cyberattacks


The rise of agentic AI systems is already having dramatic repercussions on the cybersecurity threat landscape, as advances in the technology allow for the rapid automation of cyberattacks.

Researchers with Palo Alto Networks Unit 42 said they have spotted multiple instances of threat actors employing artificial intelligence (AI) platforms to make their attacks far more numerous, effective and difficult to catch.

“A significant evolution is the emergence of agentic AI — autonomous systems capable of making decisions, learning from outcomes, problem solving and iteratively improving their performance without human intervention,” wrote Unit 42 researcher Sam Rubin.

“These systems have the potential to independently execute multistep operations, from identifying targets to adapting tactics midattack. This makes them especially dangerous.”

How threat actors use agentic AI to their advantage

According to the Unit 42 researchers, threat actors have been employing agentic AI tools in multiple ways.

In some cases, attackers are using AI to dramatically speed up the process of infiltrating networks and exfiltrating data. Unit 42 estimated that between 2021 and 2024 the mean time needed to exfiltrate data dropped from nine days to just two. In 20% of observed cases, threat actors needed less than one hour to go from initial infiltration to completed exfiltration of the target's data.

Ransomware negotiation is one of the more interesting applications of agentic AI. The team found that some cybercrime groups are taking advantage of AI translation tools to better communicate with their victims when negotiating the price of preventing data disclosure.

Cybercriminals have also used agentic AI tools to harvest credentials. The researchers found that some threat actors use AI assistants to locate and gather logins from within a compromised network, saving significant time on reconnaissance.

AI-enabled deepfake technology has also been known to play a part in attacks. In addition to the previously documented campaigns by North Korean threat actors seeking to embed themselves as IT contractors, deepfakes have been spotted in use by attackers impersonating employees to pull off helpdesk scams.

New kid on the threat block: Agentic AI

The Unit 42 findings in many ways match research from the CyberRisk Alliance, which found that many security professionals see the use of AI technologies by threat actors as a primary threat going forward. The fear is that AI tools are allowing threat actors to make their attacks more efficient and effective.

On the other hand, there remains optimism in the industry that defenders will make use of agentic AI on their end as well. By assigning AI systems tasks such as monitoring network traffic and analyzing suspicious activity, administrators and security professionals may be better able to spot and investigate threats and attacks.
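To make that defensive use case concrete, here is a minimal Python sketch of the kind of baseline-and-flag check an automated monitoring agent might run over outbound traffic volumes. The hosts, byte counts and z-score threshold are hypothetical illustrations, not part of Unit 42's research, and a real agentic defense would layer far richer telemetry and models on top of logic like this.

```python
# Minimal sketch of automated traffic monitoring, assuming flow records
# are available as per-host outbound byte counts. All data and the
# threshold below are illustrative, not a production detector.
from statistics import mean, stdev

# Hypothetical per-host outbound byte counts sampled over recent intervals.
baseline = {
    "10.0.0.5": [1_200, 1_350, 1_100, 1_280],
    "10.0.0.9": [800, 760, 820, 790],
}

def flag_anomalies(current: dict[str, int], z_threshold: float = 3.0) -> list[str]:
    """Flag hosts whose current outbound volume sits far above their baseline."""
    flagged = []
    for host, observed in current.items():
        history = baseline.get(host)
        if not history or len(history) < 2:
            continue  # not enough history to score this host
        mu, sigma = mean(history), stdev(history)
        if sigma > 0 and (observed - mu) / sigma > z_threshold:
            flagged.append(host)
    return flagged

# A sudden spike from 10.0.0.5 (e.g., bulk exfiltration) trips the check.
print(flag_anomalies({"10.0.0.5": 250_000, "10.0.0.9": 805}))  # ['10.0.0.5']
```

A spike like the one from 10.0.0.5 mirrors the rapid bulk exfiltration described above; in practice, an agentic system would chain a check like this with enrichment and response steps rather than simply printing a list of hosts.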

“When dealing with AI-enhanced cyberthreats, it's important to recognize that we are still at a point where AI serves as an amplifier for traditional attack techniques rather than fundamentally altering them,” Rubin said.

“While the frequency and ease of executing certain attacks may increase, the foundational strategies for effective detection and response still hold strong.”
