
AI Arms Race: The cat-and-mouse game with no obvious winner


The lure of artificial intelligence (AI) captures everyone’s imagination with its potential to improve so many areas of people’s lives. The industries already getting a lot of mileage out of the technology run the gamut from e-commerce, transportation, and home automation to web search, machine translation, and customer experience (CX). And yes, AI offers great promise for cybersecurity.

Protection systems based on machine learning algorithms outstrip traditional security mechanisms, detecting breaches, fraud, and malware outbreaks with unprecedented speed and success rates. These solutions can quickly traverse and analyze huge amounts of data to identify patterns that match or resemble known attack vectors. Most importantly, next-generation defense tools enrich their data models autonomously, honing the ability to pinpoint emerging threats, zero-days, and phishing emails that no other tools can catch.

AI has become a double-edged sword, though. The anticipation that it will finally offer white hats a game-changing advantage over crooks appears premature. Malicious actors are mastering the technology to step up their game as well. It lets bots mimic human behavior more convincingly, underlies highly effective social engineering campaigns, and plays a role in creating predatory code that flies under the radar.

The genie is out of the bottle, and it’s not going back in. With that in mind, security professionals and cybercriminals are waging a novel war, competing to harness AI more effectively. Let’s zoom in on what’s happening on both sides of the fence and try to figure out who’s doing a better job.

Benign uses of AI in cybersecurity

AI can make a real difference with the early detection of cyberattacks. This proactive capability draws on two complementary branches of the technology: machine learning and deep learning. Security pros can leverage the former to spot patterns across huge data sets and pinpoint anomalies that may denote malicious activity. It also enhances behavior analytics, a process that singles out the attributes of legitimate user interaction with electronic systems in order to identify inconsistencies. This can thwart fraud and boost the vigilance of the end user.
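
As a rough illustration, the sketch below trains a simple anomaly detector on simulated “normal” session features and then flags a session that deviates from that baseline. The feature set, the sample values, and the choice of Python with scikit-learn’s IsolationForest are assumptions made for the example, not something prescribed by any particular vendor or this article.

# A minimal sketch, assuming the defender has numeric per-session features
# (login hour, data volume, failed-login count). Values are simulated.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Simulated "normal" sessions: daytime logins, modest transfer volumes,
# very few failed logins
normal_sessions = np.column_stack([
    rng.normal(13, 2, 1000),    # login hour
    rng.normal(50, 15, 1000),   # megabytes transferred
    rng.poisson(0.2, 1000),     # failed login attempts
])

# Learn a baseline of normal behavior
model = IsolationForest(contamination=0.01, random_state=0).fit(normal_sessions)

# A session that deviates sharply: 3 a.m. login, huge transfer, repeated failures
suspicious = np.array([[3, 900, 6]])
print(model.predict(suspicious))            # -1 means flagged as anomalous
print(model.decision_function(suspicious))  # lower score = more anomalous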

Deep learning algorithms, in turn, yield exceptional results in speech and image recognition. This makes them especially effective at detecting spear-phishing attacks, business email compromise (BEC), and other scams that impersonate trusted individuals or organizations. Deep learning also minimizes errors in biometric authentication systems, adding an extra layer of protection against account takeover and unauthorized access to various facilities.
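
To make the idea concrete, here is a toy sketch of a text classifier that scores messages for phishing-style wording using a tiny neural network in TensorFlow/Keras. The sample emails, labels, and model layout are invented purely to show the shape of such a pipeline; a real deployment would rely on large labeled corpora and far more capable models.

# A toy sketch, not a working detector: samples and labels are invented.
import tensorflow as tf
from tensorflow.keras import layers

emails = [
    "Your account has been suspended, verify your password immediately",
    "Wire transfer needed urgently, reply with your bank details",
    "Team lunch is moved to Friday at noon",
    "Please review the attached agenda before tomorrow's meeting",
]
labels = tf.constant([1.0, 1.0, 0.0, 0.0])  # 1 = phishing-like, 0 = benign

# Turn raw text into fixed-length sequences of token ids
vectorizer = layers.TextVectorization(max_tokens=1000, output_sequence_length=20)
vectorizer.adapt(emails)
X = vectorizer(tf.constant(emails))

# A small embedding-plus-dense classifier
model = tf.keras.Sequential([
    layers.Embedding(input_dim=1000, output_dim=16),
    layers.GlobalAveragePooling1D(),
    layers.Dense(16, activation="relu"),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, labels, epochs=20, verbose=0)

# Score a new message: values closer to 1 look more phishing-like
new_message = vectorizer(tf.constant(["Urgent: confirm your login credentials now"]))
print(model.predict(new_message))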

AI systems also offer a shortcut to digital asset inventory. This procedure helps organizations stay abreast of all the devices, applications, and users that have different levels of access to the enterprise network. AI can discover weak links in the security posture and predict where and how a company is most likely to be compromised, helping IT teams prioritize their defenses.

With these advancements handling detection and prevention, security experts can focus on incident response. However, there’s a nontrivial challenge: while machine and deep learning algorithms are remarkably good at finding deviations from the norm, they aren’t as adept at extracting new patterns out of data logs on their own. Ultimately, the results are only as good as the models the AI was trained on.

Threat actors have also jumped on the hype train

Using automated bots to mimic real users has become one of the most lucrative monetization schemes on the dark web. From solving CAPTCHAs and generating spam to posting fake reviews and inflating subscriber counts on social network profiles, these tactics are a classic form of foul play in a hacker’s repertoire. On the plus side, online services have learned to identify most of these frauds in a snap. But AI makes it much easier for cybercriminals to imitate human activity.

For instance, bots that use machine learning can dupe anti-fraud algorithms of social networks to create new “sock puppet” accounts and maintain realistic activity on them for years. They can also perform open-source intelligence (OSINT) to collect publicly available data about users, determine their pain points, and orchestrate hyper-targeted scams. To top it off, AI enables malefactors to automate correspondence with would-be victims by predicting probable replies and choosing the most effective communication strategy without human involvement.

Another example of impactful abuse is gaming the Know Your Customer (KYC) workflows banks use to verify their clients. Forging identification data such as photos, fingerprints, or even voice samples becomes much easier with AI in the mix. The pandemic prompted some financial institutions to add video conferencing tools to their customer interaction channels, and the controversial deepfake technology lets fraudsters manipulate these checks as well. On a side note, nation-states can also use it to conduct politically flavored misinformation campaigns on a large scale.

Threat actors can also tamper with the data sets that deep learning systems are trained on. This data poisoning skews the resulting models, distorting the big picture and hampering threat detection. Malware authors can likewise piggyback on the technology by creating environment-aware malicious code that adjusts its behavior to the peculiarities of the target network. This extensive mutation capability has become a major stumbling block for the detection features of traditional security tools.
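
The data poisoning risk is easy to demonstrate at toy scale: the sketch below flips a fraction of training labels and compares the accuracy of a simple classifier trained on clean versus tampered data. The synthetic dataset, the logistic regression model, and the 30 percent flip rate are assumptions chosen for illustration only.

# A minimal, illustrative sketch of label-flipping "data poisoning"
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic binary classification data standing in for security telemetry
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Baseline: model trained on clean labels
clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("clean accuracy:   ", clean_model.score(X_test, y_test))

# Poisoned copy: flip 30% of the training labels
rng = np.random.default_rng(0)
poisoned = y_train.copy()
flip = rng.choice(len(poisoned), size=int(0.3 * len(poisoned)), replace=False)
poisoned[flip] = 1 - poisoned[flip]

poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, poisoned)
print("poisoned accuracy:", poisoned_model.score(X_test, y_test))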

What can security teams do to tip the balance?

AI has raised the stakes in the confrontation between security pros and cybercriminals. On the one hand, it helps the bad guys manipulate protection tools. On the other, it allows the good guys to detect that exploitation more efficiently. So far, both sides appear to be winning, and that’s exactly what needs to change.

Hackers can misuse AI largely because the technology is being democratized: many machine learning and deep learning models are open source. Some experts believe that governments should supervise and restrict the use of the technology across industries. However, this tactic would most likely hold back the further evolution of AI rather than discourage crooks from exploiting it.

Instead, white hats need to allocate more resources and effort to safeguarding the integrity of the training data that intelligent systems rely on. The industry also needs to enhance AI’s capability to go beyond existing threat models and identify new patterns in that data. Security teams will have to go through a lot of trial and error along the way, but the results will define the state of cybersecurity for years to come.

David Balaban, owner, Privacy-PC.com


The U.S. Federal Trade Commission released guidance on AI in April. Today’s columnist, David Balaban of Privacy-PC, says government regulation could likely hold back the technology’s progress.
