
OpenAI bans ChatGPT accounts linked to state-sponsored threat activity

“Our approach to AI safety” article seen on the OpenAI website on an iPhone screen. OpenAI is a US artificial intelligence (AI) research laboratory.

The maker of ChatGPT said it has spotted and disrupted a number of state-sponsored operations it believes were abusing the AI tool to create malware and run espionage campaigns.

OpenAI said in its June security report that it spotted and disrupted a number of attacks, most originating in China and Russia, that appear to have been using ChatGPT to either generate code or automate the process of making social media posts or emails for social engineering campaigns.

“AI investigations are an evolving discipline,” the OpenAI team said in its report.

“Every operation we disrupt gives us a better understanding of how threat actors are trying to abuse our models, and enables us to refine our defenses.”

The report included a handful of case studies outlining the various ways in which it has seen threat actors use ChatGPT. Of the 10 selected cases, seven involved use of ChatGPT for social engineering, while another two involved code generation for malware operations.

North Korean IT worker operation disrupted

Also included in the report was the already-reported North Korean IT worker operation, in which ChatGPT was used by threat actors within the Hermit Kingdom to impersonate IT contractors from other locations.

In that case, the threat actors were found to have used ChatGPT to generate job applications and resumes. The North Korean operators also relied on the AI tool to communicate with their third-party partners in the scam.

“The core operators used ChatGPT as a research tool to help inform remote-work setups,” the OpenAI team explained.

“They also engaged our models to generate text concerning the recruitment of real people in the US to take delivery of company laptops, which would then be remotely accessed by the core threat actors or their contractors.”

China and Russia using ChatGPT to sow distrust on social media

By and large, the operators were found to be Chinese and Russian. Four of the 10 attacks were from China, while three more were from Russia. The remaining case studies originated in Iran, North Korea and the Philippines.

In one scheme, dubbed "Uncle Spam," Chinese threat actors were found to be using ChatGPT to play both sides of the fence on the issue of U.S. tariffs, generating social media comments both for and against the controversial Trump administration policy.

The threat actors went a step further by using ChatGPT to generate logos and images for phony groups on social networking sites, further working to sow conflict among the American public.

In each case, OpenAI remedied the issue by banning the offending accounts, effectively shutting down the operations.

The OpenAI team noted that while threat actors have been making use of AI in their attacks, defenders have also been turning it to their advantage. OpenAI said it used its own models to help identify and track the threat actors.

“By using AI as a force multiplier for our expert investigative teams, in the three months since our last report we’ve been able to detect, disrupt and expose abusive activity including social engineering, cyber espionage, deceptive employment schemes, covert influence operations and scams,” the company said.

Shaun Nichols

A career IT news journalist, Shaun has spent 17 years covering the industry with a specialty in the cybersecurity field.
