Threat Intelligence, AI/ML

Report: Weaponized LLMs escalating cybersecurity risks

AI Sphere: A glowing red AI logo illuminates a futuristic, sci-f

Cybersecurity professionals are facing new threats as weaponized large language models customized for offensive operations become increasingly accessible and sophisticated. according to a report by VentureBeat.

Variants such as FraudGPT, GhostGPT, and DarkGPT are available on the dark web and platforms like Telegram for as little as $75 per month, offering capabilities for phishing, exploit development, code obfuscation, and more. These models resemble commercial software-as-a-service products, often including dashboards, APIs, and customer support, making them easy to deploy by cybercrime groups and nation-state actors. According to Ciscos State of AI Security Report, fine-tuned models are 22 times more likely to produce harmful outputs than base models, significantly expanding the attack surface. Tests across sectors like healthcare and legal revealed that fine-tuning weakens safety controls, with jailbreak attempts and malicious outputs increasing dramatically, by 2,200% in some cases. Cisco researchers also highlighted the vulnerability of open-source training data to poisoning attacks, which can be carried out for as little as $60 by manipulating datasets like LAION-400M. Additional threats include decomposition prompting, which enables models to leak copyrighted or sensitive content by bypassing guardrails.

An In-Depth Guide to AI

Get essential knowledge and practical strategies to use AI to better your security program.

Get daily email updates

SC Media's daily must-read of the most current and pressing daily news

By clicking the Subscribe button below, you agree to SC Media Terms of Use and Privacy Policy.

You can skip this ad in 5 seconds