Security leaders are heading into 2026 with a growing concern that AI will reshape risk faster than most organizations can govern it. Experts who reached out to SC Media predicted that AI will disrupt vulnerability management and security testing, forcing teams to adopt AI-driven scanning and predictive analytics to keep pace. Others anticipate the first major breaches tied directly to AI adoption, as agentic workflows expand faster than security tooling can mature.

At the same time, ransomware extortion may shift toward data-leak pressure and regulatory uncertainty, fueling broader debate over what companies can ethically, legally, or insurably pay. In response, resilience may depend on pairing automation with human judgment.

Cybersecurity leaders and professionals submitted the following forecasts for 2026.
How AI adoption will change the cybersecurity industry
AI adoption will disrupt vulnerability and security testing, says Dipto Chakravarty, chief product officer at Black Duck:
The traditional approach to vulnerability management and security testing will certainly be disrupted, primarily driven by the increasing adoption of AI in cybersecurity. The old software world is gone, giving way to a new set of truths defined by AI. AI will significantly alter how organizations identify and mitigate vulnerabilities, becoming a tool for both attackers and defenders. Threat actors will leverage AI to automate and scale attacks, while defenders will use AI to enhance detection and response capabilities.

Organizations will need to invest in AI-driven vulnerability scanning and predictive analytics to stay ahead of emerging threats. AI-powered security tools will enable security teams to analyze vast amounts of data, identify patterns, and predict potential threats before they materialize.

The role of AI in AppSec will be transformative, and organizations that fail to adapt risk being left behind. As AI continues to evolve, it's essential for security leaders to prioritize AI-driven security measures and invest in the necessary skills and technologies to stay ahead.
Expect breaches tied to AI adoption, says Chris Wheeler, CISO at Resilience:
2026 will be the year we see the first meaningful breaches tied directly to AI: not attacks assisted by AI, but incidents that exploit AI adoption, which has accelerated due to organic initiatives and vendor integration. Security tooling to protect these workflows is either in its infancy or prohibitively expensive, which creates opportunity for mistakes and misuse, especially downmarket.
Human judgment will work in tandem with AI, says Dave Spencer, director of technical product management at Immersive:
As conversations about automating threat hunting intensify, it’s clear that technology alone won’t define resilience. Signature-based detection still has its place, but attack methodologies evolve too quickly for static indicators to keep up. The best teams hunt for behavior and intent, not alerts. While AI may excel at spotting patterns, human judgment will remain the deciding factor.

This is especially true when securing critical infrastructure, where uptime equals safety. Full automation isn’t resilience. It’s a risk. Automatically isolating a laptop is one thing; disconnecting a mission-critical system is another.

Recent attacks on zero trust architectures have underscored this tension. Even the most “secure” designs can be subverted when adversaries log in rather than break in. This shift will demand AI-driven pattern detection to spot subtle, credential-based threats that humans alone can’t process fast enough. But it also demands proof that automation will act safely and effectively when it matters most.

True resilience will come from neither technology nor people alone, but from proving that both can respond together under pressure, with confidence earned through evidence, not assumption.
In 2026, we will see the proliferation of “vibe hacking” by cybercriminals, says Ryan Fetterman, senior security strategist at Splunk:
The operation tracked by Anthropic as GTG-2002 and characterized as “vibe-hacking” revealed the extent to which skilled operators can scale attack operations using AI. In this series of incidents, Anthropic's Claude models were used to automate every stage of an attack targeting 17 organizations across the government, healthcare, and emergency sectors, from reconnaissance to malware development to data theft and extortion. Even with stricter guardrails and detection, attackers can switch to unguarded or privately hosted open-weight models. Both the success of GTG-2002 and the democratization of powerful AI suggest vibe-hacking will not be something we can put back in the box, and it will only continue to evolve as AI becomes more sophisticated and accessible in the coming year.
Salesforce sets tone for ransom payments
It's unclear what companies get in return for paying ransoms, says Ann Irvine, chief data and analytics officer at Resilience:
Salesforce’s public refusal to pay a ransom demand will set the tone for a larger reckoning around extortion and ransom payments. We’re already seeing threat actors skip encryption entirely and demand money just to avoid leaking stolen data. It’s creating a strange grey zone where it’s unclear what a company is paying for or how it fits under existing regulations. Expect a much louder debate in 2026 about what’s ethical, what’s legal, and what’s insurable.
Stephen Weigand is managing editor and production manager for SC Media. He has worked for news media in Washington, D.C., covering military and defense issues, as well as federal IT. He is based in the Seattle area.