
Why vulnerability scanning and patching alone no longer work


COMMENTARY: George Kurtz, founder and CEO of CrowdStrike, has been credited with inventing vulnerability management. In the more than 20 years since the term was coined and the category created, the practice has come to consume a considerable amount of time and budget for security teams.

Despite both the discipline and the tooling maturing considerably, defenders still struggle to manage vulnerabilities by most objective measures.


Indeed, according to the 2025 Verizon Data Breach Investigations Report (DBIR), 20% of the nearly 10,000 breaches in their analysis were the result of vulnerability exploitation, putting vulnerabilities on par with credential abuse and ahead of phishing among initial access vectors. Mandiant likewise found exploitation to be the initial access method in one-third of its incident response engagements, making it the leading vector.

Given this landscape, defenders must now look for more and more vulnerabilities, even as there's less and less clarity about which to remediate and how. On top of all of this, attackers can now exploit at breakneck speed: not in days or weeks, but within minutes of a vulnerability being published. For more than a quarter of the Known Exploited Vulnerabilities (KEVs) documented in Q1 2025, evidence of exploitation surfaced within one day of the vulnerability being published. Punctuating this concern, AI has both accelerated exploit development and commoditized it for anyone with a web browser. We are witnessing multiple rising tides breaking the scan/patch dam.

The overwhelming growth of CVEs

If software sustains businesses the way food sustains lives, then we can compare vulnerabilities to foodborne illness, and today cases are growing an average of 22% per year. Over 40,000 CVEs were disclosed in 2024 alone, and even conservative projections suggest we are on pace for nearly 50,000 CVEs in 2025. The sheer volume represents a problem unto itself, but there are other events and circumstances that make sifting through CVEs increasingly difficult.
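As a back-of-envelope illustration of that trajectory, the figures above can be extended with simple compound growth. This is a sketch, not a forecast: it assumes the 22% year-over-year rate holds constant, which real disclosure volumes will not do exactly.

```python
# Project annual CVE disclosure volume, assuming a constant
# 22% year-over-year compound growth rate (an assumption).
def project_cves(base_count: int, rate: float, years: int) -> list[int]:
    """Return projected CVE counts for each of the next `years` years."""
    counts = []
    current = float(base_count)
    for _ in range(years):
        current *= 1 + rate
        counts.append(round(current))
    return counts

# Starting from roughly 40,000 CVEs disclosed in 2024:
print(project_cves(40_000, 0.22, 3))  # projections for 2025-2027
```

The first projected year lands near 48,800, consistent with the "nearly 50,000 in 2025" estimate above; two more years of the same rate would put disclosures past 70,000.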

In February 2024, the Linux Kernel project became a CVE Numbering Authority (CNA) and the maintainers’ policy on CVE assignment has raised concerns about whether it’s possible to meaningfully identify and prioritize Linux vulnerabilities — critically important given its pivotal role in digital infrastructure ranging from the cloud to embedded systems. This issue has been exacerbated by the ongoing vulnerability enrichment backlog for the National Vulnerability Database (NVD).

And, of course, the CVE program itself was in jeopardy of shutting down altogether, causing multiple CVE “forks” in an effort to mitigate the potential damage. All of these factors contribute to increased difficulty determining exactly what vulnerabilities to address, but they also create long-term challenges for the overall practice of vulnerability management.

Attackers don’t care about scores

There are already plenty of methodologies for prioritizing vulnerabilities and a corresponding level of debate about their efficacy. Although the factors that feed the various vulnerability scoring systems are constantly being refined, the scores they produce are probabilities of varying qualities and so the decisions based on them are — ultimately — bets.

While it’s possible to make safer bets, adversaries have their own hands to play. Consider the recent example of CVE-2025-24054. It carries a “moderate” base Common Vulnerability Scoring System (CVSS) score of 6.5, below the arbitrary threshold of 7 that many organizations use as the cutoff for high-severity vulnerabilities. Microsoft’s own assessment was that the vulnerability was “less likely to be exploited.”

Even so, evidence of exploitation was uncovered just over a week after the vulnerability was disclosed. Meanwhile, data in the DBIR shows that the median time to remediate known-exploited vulnerabilities is 38 days. That is the fundamental issue: we can prioritize remediation using the best available information, yet still lack sufficient accuracy and take too long to be effective.
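The gap described above can be made concrete with a minimal triage sketch. The records below are invented for illustration (only CVE-2025-24054's 6.5 base score comes from this article), and the two rules are simplifications, not anyone's actual prioritization methodology.

```python
# Sketch: a fixed CVSS cutoff vs. a KEV-aware triage rule.
# All records are illustrative; only CVE-2025-24054's 6.5 score
# is taken from the article. The other CVE IDs are hypothetical.
vulns = [
    {"cve": "CVE-2025-24054", "cvss": 6.5, "known_exploited": True},
    {"cve": "CVE-2025-11111", "cvss": 9.1, "known_exploited": False},  # hypothetical
    {"cve": "CVE-2025-22222", "cvss": 5.0, "known_exploited": False},  # hypothetical
]

def by_cvss_cutoff(vulns: list[dict], threshold: float = 7.0) -> list[str]:
    """Naive triage: remediate only 'high severity' scores."""
    return [v["cve"] for v in vulns if v["cvss"] >= threshold]

def kev_aware(vulns: list[dict], threshold: float = 7.0) -> list[str]:
    """Evidence of real-world exploitation trumps the score."""
    return [v["cve"] for v in vulns
            if v["known_exploited"] or v["cvss"] >= threshold]

print(by_cvss_cutoff(vulns))  # the actively exploited 6.5 never makes the list
print(kev_aware(vulns))       # exploitation evidence pulls it in
```

Even the KEV-aware rule is still a bet placed after the fact: by the time exploitation evidence lands in a catalog, the one-day exploitation windows described earlier may already have closed.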

Concern about the use of AI for vulnerability discovery and exploit development has been growing, expanding upon a bitter lesson adversaries already understand: modern fuzzers can be scaled both vertically and horizontally, so discovery of new vulnerabilities is a simple function of available compute power. AI adds gasoline to this fire, making the process dramatically more efficient and more accessible, for skilled and unskilled attackers alike.

It bears repeating that adversaries don’t submit CVEs. So while many new vulnerabilities are discovered using modern techniques, they won’t become CVEs unless and until there’s evidence of exploitation or the same vulnerability gets reported by a security researcher.

Patches are not so durable

In June 2022, Google researcher Maddie Stone published a set of root cause analyses of zero-day exploits found in the wild. The research demonstrated that 50% of the zero-day exploits from the first half of that year were variants of previously identified and patched vulnerabilities; nearly one-quarter were variants of bugs patched just the previous year.

This suggests an interesting half-life for vulnerabilities, and although there's no comparable analysis for recent years, pairing it with publicly available exploit data reveals patterns of attackers revisiting the same attack surfaces time after time.

There’s no question that vulnerability scanning and patch management remain necessary, but they are clearly no longer sufficient and sit at or near a point of diminishing marginal returns. The numbers demonstrate that scanning and patching alone do not represent a viable path to breach prevention.

Moving forward, we must focus on both short- and long-term efforts to reduce the exploitability of our digital infrastructure, including more aggressive adoption of Secure-by-Design principles, as well as new and better approaches to runtime security.

Bob Tinker, co-founder and CEO, BlueRock Security

SC Media Perspectives columns are written by a trusted community of SC Media cybersecurity subject matter experts. Each contribution has a goal of bringing a unique voice to important cybersecurity topics. Content strives to be of the highest quality, objective and non-commercial.
