On September 8, 2025, one of the largest npm supply chain incidents in recent history unfolded. Popular libraries, such as debug and chalk, along with 16 other utilities, were hijacked and pushed to npm with malicious code... These packages collectively have billions of weekly downloads. Two weeks later, the ongoing npm supply chain incident, codenamed Shai-Hulud attack... [has] impacted no less than 500 npm packages... Once infected by Shai-Hulud, npm packages spawn attacks of their own by unknowingly allowing the worm to self-propagate through the packages they maintain.
Note: The compromised "chalk" package referenced above is the npm color library, not the Chalk project we maintain at chalkproject.io—completely different projects.
Just last month. Two separate attacks. Billions of weekly downloads compromised. A self-replicating worm spreading through the npm ecosystem. And this is with everyone running SCA tools.
We need to talk about Software Composition Analysis. Not because it doesn't work—it does—but because we've collectively convinced ourselves that if we just scan harder, alert better, and integrate deeper, we'll somehow get on top of the open source vulnerability problem.
We won't. We can't. The maths don't work.
I've been in this industry long enough to watch the same patterns repeat. In the early 2000s, we thought static analysis would solve AppSec. Then it was web application firewalls. Then it was developer training. Now it's SCA tools. Each time, the industry rushes to adopt the new solution, vendors make a fortune, and a few years later, we find ourselves standing in the same place, wondering why the problem is worse than before.
So let's be honest about what's actually happening with open source security, why SCA tools—despite being genuinely useful—can't fix it, and what we should actually be doing instead.
The Numbers (With a Massive Grain of Salt)
The numbers paint a concerning picture—though we should note that many of these statistics come from SCA vendors like Black Duck and Synopsys, who have a vested interest in highlighting the problem they're selling solutions for. That said, even accounting for potential FUD, the trend is undeniable.
According to Vulert, in 2024, over 29,000 vulnerabilities were identified across open source components—a 30% increase from the year prior. That's more than 80 new vulnerabilities discovered every single day. In npm alone, newly reported vulnerabilities rose nearly 19% year over year. According to Black Duck's 2025 OSSRA report, 86% of commercial codebases contain open source vulnerabilities, with 81% harboring high or critical-risk flaws.
Take these specific percentages with appropriate skepticism—they're from vendor audits of customers who likely sought them out due to security concerns. But even if the real numbers are half that, we're still talking about a massive, systemic problem. And the directional trend is clear: things are getting worse, not better.
Black Duck also reported that modern applications now average 911 open source dependencies, and the typical application in 2024 contained over 16,000 open source files, up from 5,300 in 2020. Again, vendor statistics should be viewed critically—they're measuring organizations concerned enough about security to commission audits. But the underlying reality is harder to dispute: applications have become vastly more complex, with each dependency representing a potential attack surface and connecting to its own web of transitive dependencies that organizations often don't even know exist.
This isn't just a temporary spike. It's a fundamental trajectory problem.
Why The SCA Market is Booming (And Why That Doesn't Help)
The SCA industry has responded predictably. According to market analysts at Straits Research (I'd never heard of them, but they're probably in the same mold as the Ponemon Institute: widely viewed as willing to say whatever you want for a fee, and carrying little credibility with the people I take seriously), the market is projected to grow from $394 million in 2025 to $1.68 billion by 2033.
Take that with appropriate skepticism, but the directional trend is real: vendors are adding AI-powered features, better detection, automated remediation, and integration at every stage of the development lifecycle. Investment is flowing in, features are multiplying, marketing is everywhere, and the problem keeps getting worse.
SCA Tools Aren’t Solving the Scaling Problem
Here's the uncomfortable truth: better tools won't fix a maths problem. Even if every organization deployed the best SCA tools tomorrow, they'd still face an impossible task. In npm alone, vulnerabilities jumped from 8,930 to 10,589 in a single year. Security teams simply cannot review, prioritize, and remediate vulnerabilities at the pace they're being discovered.
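To make the arithmetic concrete, with deliberately rough, illustrative numbers: 10,589 vulnerabilities in a year works out to roughly 29 new npm advisories every single day, weekends included. If triaging one finding takes even 20 minutes (confirm the affected versions, check whether you actually use them, decide whether it matters), that's close to ten person-hours a day spent on npm triage alone, before anyone writes a fix and before you count every other ecosystem you depend on.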
The situation becomes even more untenable when you consider vendor claims that 90% of open source components are already four or more years out of date. Whether that specific figure holds across the broader industry or represents a worst-case scenario, the pattern is clear: catching up isn't an option when you're this far behind, and the gap widens daily.
Now overlay generative AI. Development teams are shipping code faster than ever, with AI assistants churning out implementations that pull in even more dependencies. We're not just running to stand still; we're sprinting backwards while insisting we're making progress.
The False Promise of Comprehensive Coverage
The SCA vendor pitch is seductive: comprehensive visibility into your software supply chain will enable you to secure everything. They'll generate SBOMs, track every dependency, monitor for vulnerabilities 24/7, and alert you the moment new risks emerge.
It sounds perfect. It's also completely unsustainable.
Organizations deploying these tools quickly discover they're overwhelmed by alerts. When a significant portion of your codebase has high-severity vulnerabilities and you're receiving notifications for hundreds or thousands of potential issues, what do you do? Teams either become numb to the constant flood of warnings or spend all their time triaging and none of it actually fixing problems.
The research reveals this failure mode: codebases commonly contain components that are 10 or more versions out of date, with many including components that have had no development activity for over two years. While these statistics come from vendor studies with inherent biases, they align with what security practitioners see in the field. These aren't oversights; they're the inevitable result of teams being buried under an unmanageable workload.
Modern advances, such as reachability analysis from Semgrep, do indeed make a dent in this problem by filtering out vulnerabilities in code paths that aren't actually executed. This is genuine progress and significantly reduces false positives. But even with this state-of-the-art technology, the maths still don't work. You're still left with more validated, genuinely exploitable vulnerabilities than your team can possibly address. Better filtering just means you're drowning in a smaller but deeper pool.
We've seen this movie before. It's the same pattern that killed the ROI for SAST tools in the mid-2000s. Teams got buried under findings, couldn't keep up, and eventually just turned off the noise.
Back to First Principles: Risk Management
This is where we need to remember what security has always actually been about: managing risk, not eliminating it.
Classical information security risk management has always taught us to focus resources on what matters most. You identify your crown jewels, understand your threat model, and protect what's actually valuable. You accept that you can't secure everything, so you don't try.
Somehow, with the advent of SCA tools, we forgot this fundamental wisdom. We started believing that we could—and should—fix every vulnerability just because we can flip an easy button on the repo or in the CI/CD pipeline. The ease of deployment made us forget to ask whether we should deploy everywhere, for everything. We chased the fantasy of comprehensive security rather than effective security.
It's time to return to reality.
Scope-Limited Security: Caring About What Actually Matters
Instead of trying to boil the ocean, organizations need to radically narrow their focus to what genuinely matters. This means asking fundamentally different questions, which tools today can't answer, though this might be a great use case for AI in the near future.
For what it's worth, I've been working on a tool called OpenERM that's unapologetically vibe-coded and attempts to tackle some of these questions. It analyzes code to help determine risk, not just vulnerabilities, including data classification on structured data to identify what's actually handling PII and other sensitive information. Early days, but it's the kind of direction the industry needs to be heading.
What code actually handles sensitive data or critical business functions?
Most of an application doesn't. Identify the 20% of your codebase that represents 80% of your actual risk. Your payment processing? Customer authentication? These matter. Your internal developer wiki? Probably not so much.
Which vulnerabilities are actually reachable and exploitable in your specific implementation?
Modern SCA tools have started addressing this with reachability analysis, identifying whether vulnerable code paths are actually called in your application. This provides valuable code context and is genuine progress. However, modern applications and systems are built with APIs, cloud services, authentication layers, and network boundaries. Without the context of the entire system architecture, even a vulnerability an SCA tool marks as critical and "reachable" may never be reachable by an actual attacker.
Code-level reachability tells you if a function can be called, but it doesn't tell you if an attacker can actually get to it. Issues found only in code remain theoretical until you understand the context of authentication, network topology, API gateways, and all the other bits that make a cloud-native app actually work in production. Reachability analysis is a step forward, but without system context, you're still just guessing at actual risk.
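To make that concrete, here's a toy sketch in TypeScript. The package name, route, and middleware are all made up; the point is only that a scanner looking at this code would, correctly, call the vulnerable function reachable, while the deployment details that determine whether an attacker can ever reach it live entirely outside the code.

```typescript
// Hypothetical example: "legacy-xml" stands in for any dependency whose
// parser has a known critical flaw. None of these names are real.
import express, { type RequestHandler } from "express";
import { parse } from "legacy-xml";

// Stub for whatever SSO/session check the real service would use.
const requireAdminSession: RequestHandler = (req, res, next) => {
  if (!req.headers["x-admin-session"]) {
    res.status(401).end();
    return;
  }
  next();
};

const app = express();
app.use(express.text({ type: "application/xml" }));

// Code-level reachability: parse() really is called on request input, so a
// reachability-aware scanner is right to mark the finding "reachable".
app.post("/admin/import", requireAdminSession, (req, res) => {
  const doc = parse(req.body); // the flagged sink
  res.json({ imported: Boolean(doc) });
});

// System-level context the scanner never sees: the service binds to an
// internal address, sits behind a VPN and SSO, and serves a handful of
// admins. The identical finding on a public, unauthenticated endpoint
// would be a genuine emergency.
app.listen(8080, "10.0.12.5");
```

None of that context (the internal bind address, the VPN, the authentication) exists anywhere a dependency scanner can see it.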
What's your actual threat model?
A cross-site scripting vulnerability in an internal admin tool with ten authenticated users is not equivalent to one in your public-facing payment system with millions of users. Context matters more than CVSS scores.
What could actually hurt your business?
Not all "critical" vulnerabilities are equally critical to your organisation. A supply chain attack that could leak customer data deserves attention. A theoretical denial-of-service vector in a non-critical internal tool probably doesn't.
Which systems are actually exposed to attack?
Internet-facing services are in a different threat category than internal tools sitting behind a VPN that require authentication. Modern SCA tools have started to address this with reachability analysis, which helps identify whether vulnerable code paths are actually called in your application. This is a step in the right direction, but it's still focused on comprehensive analysis rather than helping teams scope what they care about. Even with reachability filtering, organizations still face thousands of findings across their entire portfolio.
This isn't about ignoring security. It's about practicing intelligent risk-based security instead of checkbox compliance security—what we’ve previously called moving from protection to observability.
What This Looks Like in Practice
What does scope limiting actually look like when you're not writing a blog post?
Start by classifying your applications and services by business criticality. Your payment processing system, customer data stores, and authentication services are in a different category than your internal wiki or staging environments. They deserve different levels of security attention and different security budgets.
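One lightweight way to make that classification real is to write it down somewhere machine-readable, so your scanners and dashboards can filter on it instead of treating every repo identically. A minimal sketch, with made-up tier names, services, and policy fields:

```typescript
// Illustrative only: the tiers, services, and policy values are invented.
// The point is that criticality is recorded explicitly and can drive tooling.
type Tier = "crown-jewel" | "standard" | "best-effort";

interface ServiceClassification {
  tier: Tier;
  internetFacing: boolean;
  handlesSensitiveData: boolean; // PII, payment data, credentials
  scaScope: "direct-and-transitive" | "direct-only";
  fixSlaDays: number | "accept"; // "accept" = documented risk acceptance
}

export const services: Record<string, ServiceClassification> = {
  "payments-api": {
    tier: "crown-jewel",
    internetFacing: true,
    handlesSensitiveData: true,
    scaScope: "direct-and-transitive",
    fixSlaDays: 7,
  },
  "internal-wiki": {
    tier: "best-effort",
    internetFacing: false,
    handlesSensitiveData: false,
    scaScope: "direct-only",
    fixSlaDays: "accept",
  },
};
```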
For critical systems, focus on:
- Direct dependencies you actively use, not the entire transitive dependency tree (a quick filtering sketch follows this list)
- Components that handle sensitive data or authentication
- Internet-facing surfaces versus internal-only functionality
- Vulnerabilities with actual exploit paths in your specific implementation
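As a rough sketch of the first bullet in practice, assuming the report shape npm 7+ produces from `npm audit --json` (where each finding carries `severity` and `isDirect` fields; adjust for your tooling), you can cut the audit output down to what a critical service's owners should actually look at:

```typescript
// Sketch: reduce `npm audit --json` output to high/critical findings in
// direct dependencies only. Field names assume the npm 7+ report format.
import { execFileSync } from "node:child_process";

interface AuditFinding {
  name: string;
  severity: "info" | "low" | "moderate" | "high" | "critical";
  isDirect: boolean;
}

let raw: string;
try {
  raw = execFileSync("npm", ["audit", "--json"], { encoding: "utf8" });
} catch (err) {
  // npm audit exits non-zero when it finds anything; the JSON is still on stdout.
  raw = (err as { stdout?: string }).stdout ?? "{}";
}

const report = JSON.parse(raw) as {
  vulnerabilities?: Record<string, AuditFinding>;
};

const actionable = Object.values(report.vulnerabilities ?? {}).filter(
  (f) => f.isDirect && (f.severity === "high" || f.severity === "critical")
);

for (const f of actionable) {
  console.log(`${f.severity.toUpperCase()}: ${f.name} (direct dependency)`);
}
console.log(`${actionable.length} findings worth a human's time`);
```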
For lower-priority systems, accept higher risk. Use older libraries if they're stable. Don't patch vulnerabilities that aren't exploitable in your context. Focus your limited security resources where they actually matter.
This requires saying no. It requires having conversations with executives about acceptable risk. It requires security teams to stop being order-takers who promise to fix everything and start being risk advisors who help the organisation make informed trade-offs.
This is uncomfortable. It's much easier to hide behind a policy that says "all critical vulnerabilities must be fixed within 30 days" than to have an adult conversation about where to allocate finite resources. But uncomfortable is better than ineffective.
GenAI is Making Everything Worse
Here's where things get really interesting. Generative AI is pouring petrol on this fire.
Development teams using AI coding assistants are producing code faster than ever, often without fully understanding what they're importing. These tools eagerly pull in dependencies to solve problems, frequently choosing whatever package seems to work without considering its security posture or maintenance status.
A developer using an AI assistant might generate a complete feature in minutes, pulling in five new dependencies in the process. Those dependencies bring their own dependencies, and suddenly your application has grown by dozens of components you've never audited. Multiply this by every developer on every team across the entire organisation, and you can see why the average application now has 16,000 open source files.
The SCA tools will dutifully flag every vulnerability in every one of those new dependencies. And your security team will fall even further behind.
The scope-limited approach becomes even more critical in an AI-accelerated development world. You simply cannot security-review everything that gets created. You need clear boundaries around what must be reviewed and what level of risk is acceptable for different system categories.
The Industry Must Evolve
I want to be clear: SCA tools are genuinely useful. They solve real problems. Modern tools with reachability analysis and better prioritization are improvements over the first generation. But the industry needs to evolve beyond the "scan everything, report everything" model.
The next generation of tools should help organizations practise intelligent scope limiting:
- Risk scoring that accounts for business context, not just theoretical CVSS scores (a toy sketch follows this list)
- Integration with asset management to understand what actually matters
- Code auditing capabilities that identify what technologies are being used, what data is being processed (PII, payment data, etc.), and other contextual factors that help map vulnerabilities to actual developer standards and business risk
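To be clear about what I mean by the first bullet, here's a deliberately crude toy: the weights are invented and a real tool would need far richer inputs, but it shows how the same CVSS 9.8 should land very differently depending on reachability, exposure, and the data behind it.

```typescript
// Toy context-aware scoring. The multipliers are invented for illustration;
// the point is that business and system context, not CVSS alone, should
// decide what pages a human.
interface FindingContext {
  cvss: number;                  // 0-10, from the advisory
  reachable: boolean;            // from code-level reachability analysis
  internetFacing: boolean;       // from asset inventory
  handlesSensitiveData: boolean; // from data classification
}

function contextualScore(f: FindingContext): number {
  let score = f.cvss;
  score *= f.reachable ? 1.0 : 0.2;      // unreachable code paths are mostly noise
  score *= f.internetFacing ? 1.5 : 0.5; // internal-only access cuts urgency
  score *= f.handlesSensitiveData ? 1.5 : 0.8;
  return Math.min(10, score);
}

// The same advisory, two very different answers:
console.log(contextualScore({ cvss: 9.8, reachable: true,  internetFacing: true,  handlesSensitiveData: true  })); // 10
console.log(contextualScore({ cvss: 9.8, reachable: false, internetFacing: false, handlesSensitiveData: false })); // ~0.8
```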
I've heard some vendors are talking about moving in this direction. But the industry overall is still selling the fantasy of comprehensive security rather than acknowledging the reality of constrained resources and effectively infinite vulnerabilities. That needs to change, and it won't until buyers stop asking for the fantasy.
To be honest, I'm not aware of any commercial tools that actually do this well today. I think this represents not just a new category of security tools, but a whole new category of software engineering tools—ones that understand the business context, system architecture, and actual risk profile of applications rather than just cataloging technical vulnerabilities. It's early days, but this is where the industry needs to head.
Choose Your Battles
The trajectory is clear: vulnerabilities are increasing, code production is accelerating with AI, and dependencies are multiplying. This trend won't reverse. Even with continued investment in SCA tools and security automation, organizations will never "catch up" and reach zero vulnerabilities across their entire software supply chain.
That's fine. We don't need to.
What we need is to return to the fundamental principles of security risk management. Identify what matters, focus resources there, and accept calculated risk elsewhere. The alternative—pretending we can and should secure everything—is a fantasy that's wasting massive resources while providing a false sense of security.
The organizations that will succeed in this environment aren't those with the most comprehensive SCA coverage. They're the ones that clearly understand their crown jewels, intelligently scope what they care about, and have the discipline to say no to security theatre.
It's time to stop pretending that more tools will solve a maths problem. It's time to care about less and actually protect what matters.
As always, you can keep up with our thoughts on software security by signing up for our newsletter.