This article is cross-posted here on LinkedIn for discussion and comments.
In the CSO interviews, I explained that we heard from CSOs that they want fewer tools, not more. I also explained that we heard they are drowning in noise and alerts. The two are related, but not entirely; budget, operational overhead and similar pressures play their part too. The ideal tool is one that enables them to retire other tools and reduce the workload of their over-burdened staff.
Yesterday I published an article, Why DevSecOps is better than appsec and why DevSecOps tools will never be enough, in which I make the case that the shift to the cloud, the explosion of languages, frameworks, SDKs, APIs and libraries, as well as the DevOps technology itself, has resulted in a significant gap in coverage, and that given the direction technology is heading, it is only going to get worse.
Those two things contradict each other, of course: people want fewer tools but need more tools to cover the attack surface. So how are we going to solve that problem?
I think the only approach that will scale is to aggressively scope the problem down, spending time only on what matters and being able to determine what that is, fast and accurately. Some may nod and say to themselves, 'we already do that', but I don't think the majority of teams do.
What I hear time and time again are stories of security teams scanning all the code in all their repos with SAST and focusing on the high-risk issues that come out. I also hear, often from the same people, that the vast majority of their repos are not in production and don't manage anything they consider high value. They just don't have the tools to scope the problem down systematically, and doing it by hand isn't realistic, so they approach every finding with FOMO.
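To make that scoping idea concrete, here is a minimal sketch of what it could look like, and to be clear, this is an illustration rather than any particular product: the repos.json and deployments.json inventory files, and their shapes, are assumptions made up for this example. The point is simply that a repo only goes into the SAST queue if something actually deployed depends on it.

```python
# A minimal sketch (illustrative only): cross-reference the org's repositories
# with a production deployment inventory, so SAST effort goes to code that is
# actually running rather than to every repo in the organisation.
# 'repos.json' and 'deployments.json' are hypothetical inputs made up for
# this example.

import json

with open("repos.json") as f:
    repos = json.load(f)  # e.g. [{"name": "billing-api"}, {"name": "hack-day-demo"}, ...]

with open("deployments.json") as f:
    # Repos that back a production deployment, e.g. [{"repo": "billing-api"}, ...]
    deployed = {d["repo"] for d in json.load(f)}

in_scope = [r["name"] for r in repos if r["name"] in deployed]
deferred = [r["name"] for r in repos if r["name"] not in deployed]

print(f"Scan now ({len(in_scope)} repos): {in_scope}")
print(f"Defer ({len(deferred)} repos): {deferred}")
```

Even something this crude would let a team park most of the FOMO pile and spend the hours saved on the code that genuinely matters.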
It's the same with SCA. If I had a dollar for every story of someone chasing down Log4j alerts to work out whether the vulnerable code was actually in production, I wouldn't be writing this article. I would be riding my bike somewhere warm.
What I hear from people who use DAST tools is that they don't know where all their endpoints are, let alone which ones they should be focused on, so they speculatively scan IP address ranges and entire domains. The result is that you have to filter out a load of noise before you can even start.
What I heard over dinner on Monday, from a long-time advocate of threat modelling, is that the biggest challenge companies have when trying to scale their threat modelling programs is knowing what to model. They try to build models for all their apps first so that they can then decide which ones are important. It's a chicken-and-egg situation.
Until we, as an industry, can solve this problem, I think there is little point in investing in more tools. More tools to cover gaps we know about will only result in more noise to process and create a vicious cycle we may never get out of.