WHERE ARE THE EMPIRICAL STUDIES?
As always, you can subscribe to our newsletter at www.crashoverride.com. It’s just like hitting the like button on a YouTube video. This article is cross-posted on LinkedIn for comments and discussion.
If you look at the 2022 annual reports from the bug bounty companies BugCrowd and HackerOne, and you consider that the OWASP Top Ten has hardly changed in the last decade, you might be wondering, like me, why the same old appsec issues are still a thing in 2023. I am not going to claim I have a good thesis, let alone a strong opinion, but I do have a lot of questions.
Maybe developer training doesn't actually work? People have been doing it for years, and many companies spend millions a year on it. Logic says that training people about the issues should help them avoid those issues, but perhaps the amount of time you can reasonably expect a developer to spend in training doesn't have the impact we need or expect. Perhaps the sheer variety of potential issues means that “top ten” style training doesn’t work? Then again, logic would also say that if the training worked, you wouldn't expect the same old issues to show up time and time again. Maybe generic training about specific issues, which then expects a developer to translate that knowledge to the technology they work on day-to-day, doesn’t work? Expecting a Node developer to go from learning about an issue like XSS to creating robust defences in their app may be too hard.
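To make that last point concrete, here is a minimal sketch of why "just escape your output" training doesn't translate cleanly into a robust defence. The `escapeHtml` helper below is a made-up textbook example, not from any real library; the point is that escaping that is correct in one HTML context does nothing in another.

```javascript
// Hypothetical "textbook" HTML-escaping helper, the kind a developer
// might write straight after an XSS training session.
function escapeHtml(s) {
  return s.replace(/[&<>"']/g, (c) => (
    { "&": "&amp;", "<": "&lt;", ">": "&gt;", '"': "&quot;", "'": "&#39;" }[c]
  ));
}

// Works in an HTML body context: the tags are neutralised.
const comment = "<script>alert(1)</script>";
console.log(escapeHtml(comment)); // &lt;script&gt;alert(1)&lt;/script&gt;

// But in a URL attribute context the same escaping achieves nothing:
// none of the characters in a javascript: URL need HTML escaping.
const link = "javascript:alert(1)";
console.log(`<a href="${escapeHtml(link)}">profile</a>`);
// the javascript: scheme survives untouched, so the XSS remains
```

The training taught "escape user input", the developer did exactly that, and the app is still vulnerable, because the defence has to be chosen per output context, not per lesson.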
Maybe the explosion of tech is the reason we can’t keep up? Back in the old days you controlled the architecture, the way connectivity and integration happened, and lots more. Today you are more likely to have a front-end built on one stack, a set of backend services built on another, and, even more likely, services you didn't write yourself. It seems unreasonable to expect a developer to know how to secure “all the things”, and equally unreasonable to expect a security professional to know how to secure “all the things”.
When I was at MSFT, I owned one of the threat modelling tools and was involved in writing Improving Web Application Security: Threats and Countermeasures. I can’t believe you can still download that book as a PDF from Microsoft for free. I was a big proponent of threat modelling while at Microsoft, and used to spend afternoons in JD Meier’s office, sketching out ways to do it better and simpler on a whiteboard. I then became a detractor, thinking it didn't really fit into automation, and am now a fan again. Well, sort of. Tooling is no doubt way better than it was, but my continued skepticism is that you can’t automate threat modelling in a CI/CD environment, and in a DevSecOps world that doesn't scale.
A good friend told me the other day that he is responsible for securing 6,300 GitHub repos. His team can’t possibly build models for all of those in order to decide what's important. He currently scans all 6,300 with SAST tools, by the way.
I also can’t get my head around the fact that most people don’t know what code is running in production, or where it is, let alone have a good understanding of how it interacts with their clouds and data. I suspect a lot of people are still in “spray and pray” mode, and so the spray part is missing important things.
Maybe we have created a self-fulfilling prophecy? If we teach consultants that a specific set of issues are the most important, and how to find them, then it stands to reason that as an industry we get better and better at finding them. Teach a man to fish and all that. This is the only thing I have strong conviction on.
Maybe the data is biased because tools are so good at finding the same issues? I don't think there is a tools vendor that doesn't find the OWASP Top Ten (yes, raised eyebrows) and doesn't benchmark itself against competitors doing the same. Tools have definitely gotten better over the years.
Maybe it's that people rely too heavily on frameworks? We assume they take care of whole classes of issues, but in reality they don’t, or at least not in the ways that security researchers think.
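The gap is usually the escape hatch. As a sketch of the pattern, here is a toy default-escaping template renderer; `render` and `raw` are invented for illustration, but most real frameworks ship an equivalent opt-out, and one call to it bypasses the whole class of protection the framework is assumed to provide.

```javascript
// Minimal HTML escaper used by the toy renderer below.
function escapeHtml(s) {
  return s.replace(/[&<>"']/g, (c) => (
    { "&": "&amp;", "<": "&lt;", ">": "&gt;", '"': "&quot;", "'": "&#39;" }[c]
  ));
}

// Toy tagged-template renderer: escapes every interpolated value by
// default, mimicking what "secure by default" frameworks do.
function render(strings, ...values) {
  return strings.reduce((out, str, i) => {
    const v = values[i - 1];
    return out + (v && v.__raw ? v.value : escapeHtml(String(v))) + str;
  });
}

// The escape hatch: marks a value as "trusted", skipping escaping.
const raw = (value) => ({ __raw: true, value });

const user = "<img src=x onerror=alert(1)>";
console.log(render`<p>${user}</p>`);      // safe: value is escaped
console.log(render`<p>${raw(user)}</p>`); // unsafe: one call opts out
```

The framework did cover the issue class; the developer stepped outside it, often because some legitimate feature (rich text, embedded widgets) seemed to require it.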
Maybe it's not even getting worse, and the bug bounty reports are simply biased towards selling more bug bounties?
Maybe it's something or somethings else entirely?
What I do know is that as an industry we seem to keep doing the same things over and over again, and have been doing so for decades, without good studies of cause and effect. We still believe myths like shift-left that aren't grounded in truth.
Sure, there are studies that show specific tools or services are the best bang for the buck, before suddenly, as if by magic, the shopkeeper appears. I don't think anyone takes them any more seriously than a vendor ROI calculator. As an industry, it feels to me that we keep doing the same thing over and over again, for no real reason apart from it being what everyone else does and what we think works. In the 19th century, people thought transfusing milk into the blood was a good idea and that cigarettes were a cure for asthma.
If we don’t change, then we have to expect that the state of insecure software won’t change.
Maybe it just doesn't matter. Things get worse, everyone keeps busy, everyone keeps making money and the world keeps turning. Maybe.