DISPELLING APPSEC URBAN MYTHS
After a decade away, I have found my public speaking voice again. My 2023 topic is to dispel urban myths about application security.
This article is cross-posted to LinkedIn here for discussion and comments.
Here is one such myth and a few inconvenient truths.
If you look at the recent surveys from HackerOne (you have to register) and Bugcrowd about the types of bugs found by bug bounties in 2022, you will see that the overall state of application security has hardly changed in a decade. It's the same old vulnerabilities being found over and over again, and the number of vulnerabilities is getting worse.
Financial services companies on Bugcrowd’s platform experienced a 185% increase in the last 12 months for Priority One (P1) submissions, which refer to the most critical vulnerabilities.
The OWASP Top Ten has hardly changed in the last decade, and the 'best practices' that appsec teams adopt seem to have hardly changed either: run SAST, DAST and SCA tools, train developers, do threat modeling, and so on down a familiar list of other "best practices".
We have been road testing the people, processes and technology of appsec for over two decades at this point, so how come the industry at large is still doing the same things that are clearly not working?
Maybe it's confirmation bias? You teach people what the key issues are, and they go off and find them. I think this is likely, the implication being that attackers continue to stay ahead of defenders.
Maybe it's that people simply don't correlate effort and results, so they keep putting effort into things that appear to be sensible but don't work? I think this is likely.
Maybe it's continued unsubstantiated marketing from tool vendors? Almost definitely.
Maybe it’s peer pressure and wanting to fit in? I think this is likely.
Maybe it’s ineffective or incomplete implementations of best practices? Maybe.
It is unlikely that the impotence of application security as a discipline can be attributed to any one thing, but it is likely that a collection of these things rolls up into a ball of appsec urban myths.
I think shift-left is one such myth.
The premise of shift-left is twofold. The first is that the cost to fix a security issue is far cheaper while developing software than after it has been built, and therefore as much security as possible should be done up front by the developers themselves, before or as they create code. The second is that a gram of prevention is better than a kilogram of cure.
Said another way, companies should spend their limited cycles on threat modeling, training developers to write secure code, and running security assessment tools as code is created. There is no doubt in my mind that all of these techniques improve security, but shifting your efforts (shift means to move, or cause to move, from one place to another) to the left with the goal of better and cheaper security is likely a myth.
We have all seen some variation of the phrase 'it's 100x cheaper to fix bugs in development than it is in production'. It's usually seen as a marketing tagline, often next to ROI calculators. Some examples are DeepSource, Synopsys, IBM, Perforce, SmartBear (the irony in that name), WhiteHat Security and Cigital, GrammaTech, Security Boulevard, and there are instances from NIST. The 100x statistic varies from 5x to several hundred x, but the multiplicative mathematics and the promise are the same.
Some of this stems back to a book, Applied Software Measurement, by Capers Jones, published in 1997.
The research by Capers Jones found that a bug costs $16,000 to address post-release, but only $25 when found at the design phase. That means valuable QA budget is being spent on fixing bugs that could have been solved for much less, and earlier in the release cycle.
This urban myth comes from a so-called IBM Systems Sciences Institute report that, guess what, didn't even exist. Laurent Bossavit wrote up his analysis, Degrees of Dishonesty, posted as a Gist on GitHub, which was later covered by The Register in 2021. Yes, El Reg. Here is a quote from the Register article.
Bossavit took the time to investigate the existence of the IBM Systems Science Institute, concluding that it was "an internal training program for employees." No data was available to support the figures in the chart, which shows a neat 100x the cost of fixing a bug once software is in maintenance. "The original project data, if any exist, are not more recent than 1981, and probably older; and could be as old as 1967," said Bossavit, who also described "wanting to crawl into a hole when I encounter bullshit masquerading as empirical support for a claim, such as 'defects cost more to fix the later you fix them'."
That's right: the evidence behind the appsec folklore that shift-left makes sense simply doesn't exist.
Bossavit's excellent ebook, The Leprechauns of Software Engineering: How folklore turns into fact and what to do about it, dispels many other similar myths and is highly recommended.
I am sure you can find ‘pay to play’ research by firms that specialize in saying what you want them to say in exchange for money. I never cease to be amazed at the bullshit the Ponemon Institute (pokemon?) chucks out, not unlike the load of nonsense used by shampoo advertisers all the time. They even set their own responsible information practices so you know you can trust them, just like some ‘private’ universities in the US. Fake diploma sites even advertise that they can be trusted while their competitors can’t. Some days I am embarrassed to be part of the security industry.
If I were running an appsec program, which I am not, and which I haven't done for well over a decade, I would apply people, process and tools to the right, to the left and to the middle. I would place my chips where they are needed.
My version of 'on the left' would be something like:
Make ‘anything and everything security’ available to those that want it.
Only focus on things that matter: risky apps, apps that are in production, and apps that are important to my business. Scanning all my repos is a waste of time and generates noise and confusion. Do things like threat modeling on these.
Lock down and configure developer tools to use secure defaults: protected branches, standard tests in PRs, etc.
Train developers about the impact of insecure software on the company, probably a simple lunch and learn once a year. They read the news and are intelligent, for fuck's sake.
Train developers JIT (Just In Time) about specific issues when they are found.
Eradicate classes of vulns by enforcing frameworks.
Set up a crack team of first responders.
Create “office hours” and “phone a friend” for live expert help.
Tag all the code so you know who owns it, what it does and what it is for.
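To make "eradicate classes of vulns by enforcing frameworks" concrete, here is a minimal sketch in Python using SQL injection as the example class. The point is not to find each injection bug one at a time, but to mandate the framework pattern (parameterized queries) so the whole class disappears. The table and function names are illustrative, not from any particular codebase.

```python
import sqlite3

# In-memory database, purely for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_user_unsafe(name):
    # The vulnerability class: string-built SQL. An input like
    # "' OR '1'='1" rewrites the query's meaning.
    return conn.execute(
        "SELECT role FROM users WHERE name = '%s'" % name
    ).fetchall()

def find_user_safe(name):
    # The framework-enforced pattern: placeholders. The driver treats
    # the input strictly as data, so the injection class is eradicated.
    return conn.execute(
        "SELECT role FROM users WHERE name = ?", (name,)
    ).fetchall()

payload = "' OR '1'='1"
print(find_user_unsafe(payload))  # [('admin',)] - leaks every row
print(find_user_safe(payload))    # [] - no user has that literal name
```

Enforcement is the interesting part: ban the unsafe pattern in code review or with a lint rule, and developers never need training on the hundred variants of the bug.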
My version of 'in the middle' would be something like:
Lock down my CI/CD system including using something like SLSA.
Run SAST and SCA.
Use custom SAST to look for “hot spots”, sensitive code changes.
Use custom SAST to understand the architecture, data flows and any changes to it.
Build a fast security feedback loop into the development process.
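A custom "hot spot" check of the kind described above doesn't require a commercial engine; it can be as small as an AST walk over changed files that flags calls your team considers sensitive. A toy sketch in Python, where the flagged call names are illustrative assumptions rather than a recommended policy:

```python
import ast

# Illustrative set of calls worth a closer look in review;
# a real policy would be tuned to the codebase.
SENSITIVE_CALLS = {"eval", "exec", "pickle.loads", "subprocess.Popen"}

def call_name(node):
    """Return a dotted name for a call node, e.g. 'subprocess.Popen'."""
    func = node.func
    if isinstance(func, ast.Name):
        return func.id
    if isinstance(func, ast.Attribute) and isinstance(func.value, ast.Name):
        return f"{func.value.id}.{func.attr}"
    return None

def hot_spots(source):
    """Yield (line, name) for each sensitive call found in the source."""
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call):
            name = call_name(node)
            if name in SENSITIVE_CALLS:
                yield node.lineno, name

snippet = "import subprocess\nsubprocess.Popen(cmd)\nresult = eval(user_input)\n"
print(list(hot_spots(snippet)))  # [(2, 'subprocess.Popen'), (3, 'eval')]
```

Wire something like this into the PR pipeline and you get the fast feedback loop for free: a sensitive change pings a reviewer within minutes, not at the end of a quarterly scan.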
My version of 'on the right' would be something like:
Know what I have in production, know what it does, know how it works and know who is responsible for it. Most people do not know this today.
Know how my applications and their associated infrastructure (cloud, online services etc) are configured, what they do and who is responsible for them.
Invest in production observability tools.
Invest in protection tools.
We should be "on the right, in the middle and on the left", and I think shift-left is a dangerous urban myth.