THE SECURING OPEN SOURCE SOFTWARE ACT OF 2022 IS NOT WELL THOUGHT OUT
Unless you have been living under a rock for the last few weeks, every security news outlet, and Twitter in particular, has been a firehose of posts about SBOMs. The frenzy was accelerated by President Biden's Executive Order 14028, and accelerated further last week by S.4913, the Securing Open Source Software Act of 2022.
In my opinion it is simply too early to declare that we have turned the corner on securing open source software.
Software composition analysis (SCA) works by taking the list of open source dependencies that are present after a build. It is important to acknowledge that the build took place in a specific environment and at a particular point in time. SCA then looks up known vulnerabilities in the versions of the dependencies that were present, producing a list that the user, or increasingly automated software, can act on to remediate the vulnerabilities found. That list of dependencies, together with metadata such as their known vulnerabilities and their licenses, can be written in an exchangeable format: a software bill of materials, or SBOM.
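The lookup step described above can be sketched in a few lines. This is a toy illustration, not any real SCA tool: the package names, versions, and CVE identifiers are invented, and a real scanner matches version ranges rather than exact pairs.

```python
# Minimal sketch of the SCA lookup step: the dependency list resolved by one
# particular build is matched against a database of known vulnerabilities.
# All names, versions, and CVE identifiers below are illustrative only.

# Dependencies as resolved by one specific build (name -> version).
resolved_dependencies = {
    "libfoo": "1.2.0",
    "libbar": "3.4.1",
}

# Hypothetical known-vulnerability database keyed by (name, version).
vulnerability_db = {
    ("libfoo", "1.2.0"): ["CVE-2022-0001"],
    ("libbar", "2.9.0"): ["CVE-2021-9999"],  # older version, not in this build
}

def scan(dependencies, db):
    """Return a report mapping each vulnerable dependency to its known CVEs."""
    return {
        name: db[(name, version)]
        for name, version in dependencies.items()
        if (name, version) in db
    }

report = scan(resolved_dependencies, vulnerability_db)
```

The report here flags only `libfoo`, because the build resolved a `libbar` version with no known issues; the whole exercise hinges on exactly which versions were present after that one build.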
There are important things to acknowledge here.
The first is that almost all package managers are non-deterministic: they resolve dependencies by solving for all of the input conditions together, not each condition individually. Beyond the dependencies specified in the build definition file, those input conditions include dependencies already present in the host's cache, the current versions of transitive dependencies (dependencies of dependencies), and even the version of the runtime on the host machine. With so many options and optimisations in play, the dependency resolution process, and the package managers that perform it, are very complex indeed.
A practical implication of the dependency resolution process is that unless you build a project on two identical hosts, it is unlikely you will get the same output. That means it is unlikely you will get the same SBOMs, and it is entirely possible you will get two different lists of vulnerabilities. This is clearly a problem. We cannot have a situation where the same source code, compared against the same vulnerability database, can result in two different sets of vulnerabilities.
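The divergence can be made concrete with a toy example. Everything here is invented for illustration: imagine one host's package cache holds on to an older transitive dependency while a fresh host resolves a newer one, and both dependency lists are scanned against the same database.

```python
# Two builds of the same source on different hosts can resolve different
# transitive dependency versions, so scanning each against the SAME
# vulnerability database yields different findings.
# All names and versions are illustrative.

build_host_a = {"libfoo": "1.2.0", "libbaz": "0.9.0"}  # stale cache kept 0.9.0
build_host_b = {"libfoo": "1.2.0", "libbaz": "1.0.0"}  # fresh resolve got 1.0.0

vulnerability_db = {("libbaz", "0.9.0"): ["CVE-2022-1234"]}

def scan(dependencies, db):
    """Match an exact (name, version) dependency list against the database."""
    return {n: db[(n, v)] for n, v in dependencies.items() if (n, v) in db}

findings_a = scan(build_host_a, vulnerability_db)  # flags libbaz 0.9.0
findings_b = scan(build_host_b, vulnerability_db)  # finds nothing
```

Same source, same database, two different answers, depending only on the state of the host that happened to produce the SBOM.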
What this means is that when you are presented with an SBOM stating that a piece of open source software has a clean bill of health, what you are actually getting is a statement that the software showed a clean bill of health when the SBOM was generated in one particular way. What it doesn't tell you is how that clean bill of health was determined.
Presenting a clean bill of medical health without knowing the tests, the labs that performed them, the conditions under which those tests were carried out, and so on, is not something you can rely on. A sick note from the school nurse is useful, but probably not if you hand it to an A&E trauma team.
SBOMs today are, in my opinion, useful for software producers who want to do the right thing and not knowingly ship code with known vulnerable dependencies, but they are more useful for consumers checking the code they are using for known vulnerable dependencies.
They are not an effective attestation mechanism for what is being shipped between a producer and a consumer, which is the use case being pushed by the Securing Open Source Software Act.
A possible solution to this problem is for an SBOM to capture the information that defines the build environment in a way that the consumer can both understand and replicate. CycloneDX has recognised this problem and is working on a solution, but it does not exist today.
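As a sketch of the idea, the record below captures the kind of environment facts a consumer would need to replicate a resolution. The field names are entirely illustrative; they do not correspond to the CycloneDX schema or to any finalised proposal.

```python
# Sketch of recording the build environment alongside the dependency list,
# so a consumer could attempt to replicate the resolution conditions.
# Field names are illustrative, not any real SBOM schema.
import platform
import sys

build_environment = {
    # the host conditions that influenced dependency resolution
    "os": platform.system(),
    "runtime": f"python-{sys.version_info.major}.{sys.version_info.minor}",
    # when the resolution happened (versions published later were invisible)
    "resolved_at": "2022-09-30T12:00:00Z",
    # a digest of the fully pinned resolution, so a replica can be verified
    "lockfile_hash": "sha256:...",  # placeholder digest
}
```

Only with something like this attached does "this SBOM showed a clean bill of health" become a claim another party can check by rebuilding under the same conditions.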
A second problem lies, again, in the fact that the primary use case being pushed is attestation. Procurement is a static event; open source consumption is dynamic. Dependencies are published all the time, updated all the time, builds happen all the time, and deploys happen all the time. What was originally shipped to a consumer as part of a government software procurement process is almost certainly not what is being deployed, or at least you would hope not. This is not a new security problem. When systems are certified against the Common Criteria, the process takes a long time, and the versions of that software subsequently deployed after hot-fixes are unlikely to be the same versions that were the subject of the attestation.
Once again, SBOMs today are useful for producers who want to do the right thing and not knowingly ship code with known vulnerable dependencies, but they are more useful for consumers checking the code they are using for known vulnerable dependencies at the time they are using it.
Solutions to this problem are running SCA combined with automatic updates at every build, and hot patching in production. What is not a solution is dependency pinning, a technique gaining widespread adoption but very dangerous for security in the real world, where updating dependencies is referred to as dependency hell. I plan to write about this in the coming weeks.
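The pinning risk can be shown with a toy resolver. Everything here is simplified and illustrative: a patched release is published, but a strict pin keeps selecting the vulnerable version at every build, while an unpinned resolution picks the patch up automatically.

```python
# Toy illustration of why strict pinning can delay security fixes.
# Versions and the "constraint" syntax are invented for this sketch.

available = ["2.25.0", "2.25.1"]   # 2.25.1 is the patched release
vulnerable = {"2.25.0"}            # known-bad versions

def resolve(constraint):
    """Pick a version: honour an exact pin, otherwise take the latest."""
    if constraint.startswith("=="):
        wanted = constraint[2:]
        return wanted if wanted in available else None
    # treat anything else as "latest available" for this sketch
    return max(available, key=lambda v: [int(x) for x in v.split(".")])

pinned = resolve("==2.25.0")   # the pin still selects the vulnerable release
ranged = resolve("latest")     # an open constraint picks up the patch
```

A pinned build stays reproducible, but it also stays vulnerable until a human intervenes; combined with SCA and automatic updates, an open constraint remediates at the next build.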