The AppSec letter bomb problem



In 2012 I helped lead the investigation and recovery of a hack that had taken an entire oil company offline in the Middle East. We were first on site, initially meeting almost daily with the CEO, and stayed until they were back up and running. The story has since been written up in the mainstream press as The inside story of the biggest hack in history, quoting information from a consultant we never met, but it's mostly accurate.

What was interesting about Shamoon was that it used a digitally signed driver. Quoting an article in Dark Reading:

The most sophisticated part of the code: how the attackers employed a legitimate, digitally signed driver inside. "The only part of the system that can be mentioned is that they used a benign third-party driver -- signed -- to overwrite the files on the systems," Blasco says. That driver was signed by EldoS Corp., which provides security-related software components for software developers and the corporate market.

One lesson I took away from those weeks in Saudi Arabia was that code signing is useful, but only when used in conjunction with other security controls.

This is not new. Bruce Schneier has written about it on his blog and in his book Secrets and Lies. Stuxnet, which targeted Iran's nuclear program, used the same technique. Two oil-producing nations both fell prey to the same technique at times when the price of oil was volatile and production levels dictated prices. Interesting. An academic paper published in 2018, Certified Malware: Measuring Breaches of Trust in the Windows Code-Signing PKI, explains that this is not uncommon.

There is renewed interest in code signing open source packages and build artefacts in CI/CD pipelines, with projects like Sigstore and SLSA. I love both projects and think the teams behind them are brilliant, but it is worth remembering that code signing is only intended to authenticate the entity that produced an artefact and to prove that it has not been altered since it was signed.

Code signing provides nothing towards solving the malware problem; in fact it can detract from it. An unsigned driver would never have been loaded by the Windows kernel, so signing is precisely what let Shamoon deploy its payload.

I have recently heard a number of people claim that code signing will help, or even prevent, malware from entering and executing in their CI/CD pipelines. It won't.

Garbage in, garbage out. Digitally signed garbage in, digitally signed garbage out. 
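A toy sketch makes the point concrete. The pipeline, key name, and payload below are all hypothetical, and HMAC stands in for a real asymmetric code-signing scheme purely for illustration: the signer attests to who produced the bytes and that they haven't changed, but never inspects what the bytes do.

```python
import hmac
import hashlib

# Hypothetical pipeline signing key -- a stand-in for a real
# code-signing certificate's private key.
SIGNING_KEY = b"pipeline-signing-key"

def sign(artifact: bytes) -> bytes:
    """'Sign' an artifact. Note: no inspection of the content at all."""
    return hmac.new(SIGNING_KEY, artifact, hashlib.sha256).digest()

def verify(artifact: bytes, signature: bytes) -> bool:
    """Verify origin and integrity -- and nothing else."""
    return hmac.compare_digest(sign(artifact), signature)

# A malicious payload signs just as cleanly as a legitimate build.
malware = b"rm -rf / --no-preserve-root"
signature = sign(malware)

assert verify(malware, signature)            # signature checks out
assert not verify(malware + b"x", signature)  # only tampering is caught
```

The verifier happily accepts the malicious artifact because the signature only answers "who signed this, and has it changed since?", never "is this safe to run?".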

If we are to solve the malware problem in open source dependencies then we need policies and build systems like Apple's Gatekeeper. Apple reviews each app submitted to the App Store before it's accepted and signs it to ensure that it hasn't been tampered with or altered. Google has a similar policy and system.

Of course this approach is not infallible either. There is no such thing as zero risk and all that jazz. Malware checking and code signing combined is the best we have today, and projects like the Package Analysis Initiative from the OpenSSF are promising.

I can count on one hand the companies I know that actually check the code of dependencies entering their supply chain before signing them. I can count on one finger the companies that check new releases of all dependencies, i.e. a full code analysis of all the open source they use. Most companies just sign packages. That's certainly better than nothing, but it's not protection from open source dependency malware.

Bruce Schneier once described signed Windows DLLs as a letter bomb problem.

Knowing who sent you a letter bomb is not top of your mind when it has just gone off.