
The Security Industry Needs More "Easy Buttons"


Today's security spend: The bare minimum or an upper bound?


Background

Computer security is a topic that is both wide and deep.

We don't expect security experts to deeply master every sub-field. For instance, most people well versed in web app security might be expected to have some familiarity with cryptography, but few people are deep in both, and it's likely that nobody is deep in all areas of security.

We cannot realistically expect an expert to master every nook and cranny of security. But we can expect that experts will generally follow industry first principles when their work wanders outside their subfields. For instance, we would expect someone not deep in cryptography to either play within the bounds of well-vetted, well-documented abstractions, or to consult someone who is an expert in the subfield.

Certainly, the average software engineer is unlikely to ever know much about security. In some cases, developers will lack the interest. But even where there is interest, other concerns driven by their employers' missions (which will rarely be steeped in security) will limit their ability to become deep experts.

We cannot reasonably expect most developers will become experts. And certainly, there is some finite amount of bandwidth developers will have to deal with security considerations.

These considerations are often captured in several key industry questions:

  1. How much developer bandwidth can we reasonably commit to security, before the economic impact is too high?

  2. How can we incentivize engineers and their employers to maximize their investment in secure-by-design practices?

  3. How can we push for accountability in following best practices?

  4. How can we make the effort that development orgs put into security as effective as possible, given the available resourcing?

Recently, the security industry has focused primarily on recommending activities that significantly ramp up developer time spent on security while, at the same time, pushing for accountability, whether via legal liability or via compliance frameworks that can be demanded during a procurement process.

The work our industry has been pushing onto developers has often seemed, to people outside the industry, like busy-work with no direct benefit to their organizations.

The industry pays far less attention to the views of developers outside of it. If we demand accountability while pushing too much work onto them, the result could be economically catastrophic, as it could greatly disincentivize software development in the United States.

Despite optimistic projections, economists at the recruitment site Indeed recently provided data indicating that software development job postings have fallen by roughly two-thirds relative to two years prior, including a drop of more than 50% in a single year, between November 2022 and November 2023. This was a far sharper drop than in other fields, which typically saw a 15% decline over that same period.

Certainly there are other factors at play, but innovation, particularly in software, has been a key economic driver for the United States over the past few decades. We must be particularly wary of adding further disincentives while the sector is clearly struggling, lest we cede our world leadership in innovation.

Example: SBOMs

Today, the security industry is pushing development organizations to implement Software Bills of Materials (SBOMs), and hopes to drive adoption by incorporating them into compliance mandates.

From the security industry's perspective, the benefits are obvious. However, the typical development organization has an entirely different perspective:

  • The direct benefit to their organizations (quickly knowing when they have vulnerable third-party software) is one they already receive from commercial vendors, such as Software Composition Analysis products, or even from their source control platform (e.g., GitHub's Dependabot).

  • They see the work as benefiting only a few large technology companies, which are cash-rich and generally enjoy the benefits of one or more effective monopolies. In practice, very few companies will take in SBOMs from other vendors and use them as a significant force for driving down risk.

  • The companies that do have the resources to pursue such programs are often in high-risk sectors, like finance and government, and this is sometimes seen as an unfair offloading of their own cost of doing business.

  • More importantly, there is a practical fear of unreasonable indirect costs associated with adopting SBOMs.

  • Additionally, the technology ecosystem is not sufficiently developed to make adoption straightforward or to keep costs low.

To further complicate matters, existing SBOM tools have high levels of false positives, for lots of practical reasons.

One reason for false positives is that tools tend to scan the file system at some point during the build, and will report on what they see. That's rarely going to be an accurate representation of shipped artifacts. For instance, it's very common for something that is present during the scan to be used only as part of the build process, without ending up in the final artifact.
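
To make the gap concrete, here is a minimal Python sketch of the kind of filtering an SBOM tool would need: keeping only components whose files actually land in the shipped artifact. All names and fields here are hypothetical for illustration; real SBOM formats (such as CycloneDX and SPDX) and real build systems are far messier.

```python
# Sketch: filter components seen during a build-time filesystem scan
# down to what actually ships. Field names ("name", "version", "paths")
# are hypothetical, not any real SBOM schema.

def filter_shipped(components, shipped_paths):
    """Keep only components with at least one file in the final artifact."""
    shipped = set(shipped_paths)
    return [
        c for c in components
        if any(p in shipped for p in c.get("paths", []))
    ]

# Components present on the filesystem during the build...
build_scan = [
    {"name": "libfoo", "version": "1.2", "paths": ["/app/lib/libfoo.so"]},
    {"name": "gcc",    "version": "12",  "paths": ["/usr/bin/gcc"]},  # build-only
]

# ...versus files actually present in the shipped artifact.
artifact_files = ["/app/bin/server", "/app/lib/libfoo.so"]

print([c["name"] for c in filter_shipped(build_scan, artifact_files)])
# -> ['libfoo']  (the build-only compiler is excluded from the report)
```

Even this toy version hints at the hard part: something has to reliably know which files end up in the artifact, which is exactly the context most scanners lack.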

Another reason for SBOM false positives is that companies often have practical controls that mitigate the risks that do show up. Those controls are generally operational, which puts them outside the purview of the SBOM tools collecting data.

Today, SBOMs for many applications are incredibly large. If vendors distribute those artifacts downstream, they will also need to bear the support costs of dealing with questions. Those questions will often be answered with information about compensating controls, but nothing in today's SBOM ecosystem helps automate that process; it requires human-to-human contact.
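
The missing automation could take the form of machine-readable statements about compensating controls, in the spirit of VEX (Vulnerability Exploitability eXchange) documents. The Python sketch below uses entirely hypothetical field names and a made-up CVE identifier, not any real schema:

```python
# Toy sketch: attach machine-readable "not affected" statements to SBOM
# components so downstream "are you affected?" questions can be answered
# automatically. All fields and the CVE id are hypothetical.

vex_statements = [
    {"component": "libfoo", "cve": "CVE-0000-0001",
     "status": "not_affected",
     "justification": "vulnerable code path not compiled in"},
]

def triage(component, cve, statements):
    """Answer a downstream exposure question from published statements."""
    for s in statements:
        if s["component"] == component and s["cve"] == cve:
            return s["status"], s["justification"]
    return "under_investigation", None

print(triage("libfoo", "CVE-0000-0001", vex_statements))
# -> ('not_affected', 'vulnerable code path not compiled in')
```

The point is not the lookup, which is trivial, but that today nothing obliges vendors to publish such statements alongside SBOMs, so the answers travel by email instead.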

Beyond the costs associated with noise, the SBOM ecosystem is filled with practical challenges.

On the business side, some companies will view SBOM disclosure as exposing too much intellectual property, even where the components are third party.

But the operational challenges are even more significant. More than one large company has invested multiple full-time equivalents in SBOM adoption over the past two years and is nowhere close to effective adoption. Even if accuracy were not an issue, there are still massive challenges:

  1. Most SBOM tools currently do a good job reporting metadata on only a subset of core technologies (languages, package managers, etc.).

  2. Companies are grappling with more than the collection of SBOMs. One needs to store them, manage them, distribute them (probably with some level of access control), and so on, especially when single software packages will inevitably have multiple associated SBOMs.

  3. In that vein, most companies are finding it an insurmountable challenge to cost effectively give people confidence that the SBOM they're looking at right now maps to the version of the software they're using right now.

  4. The challenge is even worse for most of the tech industry, where SaaS, microservices, and continuous deployment are all common. Almost all large organizations with a heavy technical bent have such dynamic systems that nobody can be expected to have a deep understanding of all the major components and their relationships. It's akin to a large, dynamic city like New York-- people usually understand their own neighborhood intimately, and several others well. And while they will have a rough sense of the city as a whole, they will be about as good as a non-resident at understanding the rest.

The last challenge is more significant than one might think because, despite any efforts to automate understanding of such systems, most such companies are still far from a state they consider ideal; plenty of time is spent internally "asking around".
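
One way the version-mapping problem might be tackled is to content-address SBOMs by a digest of the artifact they describe, so the mapping is checkable by anyone holding the bytes. This is a hedged sketch, not a description of any existing system; the in-memory store stands in for a real registry with access control and signing.

```python
# Sketch: key SBOMs by the SHA-256 digest of the artifact, so an SBOM
# can only be retrieved for the exact build it was published against.
import hashlib

sbom_store = {}  # digest -> SBOM document (illustrative storage only)

def digest(artifact_bytes):
    return hashlib.sha256(artifact_bytes).hexdigest()

def publish_sbom(artifact_bytes, sbom):
    key = digest(artifact_bytes)
    sbom_store[key] = sbom
    return key

def lookup_sbom(artifact_bytes):
    """Returns an SBOM only if one was published for these exact bytes."""
    return sbom_store.get(digest(artifact_bytes))

v1 = b"binary contents, version 1"
v2 = b"binary contents, version 2"
publish_sbom(v1, {"components": ["libfoo 1.2"]})

print(lookup_sbom(v1))  # -> {'components': ['libfoo 1.2']}
print(lookup_sbom(v2))  # -> None: no SBOM exists for this build
```

A scheme like this answers "does this SBOM describe the software I'm running right now?" mechanically, rather than by asking around.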

"Easy Buttons"

The purpose of this example is not to argue against the value of SBOMs. If the industry can clear the hurdles mentioned above to the point that adoption is essentially trivial, the downstream benefits are there.

What the SBOM example does show, however, is the security industry pushing work onto development organizations with little regard for the economics.

Here, we are referring not only to the costs of implementing and maintaining a very green solution, but also to the opportunity costs of asking organizations to drive SBOMs.

When SBOM adoption is such a significant project, organizations have less bandwidth for other security projects. However, once we can make SBOMs low cost, low risk, and trivial to implement, we can easily make another ask.

But the other asks the industry currently makes also have associated costs, and those costs compete for the same finite resources.

For instance, development organizations are also often asked to do all of the following things, which are also very rational asks:

  1. Review and act upon results from static and dynamic analysis tools, even though most of the results will lack sufficient context and thus be false positives.

  2. Sit through regular security training, even though data tends to show retention is usually negligible (except among individuals who are often recruited as 'security champions'-- many feel this is the primary value of such training).

  3. Perform 'threat models' for their applications or components, even though they rarely have the expertise required.

  4. Switch to 'memory safe' languages, which is, in many cases, mostly irrelevant, because the bulk of the most popular languages are already memory safe, including Python, JavaScript, Go, and Java. But for companies that are using C, the switching costs can be exorbitant (discussed below).

  5. Use technological best practices around things like cryptography, input validation, authentication, authorization and access control.

The last ask covers a lot of ground technologically and has had a huge, demonstrable practical impact, yet it is actually vastly less imposing, and more practical, than the others, both for customers to demand and for vendors to deliver on.

Two Decades of Unnoticed "Easy Buttons"

Twenty years ago, however, the same asks around technology would have been insurmountable.

That's because, in those areas, the security industry has taken complex topics full of landmines, and turned some of them into "easy buttons" that require virtually no effort.

Take secure web communication, for example. While the TLS protocol has a long history and has always aimed to abstract away complexity and common risks, it has only recently become trivial to adopt, after a multi-decade journey.

Specifically, while TLS's predecessor, SSL, did a lot to abstract away the many types of low-level cryptographic algorithms for the average developer, it was, for a long time, common to find connections "protected" with TLS that were nonetheless easy to man-in-the-middle.

This was a multi-pronged problem.

  • End-user usability-- putting up danger signs often led to people actively clicking through warnings they didn't understand, in ways that weren't in their best interests.

  • APIs-- for a long time, many APIs performed little to none of the recommended authentication (such as certificate and hostname validation) by default, and few developers knew there was even anything to do.

  • Operational-- like setting up testing environments, never mind provisioning and rotating production key material.

  • Technological-- such as complexity and competition (for instance, on authentication approaches, never mind the core protocol).

Nonetheless, we are at the point, as an industry, where "use TLS for network communication" is almost not even an ask. In most environments (there are some exceptions), it is roughly on par, cost-wise, with NOT using encryption. Being such a great "easy button" in most environments makes it incredibly easy to demand. For apps running on desktop or server processors, the requirement is so straightforward to both define and implement that it would make sense to talk about legal liability for companies that don't follow best practices when they should be able to.
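
Python's standard library is one concrete illustration of how far TLS defaults have come: the secure configuration is what a developer gets with no cryptographic knowledge at all. (This shows one stack's defaults, not a universal guarantee across languages.)

```python
# Modern TLS stacks make the safe configuration the default. In Python,
# create_default_context() enables certificate verification and hostname
# checking without the developer touching any cryptographic detail.
import ssl

ctx = ssl.create_default_context()

print(ctx.verify_mode == ssl.CERT_REQUIRED)  # True: peer certs are verified
print(ctx.check_hostname)                    # True: wrong-host certs are rejected

# Using it is one extra step on an ordinary socket, e.g.:
#   tls_sock = ctx.wrap_socket(sock, server_hostname="example.com")
```

Compare this to the early SSL era, when getting verification right required explicit callbacks that many developers simply omitted.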

User management requirements (identity / auth / etc) have made great strides towards being an "easy button", in that, unlike twenty years ago, there are commercial solutions that don't take a lot of knowledge or experience to get right, making it easy to outsource the most challenging pieces.

Twenty years ago, to have a web application, you needed to be the equivalent of an identity provider yourself. Today, few people would ever dream of it, and both the IDP integration work and the work incorporating things like 2FA is all reasonably easy, thanks to the tireless work of many people.

Though with user management, there's still a lot to be done-- not only around deemphasizing the password (e.g., via passkeys), but also because there's still a lot of inherent complexity, amplified for the moment by the process of resolving competition (both among vendors and among standards and technologies).

Still, it's moving quickly in the right direction. If the ecosystem hadn't progressed so much, development organizations would surely be less willing to spend cycles on things like SBOMs, which everyone should agree are a lower priority than user identity and auth issues.

Certainly, we can't expect that all security requirements could ever get down to zero effort (or close). But, the more we can drive the cost of implementing requirements to zero, the better off we all are.

Even hardware manufacturers have attempted to provide "easy buttons", implementing a wide array of technologies to fight common exploitation techniques (which is especially impactful for C applications). They've also done a lot to make cryptography cost effective, pushing a lot of the most expensive operations into hardware, making it more practical and cost effective to adopt things like TLS at scale.

"Easy Buttons": Bad for Security Businesses?

The many people involved in the above "easy buttons" are effectively modern day heroes. But we certainly don't have enough easy buttons.

For better or worse, to date the security industry has generally acknowledged that prevention is ideal, yet still has primarily focused on detection and response.

One could cynically say that vendors are better off economically treating symptoms, and not curing the core problems.

But really, the challenge is that it's hard to make things easy, especially when doing so often requires incredibly deep expertise and (as evidenced by TLS and auth) incredibly long journeys.

And while, for clear economic reasons, pursuing "easy buttons" is not likely to be the primary focus for most security vendors, it should be for the security industry as a whole.

People in the industry who are not vendors should be leading the charge to set industry priorities with a long-term view, and should look to nudge innovation funders (e.g., DARPA, NSF), standards bodies, and regulators to think in terms of such "easy buttons" for software.

To get the best outcomes, we have to focus on balancing complex costs, so we should drive innovation to simplify the implementation of solutions to our greatest risks.

And being wary of the limited resources in development organizations, we should try to manage our asks appropriately.

For instance, coming back to SBOMs, the industry should continue to demand evidence that the tools are being run, but defer turning the ratchet until the ecosystem and adoption are far more mature.

It's certainly possible for SBOMs to be something that can be collected automatically, to tie in data from compilers and from operational environments to reduce the noise, and to make it all trivial to operate, while giving people assurance that they're always looking at the right information.

But getting the entire industry to that point is not currently a focus of the community. In fact, the two leading file formats (SPDX and CycloneDX) are still fighting for dominance, which isn't particularly relevant from a security perspective, except that it's draining resources and attention away from everything the industry needs to do to make SBOMs as trivial to implement and benefit from as TLS.

Nevertheless, that should be our goal.

Recommendations

The security industry should work together and push hard on long-term initiatives to develop end-to-end solutions, in pursuit of making it easier for development organizations to provide better security by design for their users. This should be done keeping in mind the following principles:

  1. Target initiatives that will most clearly have high impact.

  2. Focus on making solutions as easy to adopt as possible.

  3. Push for initiatives to deliver short-term jumps in usability along the way. Even if a solution can eventually become trivial, as TLS did, it will definitely take years for any initiative to get there.

  4. Vendor neutral outcomes should be the goal for individual initiatives by the end of the process, even if current vendors are key to driving the necessary innovation at the beginning of it.

  5. Move in tandem with regulatory and compliance initiatives, so as to maximize the impact of developers' time every step of the way.

  6. Push for standards in regulatory compliance regimes that are both easy and require little understanding to adopt. For instance, with TLS, it's fine to mandate many technical requirements, as long as they can be abstracted to the point where libraries and tools can easily assert compliance.

Initiative Examples

  1. Development of a full-featured, open toolchain that is fully memory safe for embedded environments. Currently, despite the rhetoric around memory safety, the tooling and constraints in embedded environments are such that approximately 95% of embedded applications are written in C or C++. Merely making memory-safe languages available is not enough, given the constraints of those environments.

  2. Drive to make embedded systems trivial to build in a way that lets them be upgraded as easily as practicable, without an unwarranted increase in risk. Embedded environments pose one of our biggest risks, and not just because their software is typically built with memory-unsafe languages. There are many factors at play, mostly stemming from economic constraints, such as the lack of on-chip security features and the extreme difficulty of upgrading systems, especially when those systems have rare network connectivity, if any (this is true not just for consumer tech; it is especially true for critical SCADA systems). For instance, what can we do to retrofit SCADA systems to make them easier for their developers to patch? We need to make it far easier to manage the distribution and adoption of fixes, without driving up the operational risk associated with keeping such systems connected to public networks.

  3. Make it as easy and automatic as possible to trace software through its entire lifecycle INTERNAL to organizations, and to be able to leverage that data to automatically provide missing context that would greatly reduce false positives in today's software security analysis tools. (Today, the noise is great enough that the vast majority of alerts from these tools will not be reviewed).

  4. Current initiatives around third-party software risk, such as SBOMs and build attestation, should continue, but shift towards convergence, opening us up to drive ease of adoption and management.