Using the Live Application Inventory Feature of the Crash Override Platform


How to build and automatically maintain an application portfolio

To get access for free, join our early access program.

What an application actually is has never been well defined in the appsec community. I wrote an article about this last year, What the bloody hell is an application? You could argue that a definition doesn't matter, that it's just taxonomy, but people are securing ‘applications’ and the clue is in the name: appsec. If you can’t describe what something is, it’s like going into a barbershop and asking for a short haircut. A skinhead, a crew cut, short back and sides or, heaven forbid, even a mullet are all styles you might legitimately come out with.

Standards like PCI DSS say that you must maintain an asset inventory, but they are so out of touch with people building cloud-native applications it's not funny. Do you really want the serial number of the server hosting your front end? That said, what we have been hearing time and time again is that people want application inventories to support use cases like incident response and supply chain compliance. When the brown and smelly stuff hits the fan in production, who are you gonna call? It ain't Ghostbusters, and most people don't know who the business owners or developers are. The countless stories of people trying to find out who worked on an app are unbelievable. That use case is far from exclusive to security and has driven the rise of developer portals, which, by the way, we will integrate with soon and keep up to date for you automatically.

These days people are being asked to supply SBOMs for their applications. Business people and lawyers talk in terms of apps, not repos, so the mapping is left to the engineers, as, of course, is the hard work of gathering the data that follows. Supply chain management is ultimately about the supply chain for an application.

In my article What the bloody hell is an application?, I suggested that we should take a fairly maximal view of what an application is: a collection of code repos, hosts, infrastructure, services and data stores that work together to provide a specific set of functionality. Far from perfect, especially as I read it back now, but it was a start. On reflection, I think there is both a set of ‘things’ that make up an application and a set of attributes of those ‘things’, such as the code owners, service owners, endpoints, vulnerabilities, open source dependencies and so on. And that is quite a list. All of these ‘things’ have many-to-many relationships across an application portfolio and team environment.
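To make that concrete, here is a minimal sketch of such a data model. All names and fields here are hypothetical, chosen purely to illustrate the "things plus attributes" idea, and are not the platform's actual schema.

```python
from dataclasses import dataclass, field

# Illustrative sketch only: hypothetical names, not the platform's schema.
# An application is a collection of "things" (repos, services, ...) plus
# attributes of those things (owners, endpoints, dependencies, ...).

@dataclass
class Repo:
    name: str
    code_owners: list[str] = field(default_factory=list)
    dependencies: list[str] = field(default_factory=list)

@dataclass
class Service:
    name: str
    environment: str  # e.g. "Production", "Staging"
    endpoints: list[str] = field(default_factory=list)

@dataclass
class Application:
    name: str
    repos: list[Repo] = field(default_factory=list)
    services: list[Service] = field(default_factory=list)

# Many-to-many in practice: the same repo can belong to several
# applications, and one application can span many repos and services.
checkout = Repo("checkout", code_owners=["alice"], dependencies=["stripe==7.1"])
app = Application(
    "storefront",
    repos=[checkout],
    services=[Service("checkout-svc", "Production")],
)
```

Even this toy version shows why maintaining an inventory by hand gets painful: every attribute on every ‘thing’ changes independently over time.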

In that article I went on to share that we had been thinking about creating a tool, which we were calling Codeography, to profile code and determine attributes of it that can be fed into an application inventory. This is something we have been able to pick up again now that the core ZAP team is on board with the Open Source Fellowship. Knowing an application's tech stack, cloud service endpoints and data classification is extremely useful when configuring DAST, although we think Codeography, which will be open source, has far greater applications than just configuring DAST. When combined with the data that Chalk can pull in, you know everything about an application, from how it's made up to every change that has happened to it.

Designing and building a fully fledged application inventory (also often called an application portfolio) is clearly going to be complex, hard and time-consuming. We don't have all the requirements yet, let alone all the answers, but we have made a start with the Crash Override platform, which you can use for free now, so I wanted to share some screenshots and describe how it works.

The TL;DR of how the platform works is that you add a single additional step at the start of your CI/CD workflow. We call this chalking. It is basically like adding an AirTag to everything, which then automatically captures all changes that happen across the entire pipeline.
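As a rough sketch, that single step might look something like this in a GitHub Actions workflow. This fragment is illustrative only: the download URL is a placeholder, and the exact install and invocation details will depend on your pipeline, so check the Chalk documentation for the real steps.

```yaml
# Hypothetical CI job: the only change to an existing pipeline is
# invoking chalk around the build, so every artifact gets marked
# ("chalked") before it flows downstream.
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Install chalk (placeholder URL; see the Chalk docs for real install steps)
      - run: |
          curl -L https://example.com/chalk -o chalk
          chmod +x chalk
      # Wrap the container build so chalk can record repo, commit and
      # build provenance, and embed a chalk mark in the image.
      - run: ./chalk docker build -t myorg/myapp:${{ github.sha }} .
```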

In the platform you connect your GitHub and AWS organizations (other platforms are supported) and Chalk data can then be automatically mapped. One effect of this, to illustrate how it works under the hood, is that the platform then knows exactly what code (repo, commit, build provenance etc) is deployed where, when it happened, what changed etc, at any time. A lot of etcs, but there is a lot of useful data being captured.

Building an app inventory is a feature that is built into the platform and takes a few clicks once you have Chalk deployed and have connected your AWS and GitHub orgs. 

To get access for free, join our early access program.

Define Your Environments

The first thing you do is set up your environments. An environment is really nothing more than a collection definition in your workspace. The canonical environments are probably Production and Staging, but we know people have all sorts of naming conventions, so we support anything.

Today you can define an environment using AWS account names or account numbers, i.e. account number XXX-XXX is Production and account number YYY-YYYY is Staging. After you have defined an environment, any time we see a repo being deployed to a cloud service running in that environment, we will mark that repo as associated with it, i.e. repo ZZ-ZZ is a Production repo.

Note: we are considering improving this to allow selection by AWS tags, regions and other information, so if you would like to see more options, sign up and let us know.


Create an Application 

After you have defined your environments, you create an application. An application is the live collection of repos and services, and of course any attributes of, and changes to, any of those things.

You first give it a name.

Apps Image

You then start by choosing the repo.


You will notice at the foot of the repo listing a checkbox to add the mapped services. This is the first bit of magic.


Because Chalk has been deployed, the platform knows about all of the code builds and where they were deployed. This means that all you need to do is select the repos, and the platform automatically knows which services they are deployed to and, of course, the environments those services are in. It already has the data about all the changes that have flowed from those repos, so you get the full history of all changes, builds and data like the code owners since Chalk was configured.
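Under the hood, this amounts to querying the build and deploy observations Chalk has already collected. The record shape and names below are hypothetical, purely to illustrate the repo-to-service mapping:

```python
# Illustrative sketch: deploy observations linking a repo + commit to a
# running service and its environment (hypothetical record shape).
observations = [
    {"repo": "myorg/frontend", "commit": "a1b2c3", "service": "web",    "env": "Production"},
    {"repo": "myorg/frontend", "commit": "a1b2c3", "service": "web-qa", "env": "QA"},
    {"repo": "myorg/api",      "commit": "d4e5f6", "service": "api",    "env": "Production"},
]

def services_for(repo: str) -> list[dict]:
    """All services a repo has been observed deployed to."""
    return [o for o in observations if o["repo"] == repo]

# Selecting a repo automatically pulls in its mapped services
# and the environments those services run in.
selected = services_for("myorg/frontend")
envs = {o["env"] for o in selected}
```

The reverse direction, services back to repos, is just the same query with the filter flipped, which is why the platform can offer it but keep it switched off for UX reasons.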

We can automatically do the mapping backwards, from services to repos, but have chosen to disable that in the current release to make the UX easier. Depending on demand we may bring it back. We also know of other repos deployed to the same services, so we can ask you if you want to add those, and because we are always watching, any time something new affects the application, we know about it and can automatically update the application, or ask you if you want to.

App Details

You can see from the screenshot above that the platform has determined that the selected repo is deployed into four services, including Production and QA.

You can click into any service that maps to the repo.


You can see from the screenshot above that you immediately know all details about the cloud service itself. 


You can also see exactly what code commit is running in production. It's linked to the commit on GitHub so you can inspect the code change.


You can see all the deployments that have happened over time.


And you can inspect any build and soon see all the changes across a time period.


In a follow-up post, and in a platform update we expect to push in the next few weeks, I will show you exactly how you can see which libraries are running in production, and therefore which vulnerabilities are actually deployed versus merely present in a repo or in a container in a registry.

As I mentioned above, this is very much a first version, somewhat alpha/beta in feature depth, but from the feedback we have had, it's already beyond the static developer portals and, heaven forbid, spreadsheets people are using today.

In the coming weeks (yes, we release continuously) and months you will be able to see all of the vulnerabilities from OSV in your application views, and to search across any application and environment for libraries. Is this vulnerable version of this library actually in production? The Codeography work will also add data to the application definition, such as the tech stack, the endpoints that are exposed and the data being managed, and we are thinking about connecting the people attributes to corporate identity providers.
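That "is it actually in production?" question boils down to querying deployed dependency data rather than repo contents. A toy sketch, with entirely hypothetical data and names:

```python
# Illustrative sketch: answer "is this vulnerable library version
# actually running in Production?" from deployed dependency data,
# not from what merely sits in a repo or a registry.
deployed: dict[tuple[str, str], dict[str, str]] = {
    ("storefront", "Production"): {"openssl": "3.0.1", "requests": "2.31.0"},
    ("storefront", "Staging"):    {"openssl": "3.0.7", "requests": "2.31.0"},
}

def in_production(library: str, version: str) -> list[str]:
    """Return the applications running library==version in Production."""
    return [
        app
        for (app, env), deps in deployed.items()
        if env == "Production" and deps.get(library) == version
    ]

hits = in_production("openssl", "3.0.1")  # a hypothetically vulnerable version
```

The point of the sketch is the key: results are scoped to an (application, environment) pair, so a vulnerable version sitting only in Staging doesn't trigger a Production alarm.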

As has been the case since day one, we love feedback and ideas, and we want the roadmap to be shaped by users. If this would be useful for your company, sign up to the early access program for free.
