Cloud Architect Lead, Stuart Barnett, explains how Google Cloud can help you secure your software supply chains.
Anyone who has had the misfortune of watching me try to dance might be rightfully puzzled by me writing a blog with the above title - though probably rather less surprised by the ever-present typo. Similarly, any victim of my cooking would be amused to hear I’ve suddenly become an expert in preparing Latin American cuisine. Fortunately for all, this has nothing to do with either; although this rather light-hearted intro belies an altogether more serious problem: that of securing software supply chains.
For the record, a software supply chain is much like any supply chain in traditional manufacturing. The end product (software) that will be consumed (run) is assembled from a number of component parts (artifacts: libraries, source code, binaries, Docker images, etc.) via a number of tools and processes (CICD). These stages (source → build → deploy → run) form a chain, each element of which helps determine the contents and capabilities of the end product. Unfortunately, this chain also presents new attack vectors to malicious actors looking to exploit vulnerabilities introduced at any stage. The following figure illustrates a typical software supply chain, with the main threat vectors highlighted. As you can see, even a simple pipeline offers up multiple potential attack surfaces.
(credit: Google Cloud security blog, 21/06/21)
These so-called “supply chain attacks” have been big news recently (see SolarWinds, Codecov and Kaseya as examples), with huge consequences both for businesses and their customers - in each case, vulnerabilities in the supply chain were exploited by bad actors to distribute malware or ransomware. In the case of SolarWinds, the effects were so serious as to threaten critical US infrastructure - so much so that in May 2021 US President Joe Biden signed Executive Order 14028, “Improving the Nation’s Cybersecurity”, which explicitly calls out the need for US government departments and critical infrastructure providers to work to improve their software supply chain security.
Of course, those of you developing microservices are probably thinking “hang on - I have CICD pipelines for each of my n services… not to mention my infra pipelines…” - and yes, each of these could potentially have its own points of attack. In an enterprise, this could mean thousands of separate builds and supply chains. And how many of those microservices depend on open source software? As Eric Brewer, emeritus Google Fellow and founder of the Open Source Security Foundation, recently commented:
“99% of our vulnerabilities are not in the code you write in your application. They’re in a very deep tree of dependencies, some of which you may know about, some of which you may not know about.”
A big problem just got bigger.
“It’s just a jump to the left…”*
But it's not just about security - it's about working smarter and more efficiently. When discussing security in the software development life cycle (SDLC), you often find people talking of “shifting left” to improve security posture. What this typically means is that security concerns are addressed earlier in the SDLC - i.e. within your CICD pipelines. As the recent DORA DevOps research has shown, secure software supply chains are not only essential for all the reasons above but also:
“In addition to exhibiting high delivery and operational performance, teams who integrate security practices throughout their development process are 1.6 times more likely to meet or exceed their organizational goals. Development teams that embrace security see significant value driven to the business.”
- DORA State of DevOps Report 2021
As befits a world-leading hyperscaler, Google has long been working on solutions for securing all aspects of its own internal supply chains, and has been sharing these patterns and practices and developing tooling to help customers improve their security posture. Tools like Binary Authorization have been adapted from processes used to secure Google’s internal “Borg” container platform. Partly inspired by these internal approaches to securing software supply chains, and in collaboration with bodies such as the OpenSSF, Google has proposed a solution in the form of Supply chain Levels for Software Artifacts, or SLSA (pronounced “salsa” - ahh, now you get it 😉). SLSA is (deep breath): “an end-to-end framework for ensuring the integrity of software artifacts throughout the software supply chain”.
So - why a framework? And what's with the “levels”? Well, really it’s an acknowledgment that software supply chains are usually complex, multidimensional beasts with many dependencies. Identifying each piece of the puzzle and securing it can be a significant undertaking, potentially requiring considerable time and effort. Each level comprises a set of best-practice requirements covering the main supply chain concerns. Organisations can use these levels first to establish a baseline (i.e. achieve SLSA level 1), and then progressively enhance their security posture by moving up the levels of SLSA compliance, applying more advanced, secure practices throughout their supply chains.
SLSA Levels (credit: Google Cloud security blog, 21/06/21)
As an example, perhaps the simplest baseline principle is using a scripted process for performing builds (sounds obvious, huh?) - this is one of the required actions (though not the only one) for an SLSA level 1 build process. Further to this, your build process would need to generate some kind of provenance, i.e. metadata about how the artifact was produced. As we progress through the levels, we encounter additional requirements:
- SLSA level 2 requires the provenance record associated with the artifact to be tamper-proof. Your build service would need to be fully managed and able to generate that provenance directly.
- SLSA level 3 requires all build steps to be represented as code and executed in ephemeral build environments that cannot be reused or overlap with other processes.
- SLSA level 4 (currently the highest level) mandates hermetic builds (i.e. builds whose inputs are fully declared up front, with a complete provenance listing of all components and tools used) and two-person review for each change.
Put together, these levels can be used by organisations as guidelines, to help progressively enhance their software supply chain security strength based on industry standards.
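To make the provenance requirement above concrete: SLSA provenance is typically expressed as an in-toto statement attached to the built artifact. The sketch below follows the SLSA v0.2 provenance layout; the image name, repository URI, builder id and digests are all placeholders rather than output from any real build:

```json
{
  "_type": "https://in-toto.io/Statement/v0.1",
  "subject": [
    {
      "name": "europe-docker.pkg.dev/my-project/my-repo/my-app",
      "digest": {
        "sha256": "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"
      }
    }
  ],
  "predicateType": "https://slsa.dev/provenance/v0.2",
  "predicate": {
    "builder": { "id": "https://cloudbuild.googleapis.com/GoogleHostedWorker" },
    "buildType": "https://example.com/my-build-type@v1",
    "invocation": {
      "configSource": {
        "uri": "git+https://github.com/my-org/my-app",
        "entryPoint": "cloudbuild.yaml"
      }
    },
    "materials": [
      {
        "uri": "git+https://github.com/my-org/my-app",
        "digest": { "sha1": "0000000000000000000000000000000000000000" }
      }
    ]
  }
}
```

The `subject` identifies what was built, while the `predicate` records who built it, from what config, and from which source materials - exactly the metadata a level 2 tamper-proof provenance record needs to protect.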
It’s important to note that these are early days in SLSA’s development, and the higher-level requirements will certainly evolve and increase. As the programme develops, it is envisaged that we will progress from a series of recommendations around behaviours to tools and processes that can enforce such behaviours. However, software supply chain security is now so important that achieving the baseline levels of SLSA compliance should be considered an essential priority for anyone producing software - now, not sometime in the future. Indeed, SLSA-based standards are already having an impact on the industry. As examples:
- As of Release 1.23, the Kubernetes release engineering process is SLSA level 1 compliant (higher levels are to be targeted in subsequent releases)
- Cloud Build (GCP’s managed serverless CICD service) is SLSA level 1 compliant
- Google’s “distroless” base container images are now SLSA level 2 compliant
So some of the tools you may already be using are SLSA compliant to some degree. However, it's worth remembering that a chain is only as strong as its weakest link - so it's on you to ensure that every step is secure.
The Safety Dance
Fortunately, if you’re already using GCP services to deploy containerised workloads, you already have access to some of these solutions. Alongside Google’s zero trust “BeyondProd” recommendations for cloud-native application delivery, Google Cloud is developing a number of tools to help you implement secure CICD practices, based on those it has been using internally for years. Chief amongst these is the aforementioned Binary Authorization (BinAuthz), now available on both Anthos and standalone GKE clusters, as well as Cloud Run (Knative-based serverless).
BinAuthz provides software supply-chain security for container-based applications by supporting policies, rules, notes, attestations, attestors, and signers that can be used and updated as part of the CICD lifecycle. At deployment time, the Binary Authorization policy enforcer can check the signed attestations for a container to ensure its provenance (i.e. that it was built using the required steps and tools) before allowing deployment to the runtime to proceed.
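As an illustrative sketch of such a policy (the project id and attestor name here are placeholders, not real resources), a Binary Authorization policy that blocks any image lacking a signed attestation might look like this:

```yaml
# policy.yaml - minimal Binary Authorization policy (illustrative;
# "my-project" and "built-by-cloud-build" are placeholder names)
globalPolicyEvaluationMode: ENABLE   # exempt Google-maintained system images
defaultAdmissionRule:
  evaluationMode: REQUIRE_ATTESTATION
  enforcementMode: ENFORCED_BLOCK_AND_AUDIT_LOG
  requireAttestationsBy:
    - projects/my-project/attestors/built-by-cloud-build
```

A policy like this can be applied with `gcloud container binauthz policy import policy.yaml`; the enforcer will then block (and audit-log) any deployment whose image lacks a valid attestation from the named attestor.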
In a secured pipeline such as the example above:
- Code to build the container image is pushed to a git repository.
- A CI tool (like Cloud Build) builds and tests the container.
- The build pushes the container image to a container registry (like Artifact Registry) that stores your built images.
- A key management service (e.g. Cloud KMS) uses a cryptographic key pair to sign the container image. The resulting signature is then stored in a newly created and signed attestation.
- At deployment time, Binary Authorization verifies the attestation using the public key from the key pair, enforcing the policy by requiring signed attestations before the container image can be deployed.
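The build, push and sign steps above can be sketched as a single Cloud Build configuration. This is a hedged illustration rather than a drop-in pipeline: the Artifact Registry path, attestor and KMS key names are all placeholders, and the `sign-and-create` flags should be checked against the current Binary Authorization documentation:

```yaml
# cloudbuild.yaml - illustrative build -> push -> attest pipeline
steps:
  # 1. Build the container image from the checked-out source
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'europe-docker.pkg.dev/$PROJECT_ID/my-repo/my-app:$COMMIT_SHA', '.']
  # 2. Push the image to Artifact Registry
  - name: 'gcr.io/cloud-builders/docker'
    args: ['push', 'europe-docker.pkg.dev/$PROJECT_ID/my-repo/my-app:$COMMIT_SHA']
  # 3. Resolve the image digest (attestations reference digests, not tags),
  #    sign it with a Cloud KMS key, and store the result as an attestation
  - name: 'gcr.io/google.com/cloudsdktool/cloud-sdk'
    entrypoint: 'bash'
    args:
      - '-c'
      - |
        DIGEST=$(gcloud artifacts docker images describe \
          "europe-docker.pkg.dev/$PROJECT_ID/my-repo/my-app:$COMMIT_SHA" \
          --format='value(image_summary.digest)')
        gcloud beta container binauthz attestations sign-and-create \
          --artifact-url="europe-docker.pkg.dev/$PROJECT_ID/my-repo/my-app@$${DIGEST}" \
          --attestor="projects/$PROJECT_ID/attestors/built-by-cloud-build" \
          --keyversion="projects/$PROJECT_ID/locations/global/keyRings/binauthz/cryptoKeys/attestor-key/cryptoKeyVersions/1"
```

Note the `$$DIGEST` escaping: Cloud Build reserves single `$` for its own substitutions like `$PROJECT_ID` and `$COMMIT_SHA`, so shell variables inside a step need a doubled dollar sign.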
In this way, we have an automated, auditable process, whereby only container images produced and tested within our pipeline will be allowed for deployment on our cluster - so we’re well on our way to some degree of SLSA compliance for both build and provenance.
Of course, that’s just part of the puzzle - for instance, we could also:
- Get insights into software dependencies using sources like Open Source Insights, or use automated tooling like dependabot or renovate to keep these dependencies updated.
- Use Artifact Registry and the Container Analysis API to scan artifacts and add further signed attestations, and ensure our policies check for these too.
- Use open-source tools such as Kritis and Voucher to provide attestations for other processes, and integrate with other 3rd party tools.
- Use Cloud Build private pools to ensure our CI pipeline remains within our customer network.
- Use a Policy Controller to enforce container guardrails both within our CI pipeline and at runtime in the cluster.
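As a small, concrete example of the first bullet, a Dependabot configuration that keeps a Dockerfile's base image up to date could be as simple as the following (the ecosystem and schedule shown are just one reasonable choice):

```yaml
# .github/dependabot.yml - weekly checks for stale base images (illustrative)
version: 2
updates:
  - package-ecosystem: "docker"   # watch FROM lines in Dockerfiles
    directory: "/"                # location of the Dockerfile
    schedule:
      interval: "weekly"
```

Keeping dependencies fresh in this automated, reviewable way complements the attestation-based controls above: the former reduces the vulnerabilities entering your supply chain, the latter proves what actually went through it.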
These are just some of the latest features made available by Google (and others) in this area, and there will be more to come. At CTS, we’re all too well aware of how important this is going to be for our customers, and how we can use these tools and practices to help them. Maybe now really is the time to face the music - and dance, SLSA style.
If you want to know more about how to secure your software supply chains with GCP and open source solutions with CTS, please contact us.
*Please note that the “Time Warp” is not officially a salsa-based dance.