Kubernetes Part 2: Continuous Deployment and GitOps
Second in this series, this post covers how to make your application deployments reproducible and auditable, focusing on GitOps and CI/CD in a Kubernetes world.
Platform Engineer at CTS
The first post in this series discussed securing your cluster and the workloads running inside it. In this post, I will focus on GitOps and CI/CD in a Kubernetes world, using GCP managed services and third-party open-source tools that make life easier. As with the last post, we’ll start with a question:
Are my application deployments reproducible and auditable?
If the answer is “no” then you should consider using GitOps as a method of deploying your applications to Kubernetes.
GitOps, in a nutshell, means that you use Git as your source of truth for Kubernetes rollouts and deploying these using some kind of CD tooling or automation. A major benefit of this workflow, and one of the key factors in its development, is that it is good for developers as it allows them to follow the same “PR, review, merge, deploy” flow that they use for their code when developing Kubernetes manifests and infrastructure (truly facilitating DevOps!).
This way of working also ensures that all code is auditable, as every deploy can be attributed to a Git commit, allowing you to find the root cause of any problems much faster and roll back easily with a simple revert. This is also key to the reproducibility of this workflow - you can easily select a commit and deploy it to reproduce the exact state of the infrastructure at that commit.
Continuous Integration on GCP
The first stage of deploying to Kubernetes can be referred to as Continuous Integration (CI). This stage will be where you actually build your Docker images from your code. This stage can also involve testing the code prior to building and scanning the Docker image for vulnerabilities, but this was covered in my last post on security in Kubernetes so I will not focus on that here.
A key component of any CI pipeline on GCP is Cloud Build. This service lets you run build steps in a pipeline using Docker images, and lets the pipeline be triggered by changes to a Git repo, which is really important in facilitating GitOps. Google provides a set of officially supported builder images, including Docker, gcloud, git, and npm. You can also use a host of community cloudbuilders, written by Cloud Build users who noticed a need for them. These images aren’t officially managed, so you need to build them yourself.
A pipeline in Cloud Build can be defined in a couple of ways, the first of which is a “cloudbuild.yaml” file. This file contains steps, environment variables, substitutions, tags, etc. to define a pipeline in a simple, easy-to-read configuration.
The most common Kubernetes Continuous Integration pipeline in Cloud Build looks like the following:
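The exact config will vary from project to project, but a minimal sketch of such a pipeline (assuming an application image named “app”, built from a Dockerfile at the repo root) might look like this:

```yaml
steps:
  # Build the application image, tagged with the commit that triggered the build
  - name: 'gcr.io/cloud-builders/docker'
    args:
      - 'build'
      - '-t'
      - 'gcr.io/$PROJECT_ID/app:$SHORT_SHA'
      - '.'

# Any image listed here is pushed to the registry after the steps complete
images:
  - 'gcr.io/$PROJECT_ID/app:$SHORT_SHA'
```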
The first part of this config defines the step that builds your application code into its Docker image. Note the variables (“PROJECT_ID” and “SHORT_SHA”) used here, which are two of many built-in substitutions you can use without any extra configuration. “PROJECT_ID” resolves to the ID of the project the build is running in, and “SHORT_SHA” to the short hash of the Git commit that triggered the build.
Next, the “images” section takes a list of images that have been built in the pipeline and pushes them to the registry. This can be a Docker registry or Google’s Container Registry (GCR) - used here - which is an image registry using Google Cloud Storage (GCS) buckets as a backend. Using GCR is a good choice because access can be managed with Google Cloud’s Identity & Access Management (IAM) (rather than Docker registry credentials) which is more secure as fewer secrets are around to be exposed.
This is all well and good, but now that we have built a new image, we need to somehow get the new tag into our manifests. This is where Kustomize comes in…
Kustomize is a tool from a Kubernetes Special Interest Group (SIG), now integrated into “kubectl”, that allows you to customize your manifests without the need for templating. You can use Kustomize to define overlays based on differences in environment and region, for example, if you have a multi-environment, global deployment. This is done by defining a base set of manifests that apply to all clusters, regardless of environment or region, and then patching or merging these with other, more specific manifests, which reduces repetition in your YAML.
For CI, though, we will just be focusing on the “images” section of Kustomize’s features. This allows you to edit the image that a deployment uses declaratively with a simple command. The declarative aspect of this is important as this needs to be reflected in Git to follow GitOps - editing the deployment itself on the fly using “kubectl” wouldn’t work.
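Concretely, the command is “kustomize edit set image”, run from the directory containing your Kustomize config; the image name and tag here are illustrative:

```shell
# Rewrite every reference to the image "app" to a specific image and tag.
# This edits kustomization.yaml on disk, so the change can be committed to Git.
kustomize edit set image app=gcr.io/my-project/app:abc1234
```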
In this example, we are assuming your manifests live in the same repo as your application. This is for simplicity but it is recommended that you separate your config from your code wherever possible.
For this example, we will use a simple deployment file living in our repo at “manifests/deployment.yaml”. Note the image name is simply “app” here:
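A sketch of such a deployment follows; the labels and replica count are illustrative:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app
  template:
    metadata:
      labels:
        app: app
    spec:
      containers:
        - name: app
          image: app  # Kustomize will replace this with the full image and tag
```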
The Kustomize config for this will also be simple and will live at “manifests/kustomization.yaml”:
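A minimal “kustomization.yaml” for this layout might be:

```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - deployment.yaml
```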
This means that when Kustomize runs, it will include the resources defined in “deployment.yaml”.
The “cloudbuild.yaml” should now look like:
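Assuming a community Kustomize cloudbuilder has been built and pushed to your own project’s registry (the builder image name here is an assumption), the updated pipeline might look like:

```yaml
steps:
  # Build the application image as before
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'gcr.io/$PROJECT_ID/app:$SHORT_SHA', '.']

  # Community cloudbuilder: declaratively update the image tag in the
  # Kustomize config, so the change can be committed back to Git
  - name: 'gcr.io/$PROJECT_ID/kustomize'
    dir: 'manifests'
    args: ['edit', 'set', 'image', 'app=gcr.io/$PROJECT_ID/app:$SHORT_SHA']

images:
  - 'gcr.io/$PROJECT_ID/app:$SHORT_SHA'
```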
This now has a new step which uses the community cloudbuilder for Kustomize to set all instances of an image named “app” to our newly built image and tag using the Kustomize configuration.
This would lead to the “kustomization.yaml” becoming (with the variables substituted for the actual project ID and short commit hash of your image):
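With placeholder values for the project ID (“my-project”) and short commit hash (“abc1234”), the result would look like:

```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - deployment.yaml
images:
  - name: app
    newName: gcr.io/my-project/app
    newTag: 'abc1234'
```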
The next step of your “cloudbuild.yaml” should use the Git cloudbuilder image and push this change to your repository ready for Continuous Deployment to take over...
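Such a step might be sketched as follows; the branch name, committer identity, and the assumption that Cloud Build has push credentials for the repo are all things you would need to adapt:

```yaml
  # Commit the updated Kustomize config back to the repo so CD can pick it up
  - name: 'gcr.io/cloud-builders/git'
    entrypoint: 'bash'
    args:
      - '-c'
      - |
        git config user.email "cloudbuild@example.com"
        git config user.name "Cloud Build"
        git add manifests/kustomization.yaml
        git commit -m "Update app image to $SHORT_SHA"
        git push origin HEAD:main
```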
Continuous Deployment
Continuous Deployment (CD) is the second stage of CI/CD. It involves actually deploying your application onto the infrastructure it runs on, in an automated fashion (hence continuous).
For Kubernetes and GitOps, my tool of choice for CD is ArgoCD.
This is a Continuous Deployment application for Kubernetes built with GitOps in mind. Its job is to watch your Git repository and ensure that your cluster’s state matches the state defined in your manifests there. When an update is pushed to your manifests, ArgoCD will pick those up and apply them to the cluster you tell it to.
ArgoCD also runs on the same infrastructure as it deploys to. It is a great example of a microservices application and can be deployed completely declaratively - it can even manage the continuous deployment of itself!
On top of this, it supports the use of Kustomize, Helm and native Kubernetes to deploy meaning it is a wide-ranging solution fit for most purposes. There’s no reason not to start using it to aid you on your journey to GitOps!
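As an illustration, a hypothetical ArgoCD Application manifest pointing at the “manifests” directory from the CI example might look like this (the repo URL and namespaces are placeholders):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/app.git
    targetRevision: main
    path: manifests  # ArgoCD detects the kustomization.yaml here automatically
  destination:
    server: https://kubernetes.default.svc
    namespace: default
  syncPolicy:
    automated:
      prune: true     # delete resources removed from Git
      selfHeal: true  # revert manual changes to match Git
```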
This blog just scratched the surface of what you can do with the tools discussed. Kustomize, Cloud Build and ArgoCD are expansive and feature-rich, and the best way to figure out all the things you can do with them is to try them and start customising your pipeline to work for you.
Getting your pipeline right early on is a great way to free up the development of your application and manifests. You can stop worrying about the manual steps needed to turn your code into a live application - no more building Docker images locally or running fiddly sed commands to update manifests - just push to Git, wait, and see it live in minutes!
Now that we have GitOps underway, the next blog in this series will discuss secret management on Kubernetes without compromising your GitOps flow or your secret data.