Using Kubernetes for CI Build Jobs and Generic Processing Tasks – part 1

Joseph Sibony
July 22, 2021

You have probably heard about Kubernetes, and according to the CNCF 2020 survey, you are most likely already using it or in the process of moving production workloads to it. Kubernetes is great for running your production workloads (as well as your staging/dev environments), but it is also very useful for miscellaneous processing tasks and build jobs. Having the raw, on-demand computing power of a Kubernetes cluster is extremely useful for any DevOps engineer to have in their toolbelt.

This is the first in a 2-part series that will show you how to utilize Kubernetes for these use cases. For the second part, click here.

Also, if you’d like, here’s some light reading on the topic of Kubernetes: Docker vs Kubernetes – Should We Really Compare?

Our Goals

Before we dive into the technical details, what is it actually that we want to achieve? And why?

Let’s first understand the kind of processing jobs we would like to run. The simplest example is a build job that needs to compile code for different OS architectures. This kind of job can easily be parallelized by running each OS architecture compilation in parallel. The example extends to other processing jobs a DevOps engineer might encounter, like generic data processing tasks or any other time-consuming processing job.

In my previous posts, I wrote about simple, mostly single-machine parallelization solutions and about using features of CI systems for parallelization. While those solutions are very suitable for many CI requirements, they are geared toward the context of a CI system and might be limited in some ways. The Kubernetes solution, on the other hand, is not limited in this way and allows full flexibility, both in terms of the available features and in terms of the available computing power. That makes it a useful tool worth investing some time in, so that you have it available when you need it.

In this post, I will review the basic building blocks of Kubernetes – the Docker container and the Kubernetes pod. In my next post, I will demonstrate more robust and feature-rich abstractions using Kubernetes Job objects. The posts do not assume any prior knowledge of Docker or Kubernetes, but general proficiency with Linux, Git, and Bash scripting is expected.

Running the Example Code

To run the code you will need Docker installed locally, kubectl configured with access to a Kubernetes cluster, a GitHub account with a personal access token, and your own fork of the example code repository. It’s recommended to run the commands yourself as you follow along with the post.

All the commands shown in the post should be run from the root of the repository. So, after you fork the code repository, clone it and open a terminal at the repository’s root directory.

To make the code samples easier to run, set the following environment variables in your shell (replace YourGitHub* values with your relevant details): 

export GITHUB_TOKEN=YourGitHubPersonalAccessToken
export GITHUB_USER=YourGitHubUserName

Also, set the following environment variable which will make the following code samples shorter and easier to follow:

export BUILDER_IMAGE=ghcr.io/orihoch/k8s-ci-processing-jobs-builder

The Basic Unit of Work – A Docker Image/Container


The basic unit of work in Kubernetes is a Docker image which is used to run a container on our cluster. You will need to define an image for each type of job that you will want to run. This could be anything from a simple build job, like in this example, to more complex testing/integration/deployment tasks.

For our examples, we will use a simple Go build job that builds a Hello World binary for various operating system architectures. Note that I picked Go for this example, but it could be C++ or any other build job or processing task. The Docker image is defined using a Dockerfile; this is where you install the required system dependencies for your task. For this example, I use a golang base image which already contains all the required dependencies. I then add the Go source code which the task will compile and the entrypoint.sh script, a Bash script that handles the compilation:

# builder/Dockerfile
FROM golang:1.16
# jq is used by entrypoint.sh to parse GitHub API responses
RUN apt-get update && apt-get install -y jq
COPY main.go src/
COPY entrypoint.sh ./
# default build target, overridden by entrypoint.sh at runtime
ENV GOOS=linux
ENV GOARCH=amd64
ENTRYPOINT ["./entrypoint.sh"]
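If you prefer to build the image yourself rather than pull the published one, a standard docker build from the repository root should work (a sketch, assuming main.go and entrypoint.sh sit next to the Dockerfile in the builder/ directory):

# build the builder image locally, tagged with the same name used throughout the post
docker build -t $BUILDER_IMAGE builder/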

The entrypoint.sh script handles two functions – listing the available OS architectures and the build job itself:

#!/usr/bin/env bash
# builder/entrypoint.sh
if [ "${1}" == "--list" ]; then
  # list all OS/architecture pairs supported by the Go toolchain
  exec go tool dist list
else
  # split the "os/arch" argument into the GOOS/GOARCH variables Go expects,
  # fetch the release's upload URL from the GitHub API, compile, then publish
  export GOOS="$(echo "${1}" | cut -d"/" -f1)" &&\
  export GOARCH="$(echo "${1}" | cut -d"/" -f2)" &&\
  REPO_USER="${2}" &&\
  TAG="${3}" &&\
  if [ "${GOOS}" == "windows" ]; then EXT=.exe; fi &&\
  echo GOOS=$GOOS GOARCH=$GOARCH REPO_USER=$REPO_USER TAG=$TAG EXT=$EXT &&\
  UPLOAD_URL="$(curl -s -u "${REPO_USER}:${TOKEN}" -H "Accept: application/vnd.github.v3+json" "https://api.github.com/repos/${REPO_USER}/k8s-ci-processing-jobs-examples/releases/tags/${TAG}" | jq -r .upload_url | cut -d'{' -f1)" &&\
  echo UPLOAD_URL=$UPLOAD_URL &&\
  echo Compiling... &&\
  go build -o bin/main src/main.go &&\
  echo OK &&\
  echo Publishing... &&\
  curl -s -u "${REPO_USER}:${TOKEN}" -X POST -H "Accept: application/vnd.github.v3+json" \
    -H "Content-Type: application/x-executable" --data-binary @bin/main \
    "${UPLOAD_URL}?name=hello-world-${GOOS}-${GOARCH}${EXT}" &&\
  echo OK
fi

You can review all the image files in the code repository under the builder/ directory.

The image can be run locally to test its functionality and make sure the script works for you before deploying to Kubernetes. The image is published to the GitHub Docker registry (you can see the CI script which does that here).

Run the following command to list the available OS architectures which the build script supports:

docker run $BUILDER_IMAGE --list

The docker run command creates a container based on the image defined in $BUILDER_IMAGE (ghcr.io/orihoch/k8s-ci-processing-jobs-builder). The --list argument is passed to the container and handled by the entrypoint script, which lists the available OS architectures (you can see the specific code which handles that here).
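The output is one OS/architecture pair per line, in the same os/arch form the build commands below expect; abridged, it looks like this:

aix/ppc64
android/amd64
...
linux/386
linux/amd64
...
windows/arm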

To build and publish a binary, you will first need to publish a release in your GitHub repository (this is the fork you made of the k8s-ci-processing-jobs-examples repository). The following examples assume you published a release named “v0.0.1”.
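If you prefer to stay on the command line, you can create the release through the same GitHub API the entrypoint script uses (a sketch; the request body is the minimal one needed to create a release for the given tag name):

# create a "v0.0.1" release on your fork via the GitHub releases API
curl -s -u "$GITHUB_USER:$GITHUB_TOKEN" -X POST \
  -H "Accept: application/vnd.github.v3+json" \
  -d '{"tag_name": "v0.0.1"}' \
  "https://api.github.com/repos/$GITHUB_USER/k8s-ci-processing-jobs-examples/releases"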

The following command will run a few builds sequentially with different OS architectures: 

docker run -e TOKEN=$GITHUB_TOKEN $BUILDER_IMAGE linux/386 $GITHUB_USER v0.0.1 &&\
docker run -e TOKEN=$GITHUB_TOKEN $BUILDER_IMAGE linux/amd64 $GITHUB_USER v0.0.1 &&\
docker run -e TOKEN=$GITHUB_TOKEN $BUILDER_IMAGE darwin/arm64 $GITHUB_USER v0.0.1 &&\
docker run -e TOKEN=$GITHUB_TOKEN $BUILDER_IMAGE windows/arm $GITHUB_USER v0.0.1

Explanation of the arguments: 

  • -e TOKEN=$GITHUB_TOKEN: We pass an environment variable into the container containing your GitHub token, so that the builder script can add the compiled asset to the release.
  • $BUILDER_IMAGE windows/arm $GITHUB_USER v0.0.1: The builder image, followed by arguments specifying the OS architecture to compile and the GitHub user and release name to add the asset to.

Once the containers have finished running, you should be able to see the published artifacts in your GitHub repository release.
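You can also verify this from the command line with the same releases API (a sketch, assuming you have jq installed locally; jq extracts just the asset names):

# list the asset names attached to the v0.0.1 release
curl -s -u "$GITHUB_USER:$GITHUB_TOKEN" -H "Accept: application/vnd.github.v3+json" \
  "https://api.github.com/repos/$GITHUB_USER/k8s-ci-processing-jobs-examples/releases/tags/v0.0.1" \
  | jq -r '.assets[].name'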

Deploying the Container as a Pod Using Kubectl

Once you have the Docker image, you need a way to run it on the cluster. The most basic building block of any workload in a Kubernetes cluster is the pod. Using the kubectl CLI, you can quickly deploy the container on the cluster.

First, create a new release on your GitHub repository, so we can see the published artifacts there. The following examples assume you named it “v0.0.2”.

The kubectl run command allows you to quickly deploy pods to the cluster. Run the following command to deploy a single pod on the cluster:

kubectl run builder-linux-386 --restart=Never --env=TOKEN=$GITHUB_TOKEN \
    --image=$BUILDER_IMAGE -- linux/386 $GITHUB_USER v0.0.2

Let’s see the meaning of all the arguments in that command: 

  • builder-linux-386: The name of the pod that will be created. Names must be unique – there can’t be two pods with the same name in the same namespace.
  • --restart=Never: Kubernetes by default restarts pods aggressively; for a build script that needs to run only once, we need to disable that behavior.
  • --env=TOKEN=$GITHUB_TOKEN: Adds an environment variable that will be available inside the container.
  • --image=$BUILDER_IMAGE: The image we want to deploy.
  • -- linux/386 $GITHUB_USER v0.0.2: All arguments after the -- separator are passed to the container. These are the same arguments we used for the docker run command.
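As an aside, if you want to inspect (or save) the pod manifest that kubectl generates instead of creating the pod right away, you can add kubectl’s client-side dry-run flags:

# print the generated pod manifest as YAML without creating anything
kubectl run builder-linux-386 --restart=Never --env=TOKEN=$GITHUB_TOKEN \
    --image=$BUILDER_IMAGE --dry-run=client -o yaml -- linux/386 $GITHUB_USER v0.0.2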

Run the following commands to start two more pods which compile for different OS architectures:

kubectl run builder-linux-amd64 --restart=Never --env=TOKEN=$GITHUB_TOKEN \
    --image=$BUILDER_IMAGE -- linux/amd64 $GITHUB_USER v0.0.2
kubectl run builder-darwin-arm64 --restart=Never --env=TOKEN=$GITHUB_TOKEN \
    --image=$BUILDER_IMAGE -- darwin/arm64 $GITHUB_USER v0.0.2

We can check on the status of the created pods using the following command: 

kubectl get pods

You should see 3 pods running in parallel and transitioning between statuses until they reach `Completed` status. You should then be able to see the binaries in your release on GitHub. 
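Instead of polling, you can also stream status changes as they happen with the watch flag:

# watch pod status updates until you interrupt with Ctrl+C
kubectl get pods -w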

You should always remember to delete pods when they have finished running, to clean up and prevent clutter in your cluster. This can be done using the following command:

kubectl delete pods builder-linux-386 builder-linux-amd64 builder-darwin-arm64
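Listing every pod name gets tedious as the number of pods grows. One common alternative (a sketch; the label name app=builder is arbitrary) is to attach a shared label when creating the pods, e.g. kubectl run ... --labels=app=builder, and then delete them all in one command:

# delete every pod carrying the shared label
kubectl delete pods -l app=builder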

What Happens When the Build Breaks? – Debugging Workloads

Occasionally, and hopefully rarely, the build will break. Let’s see how we can debug the pod running on Kubernetes.

Let’s deploy a builder pod which will not work, as it’s using an unsupported OS architecture:

kubectl run builder-linux-quantum --restart=Never --env=TOKEN=$GITHUB_TOKEN \
    --image=$BUILDER_IMAGE -- linux/quantum $GITHUB_USER v0.0.2

Check the pod status: 

kubectl get pod builder-linux-quantum

You will see the status as `Error`. You can get the log of the builder job to get more details:

kubectl logs builder-linux-quantum

You will see an error that linux/quantum architecture is not supported (as expected, compiling to quantum computing architecture is indeed not supported yet). 
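Note that logs only exist if the container actually started. For failures that happen earlier (image pull errors, scheduling problems and the like), kubectl describe shows the pod’s event history:

# inspect the pod's configuration and recent events
kubectl describe pod builder-linux-quantum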

Sometimes the log messages are not enough and you will want to run some code on the pod itself. This can easily be done by overriding the default entrypoint and running an interactive shell instead:

kubectl run builder-linux-quantum-shell --restart=Never --env=TOKEN=$GITHUB_TOKEN \
    --image=$BUILDER_IMAGE -it --command -- bash

You should now be in an interactive shell session inside the pod, where you can run the entrypoint script manually to debug it:

./entrypoint.sh linux/quantum YourGitHubUserName v0.0.2

Remember to clean up the created pods to prevent clutter in your cluster using the following command:

kubectl delete pods builder-linux-quantum builder-linux-quantum-shell

Summary

In this post, we saw how you can deploy processing workloads to Kubernetes. Depending on your skill and experience with Docker and Kubernetes, there might be a lot to take in. If you find that you are missing some knowledge of these topics, I recommend reading the official documentation; both Docker and Kubernetes have great documentation for all skill levels.

While the basic building blocks we saw, the container and the pod, are useful by themselves, they have limitations, mainly that they require a lot of manual work: each pod has to be started by itself, so if, following the example job, you want to build all 44 supported OS architectures, you will have to start 44 pods. You can, of course, do that quite easily using some Bash or Python scripting, as sketched below, but then you also have to keep track of these jobs and retry failed ones yourself. Kubernetes provides several abstractions on top of the basic pod which provide these sorts of features. In my next blog post, I will show how to expand on our example using the Kubernetes Job object.
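For illustration, a minimal version of such a loop might look like this (a sketch with no job tracking or retries; the pod name is derived from the architecture by replacing the slash, which is not a valid character in pod names):

# launch one pod per supported OS architecture (sketch: no tracking, no retries)
docker run $BUILDER_IMAGE --list | while read -r ARCH; do
  kubectl run "builder-$(echo "$ARCH" | tr '/' '-')" --restart=Never \
      --env=TOKEN=$GITHUB_TOKEN --image=$BUILDER_IMAGE -- "$ARCH" $GITHUB_USER v0.0.2
done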
