About a month ago someone posted a link on r/selfhosted to their blog article about setting up your own self-hosted Kubernetes GitHub runners. Around this time I had just gotten my GitHub Enterprise instance working with Actions and such, so I was quite excited to see this.

Originally I had attempted to install a self-hosted GitHub runner on one of my servers, but because I was missing Node it didn't run properly. I then came across the source GitHub provides for setting up the runner environments they deploy for GitHub.com users. However, these are full-on Ubuntu environments with everything you could think of installed in them. If I recall correctly they were about 80-90GB in size. Nonetheless I ended up setting up a couple of them as VMs. I quickly realized maintaining and keeping them updated would be another task I really didn't have the time for. This method didn't really make sense for me, especially since most of the stuff I was doing with GitHub Actions was being performed in Docker.

Thankfully this kind fellow put together a guide which walks you through setting up GitHub runners in a Kubernetes environment. I'm a complete newb to Kubernetes so this was an excellent opportunity for me to learn some more! While I followed most of the guide, there were a couple of things I did differently. In this article I'll go from nothing to running runners in your Kubernetes cluster!

I opted to go with k3s because it's something I am familiar with setting up and using. It's really easy to install and set up! I first set up 3 Ubuntu 20.04 VMs on my Proxmox server. I allocated 2 cores, 40GB of disk space and 4GB of RAM to what would be my master node. My other 2 nodes consisted of 8 cores, 16GB of RAM and 250GB of disk space each. This may be overkill, but I had the resources to spare on the system. Make sure you disable swap on your systems. I did this by editing the /etc/fstab file and commenting out the line for swap.
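If you'd rather handle the swap part from the shell, something like this works (the sed pattern is just one way to comment out the swap entry):

sudo swapoff -a
sudo sed -i '/\sswap\s/ s/^/#/' /etc/fstab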

Once each VM was set up, I made sure to run apt update && apt upgrade on each one to ensure everything was as up to date as possible. I also like to use dpkg-reconfigure tzdata to set the timezone of each VM to my timezone.
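For reference, that boils down to running the following on each VM:

sudo apt update && sudo apt upgrade -y
sudo dpkg-reconfigure tzdata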

Next get Docker installed on your master and worker nodes.

sudo apt install apt-transport-https ca-certificates curl software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu focal stable"
sudo apt update
sudo apt install docker-ce
sudo systemctl status docker
sudo usermod -aG docker $LINUX_USERNAME

I personally use PostgreSQL with k3s. You can choose whatever option you'd like; there are a few to pick from. Here are a couple of quick commands I used to set up my PostgreSQL user and database:

CREATE USER k3s WITH ENCRYPTED PASSWORD '$PASSWORD';
CREATE DATABASE k3s;
GRANT ALL PRIVILEGES ON DATABASE k3s TO k3s;
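Assuming PostgreSQL is installed locally and you're using the default postgres superuser (adjust for your own environment), you can run those statements from the psql shell:

sudo -u postgres psql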

Next install k3s on your master node:

curl -sfL https://get.k3s.io | sh -s - --datastore-endpoint 'postgres://$USERNAME:$PASSWORD@ip.add.ress:5432/k3s?sslmode=disable' --write-kubeconfig-mode 644 --docker --disable traefik --disable servicelb

This will install k3s in master node mode, use Docker instead of containerd, and disable Traefik and the service load balancer.

Grab your token which will be needed to set up the worker nodes. You can find the token at /var/lib/rancher/k3s/server/node-token.
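From the master node:

sudo cat /var/lib/rancher/k3s/server/node-token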

On your worker nodes, get k3s installed in agent mode by using these commands:

export K3S_URL=https://master-node-ip-address-or-url:6443
export K3S_TOKEN=K1009809sad1cf2317376e1fc892a7f48983939442479i987sa89ds::server:e28d3875948350349283927498324
curl -fsL https://get.k3s.io | K3S_URL=$K3S_URL K3S_TOKEN=$K3S_TOKEN sh -s agent --docker

This sets your master node URL and token as variables and then uses those variables to install k3s. You'll notice I specify -s agent, which tells the install script to set up k3s in agent mode. Note that --disable traefik and --disable servicelb are server-side options, so they only belong on the master node install. Given that the GitHub runners don't need to receive incoming traffic, I found Traefik and the service load balancer unnecessary, which is why they're disabled there.

If everything went well, you can run kubectl get nodes from your master node and it should show your 3 nodes:

jimmy@kubemaster-runners-octocat-ninja:~$ kubectl get nodes
NAME                               STATUS   ROLES    AGE     VERSION
kubemaster-runners-octocat-ninja   Ready    master   6d17h   v1.18.9+k3s1
kubenode1-runners-octocat-ninja    Ready    <none>   6d17h   v1.18.9+k3s1
kubenode2-runners-octocat-ninja    Ready    <none>   6d16h   v1.18.9+k3s1

I also like to run this command to ensure that no jobs are scheduled on my master node; it's not required though:

kubectl taint node $masterNode k3s-controlplane=true:NoSchedule

This part is also not required, but I'm a Kubernetes newbie so having a GUI is helpful. First, install Helm 3:

curl -O https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3
bash ./get-helm-3 

You can confirm your helm version by using helm version. Next we need to add the Rancher charts repository:

helm repo add rancher-stable https://releases.rancher.com/server-charts/stable

This adds the stable version of the charts, but you can use latest as well. Next create a namespace for Rancher:

kubectl create namespace cattle-system

Next we'll install Rancher using this command:

helm install rancher rancher-stable/rancher \
    --namespace cattle-system \
    --set hostname=rancher.octocat.ninja \
    --set tls=external

You can use kubectl -n cattle-system rollout status deploy/rancher to keep an eye on the deployment. It took around two minutes, probably less, to install for me. Once that's done, I assigned an external IP to the rancher service:

kubectl patch svc rancher -p '{"spec":{"externalIPs":["192.168.1.5"]}}' -n cattle-system
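To double-check that the external IP took, you can inspect the service:

kubectl -n cattle-system get svc rancher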

Now you'll obviously want to make sure whatever IP you assign is routed to the system. Next, if you have a domain pointing to the system you can use that to access Rancher, or you can just use the IP. Once you're in Rancher, I recommend creating a new project; I made one called 'GitHub Runners'. Next create a new namespace called docker-in-docker. You can do this from the command line or from within Rancher.

kubectl create ns docker-in-docker

If you did it on the command line, you can use Rancher to move the new namespace into your Project. Here's what my project looks like (don't worry about the other namespace for now):

Rancher - GitHub Runners Project

Next we're going to create a PersistentVolumeClaim. This can be done on the command line or in Rancher. I opted to go the Rancher route since it was easier. From the Projects/Namespaces page, click on the title of the project:

Rancher - GitHub Runners - Click Project Title

From this page click on the 'Import YAML' button:

Rancher - GitHub Runners - Click Import YAML

Paste in the following:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: dind
  namespace: docker-in-docker
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 50Gi

Make sure you've selected the 'Namespace: Import all resources into a specific namespace' radio button, and that your 'docker-in-docker' namespace is selected from the dropdown menu.

Rancher - Github Runners - Import YAML

You can adjust the storage size to whatever you feel comfortable with. As written, it will allocate 50Gi of space for your Docker in Docker pod. You can always enter the container to clear out unused Docker images and such.

Hit the Import button! On the 'Volumes' tab you should now see your volume!

Rancher - GitHub Runners - Volumes
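Whether you imported the manifest through Rancher or applied it with kubectl, you can also confirm the claim from the command line:

kubectl -n docker-in-docker get pvc dind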

Next we'll create a deployment for Docker in Docker. Again, I used the 'Import YAML' button for this. Make sure you have the 'Namespace: Import all resources into a specific namespace' radio button checked, and that your 'docker-in-docker' namespace is selected from the dropdown menu.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: dind
  namespace: docker-in-docker
spec:
  replicas: 1
  selector:
    matchLabels:
      workload: deployment-docker-in-docker-dind
  template:
    metadata:
      labels:
        workload: deployment-docker-in-docker-dind
    spec:
      containers:
      - command:
        - dockerd
        - --host=unix:///var/run/docker.sock
        - --host=tcp://0.0.0.0:2376
        env:
        - name: DOCKER_TLS_CERTDIR
          value: ""
        image: docker:19.03.12-dind
        imagePullPolicy: IfNotPresent
        name: dind
        resources: {}
        securityContext:
          privileged: true
          readOnlyRootFilesystem: false
        stdin: true
        tty: true
        volumeMounts:
        - mountPath: /var/lib/docker
          name: dind-storage
      volumes:
      - name: dind-storage
        persistentVolumeClaim:
          claimName: dind

In a nutshell this will set up a pod with a container that runs the Docker in Docker image. It tells the dockerd daemon inside the container where to put the socket file and to listen on TCP 0.0.0.0 port 2376. Also, by specifying DOCKER_TLS_CERTDIR as an empty environment variable, we tell it not to use TLS. Like the author of the original blog article, I have not specified any resources. As this server pretty much only handles my GitHub runners and one other small Kubernetes cluster, I didn't feel the need to constrain my pods. You're more than welcome to set up resource limits, but it's not something I cover here. At the bottom of the above YAML you'll notice I also specify the persistent volume claim I previously made, which allows this deployment to utilize that volume. Hit Import and you should see your deployment show up in the Rancher interface!

Rancher - GitHub Runners - DIND Deployments
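One assumption worth spelling out: the DOCKER_HOST address the runner deployment uses later on, tcp://dind.docker-in-docker:2376, only resolves if a Service named dind exists in the docker-in-docker namespace. If your cluster doesn't have one yet, a quick way to create it is to expose the deployment:

kubectl -n docker-in-docker expose deployment dind --name dind --port 2376 --target-port 2376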

Next I built a Docker image which contains the GitHub Runner application itself. You can use the original blog author's Docker image, or you can build one yourself and deploy it to your own private registry or Docker Hub. My Dockerfile is as follows:

FROM debian:buster-slim

ENV GITHUB_PAT ""
ENV GITHUB_OWNER ""
ENV GITHUB_REPOSITORY ""
ENV RUNNER_WORKDIR "_work"
ENV RUNNER_LABELS ""

RUN apt-get update \
    && apt-get install -y \
        curl \
        sudo \
        git \
        jq \
        iputils-ping \
    && apt-get clean \
    && rm -rf /var/lib/apt/lists/* \
    && useradd -m github \
    && usermod -aG sudo github \
    && echo "%sudo ALL=(ALL) NOPASSWD:ALL" >> /etc/sudoers \
    && curl https://download.docker.com/linux/static/stable/x86_64/docker-19.03.9.tgz --output docker-19.03.9.tgz \
    && tar xvfz docker-19.03.9.tgz \
    && cp docker/* /usr/bin/

USER github
WORKDIR /home/github

RUN GITHUB_RUNNER_VERSION=$(curl --silent "https://api.github.com/repos/actions/runner/releases/latest" | jq -r '.tag_name[1:]') \
    && curl -Ls https://github.com/actions/runner/releases/download/v${GITHUB_RUNNER_VERSION}/actions-runner-linux-x64-${GITHUB_RUNNER_VERSION}.tar.gz | tar xz \
    && sudo ./bin/installdependencies.sh

COPY --chown=github:github entrypoint.sh ./entrypoint.sh
RUN sudo chmod u+x ./entrypoint.sh

ENTRYPOINT ["/home/github/entrypoint.sh"]

A couple of things to note here. I also install Docker since we'll be using it to build and publish our own Docker images via GitHub Actions. Also note that this should automatically fetch the latest version of the GitHub runner and use it; I believe the runner daemon itself checks for updates every few days. I had to modify my entrypoint.sh slightly from the default since I am using GitHub Enterprise. Once my image was built, I pushed it to my private registry server.
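For reference, here's a minimal sketch of the kind of entrypoint.sh that works against GitHub Enterprise; treat it as an illustration rather than my exact script. GITHUB_HOST is an assumed variable pointing at your Enterprise hostname, and on GitHub Enterprise Server the REST API lives under /api/v3:

#!/bin/bash
# Sketch of an entrypoint.sh for a GitHub Enterprise runner (illustrative, not the exact script)
GITHUB_HOST=${GITHUB_HOST:-"github.example.com"}   # assumed variable: your GHES hostname
API_URL="https://${GITHUB_HOST}/api/v3/repos/${GITHUB_OWNER}/${GITHUB_REPOSITORY}"

# Exchange the PAT for a short-lived runner registration token
REG_TOKEN=$(curl -sX POST -H "Authorization: token ${GITHUB_PAT}" \
    "${API_URL}/actions/runners/registration-token" | jq -r .token)

# Register this pod as a runner for the repository
./config.sh --unattended \
    --url "https://${GITHUB_HOST}/${GITHUB_OWNER}/${GITHUB_REPOSITORY}" \
    --token "${REG_TOKEN}" \
    --name "$(hostname)" \
    --work "${RUNNER_WORKDIR}"

# Deregister the runner when the script exits (a production script should also handle SIGTERM)
cleanup() {
    REMOVE_TOKEN=$(curl -sX POST -H "Authorization: token ${GITHUB_PAT}" \
        "${API_URL}/actions/runners/remove-token" | jq -r .token)
    ./config.sh remove --unattended --token "${REMOVE_TOKEN}"
}
trap cleanup EXIT

./run.sh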

Next we'll create a new namespace for our runners. This can be done on the command line via:

kubectl create ns github-actions

Again, I recommend putting this new namespace in your GitHub Runners project in Rancher. Organization is awesome! Once you've done that, we'll need to create a new deployment for the runner(s)! I again utilized Rancher and the wonderful 'Import YAML' button to do this. This time however, make sure you select the 'github-actions' option under the 'Namespace' dropdown menu. Make sure you set the right Docker image as well (image: repository/github-actions-runner:latest is just a placeholder below)!

apiVersion: apps/v1
kind: Deployment
metadata:
  name: github-runner
  namespace: github-actions
  labels:
    app: github-runner
spec:
  replicas: 1
  selector:
    matchLabels:
      app: github-runner
  template:
    metadata:
      labels:
        app: github-runner
    spec:
      containers:
      - name: github-runner
        image: repository/github-actions-runner:latest
        env:
        - name: DOCKER_HOST
          value: tcp://dind.docker-in-docker:2376
        - name: GITHUB_OWNER
          value: $GITHUB_USERNAME
        - name: GITHUB_REPOSITORY
          value: $GITHUB_REPOSITORY_NAME
        - name: GITHUB_PAT
          valueFrom:
            secretKeyRef:
              name: github-actions-token
              key: pat

Replace $GITHUB_USERNAME and $GITHUB_REPOSITORY_NAME with your information.

Create a Personal Access Token for yourself within GitHub. This option can be found at Settings > Developer Settings > Personal Access Tokens. I just checked off 'repo' (which also selects its sub-options), then clicked Generate Token.

GitHub - Personal Access Token

You'll get a string of characters which is your token. Copy this, and we'll use it to create a secret within Kubernetes. You can use the Rancher UI to do this, with our favorite 'Import YAML' button! Make sure the 'github-actions' namespace is selected!

apiVersion: v1
stringData:
  pat: $YOUR_GITHUB_PERSONAL_ACCESS_TOKEN
kind: Secret
metadata:
  name: github-actions-token
  namespace: github-actions
type: Opaque
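If you'd rather not paste your token into a YAML file, the same secret can be created straight from the command line:

kubectl -n github-actions create secret generic github-actions-token --from-literal=pat=$YOUR_GITHUB_PERSONAL_ACCESS_TOKEN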

Once you're done, your new deployment should show up in the 'github-actions' namespace area!

Rancher - GitHub Runners - Project

The runner should also automatically show up under your repository's Settings > Actions page!

GitHub > Settings > Actions
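You can also check on the runner pod itself from the command line:

kubectl -n github-actions get pods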

I've set up 4-5 runners for the time being, but I know I will have a lot more for my other projects! One thing I do wish is that runners weren't repository-specific, or that they could just be deployed whenever an Action called for them. It seems kind of silly to have to have at least one dedicated runner per repository; you'd think a runner could handle many repositories. For the time being though, this is an excellent solution for self-hosters who use GitHub Actions!
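If you want more than one runner for the same repository, scaling the deployment is one option, since each pod registers itself as a separate runner:

kubectl -n github-actions scale deployment github-runner --replicas=4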

Resources