Currently OpenVSCode Server is released with support for x86_64 architectures only. Thankfully it's quite easy to get it going on a Raspberry Pi (4B in my case).

  1. Log in to your Raspberry Pi. I created a bin folder in my home directory to store Node.js, but anywhere should work.
  2. Download the Node.js arm64 binaries with wget (this fetches the latest LTS version).
  3. Uncompress the archive - tar xvf node-v14.18.1-linux-arm64.tar.xz.
  4. Next I updated $PATH to make calling node and npm easier. I did this by editing my ~/.bashrc file - export PATH="/home/jimmy/bin/node-v14.18.1-linux-arm64/bin:$PATH".
  5. Next I downloaded the latest OpenVSCode Server release with wget and uncompressed it - tar zxvf openvscode-server-v1.61.0-linux-x64.tar.gz.
  6. You can either remove the node_modules folder inside the openvscode-server-v1.61.0-linux-x64 folder or just move it aside. I moved it aside - mv node_modules/ node_modules.old/.
  7. Now run npm install and it will reinstall the necessary Node.js modules. This may take a minute or two.
  8. Once the modules have been installed run - node ./out/server.js.
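For reference, steps 2 through 4 can be sketched as a small script. The download URL here is an assumption based on Node.js's standard dist layout (the post doesn't show it; check nodejs.org for the current LTS), and the run wrapper defaults to printing commands rather than executing them:

```shell
# Sketch of steps 2-4. DRY_RUN defaults to 1 (print only); set DRY_RUN=0
# to actually execute the commands.
run() { if [ "${DRY_RUN:-1}" = 0 ]; then "$@"; else echo "+ $*"; fi; }

NODE_VER=v14.18.1
NODE_PKG="node-${NODE_VER}-linux-arm64"

run mkdir -p "$HOME/bin"
run wget -P "$HOME/bin" "https://nodejs.org/dist/${NODE_VER}/${NODE_PKG}.tar.xz"
run tar xvf "$HOME/bin/${NODE_PKG}.tar.xz" -C "$HOME/bin"
# Then add this line to ~/.bashrc:
echo "export PATH=\"\$HOME/bin/${NODE_PKG}/bin:\$PATH\""
```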

That's it! You should see something like:

[jimmy@my.raspberry.pi ~/openvscode-server/openvscode-server-v1.61.0-linux-x64]$ node ./out/server.js
[main 2021-10-22T19:50:06.927Z] Web UI available at http://localhost:3000

I don't know if this is the best way to get OpenVSCode Server running on a Raspberry Pi, but it was the quickest and easiest way I could find! Ideally the Gitpod team will update their GitHub workflow (assuming that's what they're using to create releases) to build a release for arm64; I can't imagine it would be too difficult.

This guide will show you how to fetch a Let's Encrypt SSL certificate, run Gitpod as a Docker container using Docker Compose and configure it so it works with Traefik v2.x.

Previously, in order to run Gitpod you needed to use either Google Cloud Platform, which I found to be prohibitively expensive, or your own vanilla Kubernetes setup. Going the vanilla Kubernetes route was fun and a great learning experience, but it required running another server in my house that used more electricity and generated more heat. Thankfully Gitpod can now be run as a single Docker container!

The information and directions I found were pretty good and got me started, however there were some changes in how I deployed it. I figured it would be worth sharing my experience running Gitpod via a single Docker container with you all.

For what it's worth, I deployed this on a Dell R620 with 2x Intel Xeon E5-2640s, 220GB of RAM, a 500GB SSD boot drive, and 5.6TB of RAID5 storage for data. Obviously you probably don't need all that if you're just running Gitpod, but I'm also running roughly 70 other Docker containers with other services.

First clone the following Git repository as such:

git clone

This was a great starting point for me for running Gitpod via Docker, however there were a few files I had to update. The first one was the script that fetches the SSL certificate. I use NS1 to manage the DNS for my Gitpod domain. My file looks like this:

set -euox


mkdir -p $WORKDIR

sudo docker run -it --rm --name certbot \
    -v $WORKDIR/etc:/etc/letsencrypt \
    -v $WORKDIR/var:/var/lib/letsencrypt \
    -v $(pwd)/secrets/nsone.ini:/etc/nsone.ini:ro \
        certbot/dns-nsone certonly \
            -v \
            --agree-tos --no-eff-email \
            --email $EMAIL \
            --dns-nsone \
            --dns-nsone-credentials /etc/nsone.ini \
            --dns-nsone-propagation-seconds 30 \
            -d \
            -d \* \
            -d \*

sudo find $WORKDIR/etc/live -name "*.pem" -exec sudo cp -v {} $(pwd)/certs \;
sudo chown -Rv $USER:$USER $(pwd)/certs
chmod -Rv 700 $(pwd)/certs

sudo rm -rfv $WORKDIR

openssl dhparam -out $(pwd)/certs/dhparams.pem 2048

You'll see I adjusted the top bit to get rid of the DOMAIN variable, and I swapped out the instances where it was used ($DOMAIN) for my actual domain. I also had to escape the * characters used in the docker run command, because the script wouldn't execute properly with the asterisks in place.

Next you'll likely need to create a secrets file to store your API key for the DNS manager you're using, in my case NS1. I created this in the secrets/ directory as nsone.ini. It looks something like this (obviously the key is made up here):

# NS1 API credentials used by Certbot
dns_nsone_api_key = dhsjkas8d7sd7f7s099n

Now we can generate our certificates by running this script. The script will create (and remove once done) the necessary DNS records and fetch an SSL certificate from Let's Encrypt.

./ <email>

Once this is done, you'll see the SSL certificate files in your ./certs directory. There should be five (5) files:


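To sanity-check the certificate, openssl can print its subject and validity window. The fullchain.pem name is certbot's standard layout (an assumption here); the block below generates a throwaway self-signed certificate so the command is runnable anywhere - point it at your real file instead:

```shell
# Generate a throwaway self-signed cert (stand-in for certs/fullchain.pem):
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=gitpod.example.test" \
    -keyout /tmp/example-key.pem -out /tmp/example-cert.pem -days 90 2>/dev/null
# Print the subject and expiry date - run this against your real cert:
openssl x509 -in /tmp/example-cert.pem -noout -subject -enddate
```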
Now you can either run the file as such:

./ <domain> <dns server>

or set up a new service in a docker-compose.yml file. The following is what I have in my Docker Compose file; I recommend you review it carefully so it works in your environment, rather than just copying and pasting.

    container_name: gitpod
      - traefik.http.routers.gitpod.rule=Host(``) || HostRegexp(``,`{subdomain:[A-Za-z0-9]+}`,`{subdomain:[A-Za-z0-9-_]+}`)
      - traefik.http.routers.gitpod.entrypoints=websecure
      - traefik.http.routers.gitpod.service=gitpod
      - traefik.http.routers.gitpod.tls=true
      - DNSSERVER=
      - /etc/localtime:/etc/localtime:ro
      - /run/containerd/containerd.sock:/run/containerd/containerd.sock
      - ${DOCKER_CONF_DIR}/gitpod/values:/values
      - ${DOCKER_CONF_DIR}/gitpod/certs:/certs
      - gitpod-docker:/var/gitpod/docker
      - gitpod-docker-registry:/var/gitpod/docker-registry
      - gitpod-minio:/var/gitpod/minio
      - gitpod-mysql:/var/gitpod/mysql
      - gitpod-workspaces:/var/gitpod/workspaces
      - production
      - traefik
    restart: unless-stopped
      - SYS_PTRACE
        driver: local
        driver: local
        driver: local
        driver: local
        driver: local
        driver: bridge
            - subnet:

You'll see I use ${DOCKER_CONF_DIR} there, which is an environment variable (stored in my .env file) that points to /mnt/data/docker/config on my server. You can set up something similar or hardcode the path in your docker-compose file. Whatever you do, make sure you copy those five (5) SSL files I previously mentioned into a new directory named certs within the gitpod directory. For example, my SSL certificate files are located at /mnt/data/docker/config/gitpod/certs.
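For reference, here's roughly what the relevant .env line looks like. Note that since the compose file references ${DOCKER_CONF_DIR}/gitpod/certs, the variable points at the parent of the gitpod directory; the path is an example, so adjust it to your environment:

```shell
# .env - read by docker-compose from the same directory (path is an example)
DOCKER_CONF_DIR=/mnt/data/docker/config
```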

You'll also need to create another directory within the gitpod directory called values. So for me this is /mnt/data/docker/config/gitpod/values. Within there we're going to create three (3) more YAML files.


In the first file, auth-providers.yaml, I have set up my GitHub OAuth application details. In order to log in to your Gitpod instance you'll need to set this file up. I have a GitHub Enterprise instance that I will be using for authentication. My auth-providers.yaml file looks like this:

- id: "GitHub"
  host: ""
  type: "GitHub"
    clientId: "7d73h3b933829d9"
    clientSecret: "asu8a9sf9h89a9892n2n209201934b8334uhnraf987"
    callBackUrl: ""
    settingsUrl: ""
  description: ""
  icon: ""

The above is an example and will need to be adjusted accordingly. You can also set it up with other providers, such as your own self-hosted GitLab instance.

The next file minio-secrets.yaml needs to contain a username and password that will be used for the MinIO instance that will run within the k3s Kubernetes cluster in the Gitpod Docker container. I used the following command to generate some random strings:

openssl rand -hex 32

I ran that twice, once to create a string for the username and again for the password. Your minio-secrets.yaml should look like this:

  accessKey: 9d0d6aa1c9d9981fadc103a9e3a5bb56929df51de22439ab1410249c879429b1
  secretKey: 7f0f8ccd7219a1ef87cd30d33751469a491c54df062c8ca28517602576725276

Obviously, replace those strings with whatever comes out of running that openssl command.
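Both strings can be generated and written in one go. This is a sketch; the YAML's top-level key is elided in the post, so only the two indented lines shown above are written:

```shell
# Generate two 64-character hex strings and write them as minio-secrets.yaml.
ACCESS_KEY=$(openssl rand -hex 32)
SECRET_KEY=$(openssl rand -hex 32)
cat > minio-secrets.yaml <<EOF
  accessKey: ${ACCESS_KEY}
  secretKey: ${SECRET_KEY}
EOF
```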

Now we need to create the mysql.yaml with the following contents:

  host: mysql
  port: 3306
  password: test

Once that is all set we can start up the Gitpod Docker container! I run:

docker-compose -p main up -d gitpod

You can adjust this command to your environment; for example, you may not need the -p main bit. Once you run the above command it'll take a bit for things to get all set up and running. What I do (since I'm impatient :stuck_out_tongue_closed_eyes:) is run this command, which will show the status of the k3s Kubernetes cluster being set up in the Gitpod Docker container:

docker exec gitpod kubectl get all --all-namespaces

You should see output similar to:

NAMESPACE     NAME                                          READY   STATUS      RESTARTS   AGE
kube-system   pod/local-path-provisioner-7c458769fb-4zxlq   1/1     Running     0          13d
kube-system   pod/coredns-854c77959c-pqq94                  1/1     Running     0          13d
kube-system   pod/metrics-server-86cbb8457f-84kjc           1/1     Running     0          13d
default       pod/svclb-proxy-6gxms                         2/2     Running     0          13d
default       pod/blobserve-6bdd97d5dc-dzfnd                1/1     Running     0          13d
default       pod/dashboard-859c9bf868-mhvv7                1/1     Running     0          13d
default       pod/ws-manager-84486dc88c-p9dqt               1/1     Running     0          13d
default       pod/ws-scheduler-ff4d8d9dd-2wf28              1/1     Running     0          13d
default       pod/registry-facade-pvnpn                     1/1     Running     0          13d
default       pod/content-service-656fd85977-dkrvp          1/1     Running     0          13d
default       pod/theia-server-568fb48db5-fhknk             1/1     Running     0          13d
default       pod/registry-65ff9d5744-pxx96                 1/1     Running     0          13d
default       pod/minio-84fcc5d488-zcdj8                    1/1     Running     0          13d
default       pod/ws-proxy-5d5cd8fc64-tp97v                 1/1     Running     0          13d
default       pod/image-builder-7d97c4b4fb-wdb9l            2/2     Running     0          13d
default       pod/proxy-85b684df9b-fvl77                    1/1     Running     0          13d
default       pod/ws-daemon-xdnmg                           1/1     Running     0          13d
default       pod/node-daemon-rn5xs                         1/1     Running     0          13d
default       pod/messagebus-f98948794-gqcqp                1/1     Running     0          13d
default       pod/mysql-7cbb9c9586-l8slq                    1/1     Running     0          13d
default       pod/gitpod-helm-installer                     0/1     Completed   0          13d
default       pod/ws-manager-bridge-69856554ff-wxqw9        1/1     Running     0          13d
default       pod/server-84cf48b766-pt9gp                   1/1     Running     0          13d

NAMESPACE     NAME                      TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                                 AGE
default       service/kubernetes        ClusterIP       <none>        443/TCP                                 13d
kube-system   service/kube-dns          ClusterIP      <none>        53/UDP,53/TCP,9153/TCP                  13d
kube-system   service/metrics-server    ClusterIP     <none>        443/TCP                                 13d
default       service/dashboard         ClusterIP   <none>        3001/TCP                                13d
default       service/ws-manager        ClusterIP    <none>        8080/TCP                                13d
default       service/theia-server      ClusterIP   <none>        80/TCP                                  13d
default       service/server            ClusterIP   <none>        3000/TCP,9500/TCP                       13d
default       service/ws-proxy          ClusterIP   <none>        8080/TCP                                13d
default       service/mysql             ClusterIP      <none>        3306/TCP                                13d
default       service/minio             ClusterIP   <none>        9000/TCP                                13d
default       service/registry          ClusterIP    <none>        443/TCP                                 13d
default       service/registry-facade   ClusterIP   <none>        3000/TCP                                13d
default       service/messagebus        ClusterIP   <none>        5672/TCP,25672/TCP,4369/TCP,15672/TCP   13d
default       service/blobserve         ClusterIP      <none>        4000/TCP                                13d
default       service/content-service   ClusterIP    <none>        8080/TCP                                13d
default       service/image-builder     ClusterIP   <none>        8080/TCP                                13d
default       service/db                ClusterIP     <none>        3306/TCP                                13d
default       service/proxy             LoadBalancer   80:31895/TCP,443:30753/TCP              13d

default     daemonset.apps/svclb-proxy       1         1         1       1            1           <none>          13d
default     daemonset.apps/registry-facade   1         1         1       1            1           <none>          13d
default     daemonset.apps/ws-daemon         1         1         1       1            1           <none>          13d
default     daemonset.apps/node-daemon       1         1         1       1            1           <none>          13d

NAMESPACE     NAME                                     READY   UP-TO-DATE   AVAILABLE   AGE
kube-system   deployment.apps/local-path-provisioner   1/1     1            1           13d
kube-system   deployment.apps/coredns                  1/1     1            1           13d
kube-system   deployment.apps/metrics-server           1/1     1            1           13d
default       deployment.apps/blobserve                1/1     1            1           13d
default       deployment.apps/dashboard                1/1     1            1           13d
default       deployment.apps/ws-manager               1/1     1            1           13d
default       deployment.apps/ws-scheduler             1/1     1            1           13d
default       deployment.apps/content-service          1/1     1            1           13d
default       deployment.apps/theia-server             1/1     1            1           13d
default       deployment.apps/minio                    1/1     1            1           13d
default       deployment.apps/ws-proxy                 1/1     1            1           13d
default       deployment.apps/image-builder            1/1     1            1           13d
default       deployment.apps/proxy                    1/1     1            1           13d
default       deployment.apps/registry                 1/1     1            1           13d
default       deployment.apps/messagebus               1/1     1            1           13d
default       deployment.apps/mysql                    1/1     1            1           13d
default       deployment.apps/ws-manager-bridge        1/1     1            1           13d
default       deployment.apps/server                   1/1     1            1           13d

NAMESPACE     NAME                                                DESIRED   CURRENT   READY   AGE
kube-system   replicaset.apps/local-path-provisioner-7c458769fb   1         1         1       13d
kube-system   replicaset.apps/coredns-854c77959c                  1         1         1       13d
kube-system   replicaset.apps/metrics-server-86cbb8457f           1         1         1       13d
default       replicaset.apps/blobserve-6bdd97d5dc                1         1         1       13d
default       replicaset.apps/dashboard-859c9bf868                1         1         1       13d
default       replicaset.apps/ws-manager-84486dc88c               1         1         1       13d
default       replicaset.apps/ws-scheduler-ff4d8d9dd              1         1         1       13d
default       replicaset.apps/content-service-656fd85977          1         1         1       13d
default       replicaset.apps/theia-server-568fb48db5             1         1         1       13d
default       replicaset.apps/minio-84fcc5d488                    1         1         1       13d
default       replicaset.apps/ws-proxy-5d5cd8fc64                 1         1         1       13d
default       replicaset.apps/image-builder-7d97c4b4fb            1         1         1       13d
default       replicaset.apps/proxy-85b684df9b                    1         1         1       13d
default       replicaset.apps/registry-65ff9d5744                 1         1         1       13d
default       replicaset.apps/messagebus-f98948794                1         1         1       13d
default       replicaset.apps/mysql-7cbb9c9586                    1         1         1       13d
default       replicaset.apps/ws-manager-bridge-69856554ff        1         1         1       13d
default       replicaset.apps/server-84cf48b766                   1         1         1       13d

A lot of the items will likely read 'Creating' or 'Initializing'. I didn't think ahead enough to grab the output while my instance was actually being set up, so the output above is what mine looks like when everything is done.

If everything went smoothly your output from that command should look like mine from above. You should also be able to visit your Gitpod instance in your web browser.


Assuming you've setup your authentication provider properly you should be able to login and start setting up workspaces!

Gitpod

While everything is working, there are a couple of things I want to figure out to have a "better" instance of Gitpod running.

  • Move the Docker volumes onto my 5.6TB RAID5. Right now they're sitting on my SSD, which I prefer to use for my boot drive and mostly static files.
  • Figure out how to set a better password for MySQL.
  • Figure out how to run a newer version of Gitpod. Right now it's using the latest tag which is version 0.7.0.
  • I seem to have issues redeploying the Docker container while leaving the volumes as-is. This forces me to start completely fresh, which is no good.


For whatever reason, I find gaps in the IDs between VMs annoying. I don't like seeing gaps such as:

VM100 - 100
VM101 - 101
VM103 - 103
VM104 - 104

Thankfully there's a way to adjust the IDs of VMs. I would recommend taking a backup of the VM and its configuration file(s) beforehand. Once you're ready, run the following command to display information about your logical volumes.

lvs -a

This should display something similar to:

  LV              VG   Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  vm-100-disk-0   data -wi-ao---- 120.00g                                                    
  vm-101-disk-0   data -wi-ao----  80.00g                                                    
  vm-101-disk-1   data -wi-ao---- 120.00g                                                    
  vm-102-disk-0   data -wi-a----- 120.00g                                                    
  vm-103-disk-0   data -wi-ao----  80.00g                                                    
  vm-104-disk-0   data -wi-ao----  80.00g                                                    
  vm-105-disk-0   data -wi-ao---- 320.00g                                                    
  vm-105-disk-1   data -wi-ao---- 120.00g                                                    
  vm-105-disk-2   data -wi-ao---- 120.00g                                                    
  vm-106-disk-0   data -wi-ao----  80.00g                                                    
  vm-107-disk-0   data -wi-ao---- 120.00g                                                    
  data            pve  twi-a-tz--  59.66g             0.00   1.59                            
  [data_tdata]    pve  Twi-ao----  59.66g                                                    
  [data_tmeta]    pve  ewi-ao----   1.00g                                                    
  [lvol0_pmspare] pve  ewi-------   1.00g                                                    
  root            pve  -wi-ao----  27.75g                                                    
  swap            pve  -wi-ao----   8.00g 

Next, determine which VM's ID you want to change. In the following commands I will be changing VM ID 101 to 100.

This command will update the name of the logical volume:

lvrename data/vm-101-disk-0 vm-100-disk-0

Next we want to update the ID in the VM's configuration file. A blanket s/101/100/g would also rewrite any other occurrence of 101 in the file (a disk size or MAC address, for example), so it's safer to target just the disk references:

sed -i "s/vm-101-disk/vm-100-disk/g" /etc/pve/qemu-server/101.conf

After that we want to rename the VM's configuration file:

mv /etc/pve/qemu-server/101.conf /etc/pve/qemu-server/100.conf

Once those commands have been run you can start the VM up again.
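The three steps can be wrapped into a small helper. This is a sketch - the volume group name data and the qemu-server config path are taken from the commands above - with a dry-run mode so you can inspect what would run before applying it. A VM with more than one disk would need an lvrename per disk:

```shell
# rename_vmid OLD NEW - rename the VM's LV, rewrite its config, move the file.
# Set DRY_RUN=1 to print the commands instead of executing them.
rename_vmid() {
  local old=$1 new=$2
  run() { if [ "${DRY_RUN:-0}" = 1 ]; then echo "+ $*"; else "$@"; fi; }
  run lvrename "data/vm-${old}-disk-0" "vm-${new}-disk-0"
  run sed -i "s/vm-${old}-disk/vm-${new}-disk/g" "/etc/pve/qemu-server/${old}.conf"
  run mv "/etc/pve/qemu-server/${old}.conf" "/etc/pve/qemu-server/${new}.conf"
}

DRY_RUN=1 rename_vmid 101 100   # preview only; drop DRY_RUN=1 to apply
```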

Dependabot is a really neat tool that helps keep your dependencies secure and up to date. It creates pull requests to your Git repositories with the updated dependencies. It works with a wide variety of package managers and languages like NPM/Yarn, Composer, Python, Ruby, Docker, Rust, and Go.

Since I use GitHub Enterprise, a little extra work needs to be done in order to self-host Dependabot. After fiddling with it for a few days, I finally got it working, so I figured it would be worth writing up and sharing with everyone!

My setup consists of a server dedicated to running Docker containers, however any AMD64 system where Docker can run should do the trick. First I cloned the dependabot-script Git repository (I ran this in my /home/jimmy/Developer/ directory - but you can put it wherever you'd like):

git clone

Next, I pulled the dependabot-core Docker image:

docker pull dependabot/dependabot-core

Once the Docker image has been pulled we need to run it to install some dependencies:

docker run -v "$(pwd):/home/dependabot/dependabot-script" -w /home/dependabot/dependabot-script dependabot/dependabot-core bundle install -j 3 --path vendor

Make sure you're in the cloned dependabot-script directory (/home/jimmy/Developer/ directory for me) when you run that. It shouldn't take very long to run.

Next we need to make a little change to fix an issue which seems to prevent Dependabot from running properly. So let's run this:

docker run -d -v "$(pwd):/home/dependabot/dependabot-script" \
    -w /home/dependabot/dependabot-script \
    dependabot/dependabot-core sleep 300

This will start up Dependabot as a detached container and it'll sleep for 300 seconds before exiting. This should give us enough time to run a couple commands. Once the above command has been run, use the following command to enter into the container:

docker ps | grep dependabot-core   # get the ID of the container

docker exec -it $containerId bash

You should now be inside your Dependabot container. I found this issue on GitHub, which allowed me to fix and run Dependabot without issue. We need to edit the Gemfile, which can be done inside or outside the container - it's up to you. I initially did it from inside the container, but either works. Since nano wasn't available I had to install it first (I didn't check whether vi or vim were available, but if not, you can take a similar approach). From within the container I ran:

apt -y update && apt -y install nano
nano Gemfile

I then edited:

gem "dependabot-omnibus", "~> 0.118.8"

to:

gem "dependabot-omnibus", "~> 0.130.2"

Save and exit. Then run:

bundle _1.17.3_ install
bundle _1.17.3_ update

Once that was done, I exited the container and attempted to run Dependabot normally.

docker run --rm -v "$(pwd):/home/dependabot/dependabot-script" \
    -w /home/dependabot/dependabot-script \
    -e PROJECT_PATH=jimmybrancaccio/emil-scripts \
    -e PACKAGE_MANAGER=composer \
    dependabot/dependabot-core bundle exec ruby ./generic-update-script.rb

I recommend setting up a personal access token (I only checked off the repo checkbox - but even that might not be needed). This allows you to make more requests to the API; without one I ran into API rate-limiting quickly. If you do create a personal access token, replace $GITHUB_ACCESS_TOKEN with your token; otherwise just remove that whole line. Next you'll want to replace $GHE_HOSTNAME with your actual GitHub Enterprise hostname. You can either replace $GITHUB_ENTERPRISE_ACCESS_TOKEN with a personal access token from your own account on your GitHub Enterprise instance, or do what I did: create a separate account for Dependabot and generate a personal access token for that account. After that, you just need to make sure PROJECT_PATH and PACKAGE_MANAGER have proper values.

I wrote a very simple Bash script with essentially a bunch of those docker run "blocks", one for each repository that I wanted Dependabot to monitor. I also set up a cron job for the script to run once a day. You can set that part up as you see fit though.
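That script is essentially the following sketch. The repository path and package manager values are placeholders to replace with your own, and DRY_RUN=1 previews the docker commands without running them:

```shell
# One Dependabot run per repository; print instead of execute when DRY_RUN=1.
run_dependabot() {
  project=$1; manager=$2
  set -- docker run --rm \
    -v "$(pwd):/home/dependabot/dependabot-script" \
    -w /home/dependabot/dependabot-script \
    -e "PROJECT_PATH=$project" \
    -e "PACKAGE_MANAGER=$manager" \
    dependabot/dependabot-core bundle exec ruby ./generic-update-script.rb
  if [ "${DRY_RUN:-0}" = 1 ]; then echo "$@"; else "$@"; fi
}

# One line per repository to monitor (placeholders):
DRY_RUN=1 run_dependabot jimmybrancaccio/emil-scripts composer
```

I run the real thing (without DRY_RUN) from a daily cron job.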


One of the final pieces of software I still hadn't been able to install on my new MacBook Pro M1 was kubectl, also known as kubernetes-cli. Today I came across this issue on GitHub, in which someone noted the architecture is just missing from one of the files, and adding it in allows it to build properly. Using my limited knowledge of how Homebrew formulae work, I was able to get it working.

First edit the formula for kubernetes-cli:

brew edit kubernetes-cli

Then at about line 25 add patch :DATA so it looks like:

  uses_from_macos "rsync" => :build

  patch :DATA

  def install
    # Don't dirty the git tree
    rm_rf ".brew_home"

Then go to the bottom of the file and add:

index bef1d837..154eecfd 100755
--- a/hack/lib/
+++ b/hack/lib/
@@ -49,6 +49,7 @@ readonly KUBE_SUPPORTED_CLIENT_PLATFORMS=(
+  darwin/arm64

Save and exit the file. The full file looks like this:

class KubernetesCli < Formula
  desc "Kubernetes command-line interface"
  homepage ""
  url "",
      tag:      "v1.20.1",
      revision: "c4d752765b3bbac2237bf87cf0b1c2e307844666"
  license "Apache-2.0"
  head ""

  livecheck do
    url :head

  bottle do
    cellar :any_skip_relocation
    sha256 "0b4f08bd1d47cb913d7cd4571e3394c6747dfbad7ff114c5589c8396c1085ecf" => :big_sur
    sha256 "f49639875a924ccbb15b5f36aa2ef48a2ed94ee67f72e7bd6fed22ae1186f977" => :catalina
    sha256 "4a3eaef3932d86024175fd6c53d3664e6674c3c93b1d4ceedd734366cce8e503" => :mojave

  depends_on "go" => :build

  uses_from_macos "rsync" => :build
  patch :DATA
  def install
    # Don't dirty the git tree
    rm_rf ".brew_home"

    # Make binary
    system "make", "WHAT=cmd/kubectl"
    bin.install "_output/bin/kubectl"

    # Install bash completion
    output = Utils.safe_popen_read("#{bin}/kubectl", "completion", "bash")
    (bash_completion/"kubectl").write output

    # Install zsh completion
    output = Utils.safe_popen_read("#{bin}/kubectl", "completion", "zsh")
    (zsh_completion/"_kubectl").write output

    # Install man pages
    # Leave this step for the end as this dirties the git tree
    system "hack/"
    man1.install Dir["docs/man/man1/*.1"]

  test do
    run_output = shell_output("#{bin}/kubectl 2>&1")
    assert_match "kubectl controls the Kubernetes cluster manager.", run_output

    version_output = shell_output("#{bin}/kubectl version --client 2>&1")
    assert_match "GitTreeState:\"clean\"", version_output
    if build.stable?
      assert_match stable.instance_variable_get(:@resource)
index bef1d837..154eecfd 100755
--- a/hack/lib/
+++ b/hack/lib/
@@ -49,6 +49,7 @@ readonly KUBE_SUPPORTED_CLIENT_PLATFORMS=(
+  darwin/arm64

Run this command to install kubernetes-cli:

brew install --build-from-source kubernetes-cli

Once completed you should be able to run the following command to get the version:

kubectl version
Client Version: version.Info{Major:"1", Minor:"20+", GitVersion:"v1.20.1-dirty", GitCommit:"c4d752765b3bbac2237bf87cf0b1c2e307844666", GitTreeState:"dirty", BuildDate:"2021-01-04T16:45:01Z", GoVersion:"go1.16beta1", Compiler:"gc", Platform:"darwin/arm64"}

You may get some further output about there being a connection issue, but that's okay if you haven't setup your Kubernetes configuration file yet.

macOS Big Sur - About This Mac

I feel like I just barely updated my MacPro to macOS Catalina, and here I am getting it updated to macOS Big Sur!

Thankfully the process wasn't too bad. Of note, my MacPro was a 4,1 upgraded to 5,1 and I do not have any Bluetooth or WiFi cards.

Pre-Install Notes

  • Make sure you have or have previously run the APFS ROM Patcher.
  • At least one 32GB+ USB thumbdrive - make sure it's of decent quality / a brand name.
  • A boot screen / boot picker.
  • SIP and authenticated root disabled.
  • Updated nvram boot-args.

It's worth noting I used 2x 16GB USB thumbdrives, but I've noted above to use a 32GB thumbdrive.

Disabling SIP and authenticated root

I figured it would be worth including this information so you don't have to dig through Google results. You'll need to either boot into recovery mode or a USB installer to do this. Either way, open Terminal and run these commands.

csrutil status # If this returns disabled you're good, move on.
csrutil authenticated-root status # If this returns disabled you're good, move on.

If either of the above commands didn't return disabled, then run the following:

csrutil disable
csrutil authenticated-root disable

You can re-run the first 2 commands to ensure the result is 'disabled'.

Update nvram boot-args

While you're also in Terminal run the following:

nvram boot-args="-v -no_compat_check"

Upgrading/Installing macOS Big Sur

Alright so first things first, we need to create a bootable USB installer. You should be able to do this all from your MacPro without needing to use any other systems, but it's possible you may need a secondary Mac.

Let's grab the tool we need to patch our macOS Big Sur installer. Visit this page on GitHub and click on the green button labeled 'Code'. Select the 'Download ZIP' option and a zip file will download. A side note: I have a separate administrator account on my MacBook Pro, so I placed the unzipped directory (named bigmac-master) into a directory accessible by all users - in this case I used /Users/Shared. You can put it wherever an administrative user can access it.

Next take your USB thumbdrive and erase it in Disk Utility. You can name it whatever you'd like, just make sure 'Scheme' is set to 'GUID Partition Map'. Once that has finished, you can close out of Disk Utility.

Open Terminal, then go into the directory where you've placed the patcher tool. As an example:

cd /Users/Shared/bigmac-master

Now run the following command which will setup your bootable macOS Big Sur installer on your USB thumbdrive:

sudo ./

You'll be asked (verbiage may differ slightly):

📦 Would you like to download Big Sur macOS 11.1 (20C69)? [y]:

Hit y and then Enter. This will download the macOS Big Sur installer. It's about 12GB, so it may take a bit of time. You'll then be asked:

🍦 Would you like to create a USB Installer, excluding thumb drives [y]:

Don't worry about the 'excluding thumb drives' verbiage, but remember you should be using a thumbdrive of decent quality / a brand name. Hit y and then Enter. It may take some time, but it will do the following:

  • Create 3 partitions on the USB thumbdrive.
    • The first partition will be for a copy of the patcher tools.
    • The second partition will be for the macOS Big Sur installer.
    • The third partition will be free space.

Here's where I messed up: either I'm blind or no recommended size was given for the USB installer device, so I figured my 16GB thumbdrive would be fine. It wasn't. I had to edit the script. At line 131 I had to change:

diskutil partitionDisk "$disk" GPT jhfs+ bigmac_"$disk$number" 1g jhfs+ installer_"$disk$number" 16g jhfs+ FreeSpace 0

to:

diskutil partitionDisk "$disk" GPT jhfs+ bigmac_"$disk$number" 1g jhfs+ installer_"$disk$number" 13.5g jhfs+ FreeSpace 0

This reduces the partition size used for the macOS installer so everything fits on a 16GB drive. Once it completes you'll see some further instructions which you'll want to follow.

The previously provided output states that you should reboot the system while holding down the Option key. This gets you into your boot selector. It's worth noting that I had to hold the Esc key to get to my boot selection screen. Whichever key you hold, select the 'macOS Big Sur Installer' option. Once it's loaded up, open Terminal from the menu bar. Run the following commands to patch the installer:

cd /Volumes/bigmac

Then close out of Terminal so you're back at the window which has an 'Install macOS Big Sur' option. Click on that and go through the process. Since I was upgrading I selected 'Macintosh HD' as my disk. Continue on and it'll start installing/upgrading. During this process the system will reboot three times.

You may end up at the login screen once it has completed. If this is the case, reboot the system again into the boot selector. Select the 'macOS Big Sur Installer' option. Once back in the installer environment, open Terminal and run the following:

cd /Volumes/bigmac

Hopefully everything goes smoothly! However if you happen to see the following (or very similar) at the end of the script run, we'll need to create another USB key (or reuse your current key):

📸 Attempting to delete snapshot =>
diskutil apfs deleteSnapshot disk4s5 -uuid 0FFE862F-86C8-43AE-A1E0-DFFF7A6D7F79
Deleting APFS Snapshot 0FFE862F-86C8-43AE-A1E0-DFFF7A6D7F79 "" from APFS Volume disk4s5
Started APFS operation
Error: -69863: Insufficient privileges

If you see that, you won't be able to boot back into macOS Big Sur - you'll just get an endless stream of kernel panics and reboots. Thankfully there's another tool out there that can resolve this. As I couldn't boot my MacPro I had to swap over to my MacBook Pro. From there I downloaded a copy of this file to my /Users/Shared directory. I then wrote that image to my second USB thumbdrive, which I had previously erased and named 'usb' with the Scheme set to GUID Partition Map.

sudo asr -source /Users/Shared/BigSurBaseSystemfix.dmg -erase -noverify -target /Volumes/usb

From there, you'll want to shut down your MacPro, remove the first USB thumbdrive, insert the second USB thumbdrive and reboot into your boot selection screen. From there pick the 'macOS Big Sur Installer' option. Once booted into the installer, go to the menu bar, select 'Utilities', then select the 'BigSurFixes delete snapshot' option. A Terminal window will pop up and you'll be asked a couple questions. To be honest I can't remember what the exact questions are and I can't find them in the tool's Git repository, but they should be self-explanatory. Once that has completed running, you can reboot the system. It should boot back into macOS Big Sur now!


Presently you're unable to install Go via Homebrew on the new M1 Mac systems. While it's expected to be working in the beginning of 2021, I personally couldn't wait that long as there are some tools I use on a daily basis that require Go. Thankfully there's a method you can follow to get Go installed on your M1 Mac in the meantime.

First ensure that you have git installed and that you have a copy of the current go package. For the go package I downloaded this file to my Downloads directory. It basically acts as a bootstrap environment so we can build our own version of Go that is native to ARM64. Make sure you uncompress it; it should result in a new directory named 'go'.

Next we need to get a copy of the current Go source. We can do this by running:

mkdir ~/Developer && cd ~/Developer
git clone

Then navigate into the cloned repository and checkout the master branch:

cd ~/Developer/go
git checkout master

Next we need to compile a version of Go which will work on our M1 system. In the following command you'll want to adjust $USERNAME so it's your username.

arch --x86_64 env GOROOT_BOOTSTRAP=/Users/$USERNAME/Downloads/go GODEBUG=asyncpreemptoff=1 GOOS=darwin GOARCH=arm64 ./bootstrap.bash

I've moved the built binaries into my Homebrew installation but this isn't required. Don't forget to update $USERNAME to your username.

cd /opt/homebrew/Cellar && mkdir go && cd go && mkdir 1.15.6 && cd 1.15.6 && mkdir bin && mkdir libexec
cd bin && cp -v /Users/$USERNAME/Developer/*
cd ../libexec && cp -Rv /Users/$USERNAME/Developer/* .
cd /opt/homebrew/bin
ln -s ../Cellar/go/1.15.6/bin/gofmt .
ln -s ../Cellar/go/1.15.6/bin/go .

I have this set in my .zshrc, which allows the binaries to work from Homebrew:

export PATH="/opt/homebrew/bin:/opt/homebrew/sbin:$PATH"

If everything worked, the following command should return the Go version (your output may be a bit different, specifically the commit version):

$ go version
go version devel +e508c1c67b Fri Dec 11 08:18:17 2020 +0000 darwin/arm64

This article was put together using my .zsh_history and memory so there's a chance something may not work 100%. If that's the case please don't hesitate to leave a comment and let me know. I probably should have written this right after I did this myself, oops! 🙄

Apple has released new hardware which utilizes an ARM64-based chip. This means a lot of software provided by Homebrew doesn't work. A couple of these include Rust and Go (which I will cover installing in another post). Thankfully both of these vendors have updated their software to work with the new chip from Apple. The downside is that Homebrew itself isn't yet supported in the new M1 environment, and it requires a little extra command-line work. I suspect this shouldn't be much of an issue for users of Homebrew though! This document assumes you already have Homebrew installed.

First bring up whatever terminal application you use. I'm using one that's native so I know for sure I am building within and using an M1/ARM64-native application. Of note, I personally create two accounts on every Mac. The first user is an administrator cleverly named administrator, and my second account is the one I use day to day, which does not have administrator privileges. So my first command on the command line is:

su administrator

From there I run another command to edit the formula for Rust:

brew edit rust

Around line 37-38 I add depends_on "ninja" => :build, so it's right after the depends_on "pkg-config" line. This was done after reading this comment on GitHub. Now save and exit the file.

Run the following command to build and install Rust:

brew install -s --HEAD rust

It took about 30 minutes to build on my MacBook Pro M1.

% rustc -V    
rustc 1.50.0-nightly (2225ee1b6 2020-12-11)

% cargo -V
cargo 1.50.0


Gitpod is a really neat tool that lets you work with your Git repositories in a web browser-based IDE. Gitpod is offered as a hosted solution or you can self-host it. Of course self-hosting is the way to go! Unfortunately it's not (at least right now) as easy to set up as most self-hosted apps, but this guide aims to walk you through getting a Gitpod instance set up for yourself.

This guide assumes you already have a Kubernetes cluster set up. I personally set up a cluster using k3s, with one master node (4 CPU cores, 4GBs of RAM and 40GBs of disk space) and 4 worker nodes (each with 8 CPU cores, 16GBs of RAM and 250GBs of disk space). This guide also assumes you're using an external MySQL database, an external Docker registry and an external MinIO installation. I should also note that I am using GitHub Enterprise but this should work with and GitLab.

As someone who likes to keep things organized, the first thing I did was create a project via Rancher called Gitpod. I also created a namespace, gitpod. I ran the following command from my workstation, where I've set up kubectl with my Kubernetes cluster configuration.

kubectl create namespace gitpod

You should get the following output:

namespace/gitpod created

Rancher Projects

I then added that namespace to the Gitpod project. Next we need to clone the Gitpod repository to our local workstation. You can put the repository wherever you'd like; I have mine in /home/jimmy/Developer/

git clone

I use VS Code myself on my workstation, but use whatever you're most comfortable with. Open the new 'gitpod' folder in your editor. We need to setup our install!

Open the file charts/values.yaml. I recommend replacing the content of this file with this, as it's what was recommended to me. Once replaced, save the file. Now we can start adjusting it and filling in our own information.

On line 4, change it to version: 0.5.0. Next adjust line 5 (hostname: localhost) to your domain name. This would be what you use in your web browser to access your instance of Gitpod.

version: 0.5.0

We need to change the imagePrefix value as we're setting up a self-hosted installation. Adjust it as follows:


On line 5 (workspaceSizing), you can adjust your workspace settings. The only thing I adjusted was the limits; I set my memory limit to 4Gi. You can set this to whatever you feel comfortable with.

workspaceSizing:
  requests:
    cpu: "1m"
    memory: "2.25Gi"
    storage: "5Gi"
  limits:
    cpu: "5"
    memory: "4Gi"

Next on line 51 (db) you'll want to fill in your database information. You can use a hostname or IP address here for host.

db:
  port: 3306
  password: password1234

Next open the secrets/encryption-key.json file and create yourself a new key. I am not sure if this is required, but I figured it would be better to set something rather than leave what's in there, just in case. I used this website to generate a string.
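
If you'd rather generate a random string locally instead of using a website, openssl can do it (this is just my suggestion, not part of the original instructions):

```shell
# Generate a random 32-character hex string to use as the key material
openssl rand -hex 16
```

Paste the resulting string into the key file in place of the default value.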

Next configure the authProviders block. I am not sure if you can have both GitHub and GitLab at the same time, or both a GitHub and a GitHub Enterprise configuration; you're more than welcome to try it out. I have GitHub Enterprise, so I created an OAuth app and filled out the details. It looks something like this:

authProviders:
  - id: "GitHub-Enterprise"
    host: ""
    type: "GitHub"
    oauth:
      clientId: "6g5a657e145y51abc2ff"
      clientSecret: "9819537b4694ee6a46312t2dalw17345f8d5hgt"
      callBackUrl: ""
      settingsUrl: ""
    description: "GitHub Enterprise"
    icon: ""

In the branding block I updated each instance of to my domain. Feel free to do the same but it's not required as far as I know.

I updated the serverProxyApiKey with a new string for the same reason as I updated the one in the secrets/encryption-key.json file.

Next we'll update some of the settings in the components section. First up is imageBuilder. Since we have our own registry we need to update the registry block to reflect that. Here's what mine looks like:

imageBuilder:
  name: "image-builder"
  dependsOn:
    - "image-builder-configmap.yaml"
  hostDindData: /var/gitpod/docker
  registryCerts: []
  registry:
    secretName: image-builder-registry-secret
    path: ""
    baseImageName: ""
    workspaceImageName: ""
    # By default, the builtin registry is accessed through the proxy.
    # If bypassProxy is true, the builtin registry is accessed via <registry-name>.<namespace>.svc.cluster.local directly.
    bypassProxy: false
  dindImage: docker:18.06-dind
  dindResources:
    requests:
      cpu: 100m
      memory: 128Mi
  ports:
    rpc:
      expose: true
      containerPort: 8080
    metrics:
      expose: false
      containerPort: 9500

Under workspace make sure to set the secretName of pullSecret to image-builder-registry-secret:

  pullSecret:
    secretName: image-builder-registry-secret

Next, under wsSync you can set up the remoteStorage details, though it may be somewhat pointless due to a bug in one of the templates. I'll show you how to get MinIO working after we've deployed the Helm chart. I filled out the information anyway so that once the bug is resolved the settings are already in place.

Scroll down to the bottom of the page, you should see sections for docker-registry, minio and mysql. Edit them or replace them so it looks like this:

docker-registry:
  enabled: false

minio:
  enabled: false

mysql:
  enabled: false

Now save your values.yaml file. Next we need to create a secret for your Docker registry.

kubectl create secret docker-registry image-builder-registry-secret --docker-server=registry.mydomain --docker-username=$USERNAME --docker-password=$PASSWORD -n gitpod

Make sure to put the URL of your registry for --docker-server and replace $USERNAME and $PASSWORD with your username and password. Once that is done you should see it on the Registry Credentials tab of the Secrets page within Rancher.

Rancher - Registry Credentials

I'm not sure if this next step is necessary, but I found that if I didn't do it, I had issues. So log into your MySQL server and run these queries:

CREATE USER IF NOT EXISTS "gitpod"@"%" IDENTIFIED BY "$PASSWORD";
GRANT ALL ON `gitpod%`.* TO "gitpod"@"%";

CREATE DATABASE IF NOT EXISTS `gitpod` CHARSET utf8mb4;

CREATE DATABASE IF NOT EXISTS `gitpod-sessions` CHARSET utf8mb4;
USE `gitpod-sessions`;

CREATE TABLE IF NOT EXISTS sessions (
   `session_id` varchar(128) COLLATE utf8mb4_bin NOT NULL,
   `expires` int(11) unsigned NOT NULL,
   `data` text COLLATE utf8mb4_bin,
   PRIMARY KEY (`session_id`)
);

This creates a MySQL user, 'gitpod' (don't forget to update $PASSWORD in the query with your own password), the gitpod-sessions database with a sessions table inside of it and the gitpod database.

Next we need to create 2 repositories (workspace-images and base-images) within our Docker registry. The only way I could figure out how to do this was to push an image to the registry. I just used something small; since I plan on deleting it later I suppose it doesn't matter. I did this using these commands:

docker push
docker push
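
As an example, something like this would do it (registry.mydomain is a placeholder for your registry's hostname, and alpine is just a conveniently small image):

```shell
docker pull alpine:latest
# Tagging and pushing once per repository name creates it on the registry
docker tag alpine:latest registry.mydomain/workspace-images:init
docker push registry.mydomain/workspace-images:init
docker tag alpine:latest registry.mydomain/base-images:init
docker push registry.mydomain/base-images:init
```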

Now you should be all set to deploy! First let's add the Gitpod Helm charts repository:

helm repo add gitpod
helm dep update

Next let's install Gitpod!

helm upgrade --install gitpod gitpod/gitpod --timeout 60m --values values.yaml -n gitpod

You should see something like this:

Release "gitpod" does not exist. Installing it now.
NAME: gitpod
LAST DEPLOYED: Thu Dec  3 10:43:45 2020
STATUS: deployed

You can watch each of the workloads come up in Rancher if you'd like. Hopefully everything is green!

Rancher - Gitpod Workloads

Now we've got to fix a few things due to bugs with Gitpod. First, if you have a multi-worker-node cluster we need to fix ws-sync. You can do this however you'd like, but I find doing it from within Rancher the easiest. In the row for ws-sync click on the little blue button with 3 dots and click on 'View/Edit YAML'. Around line 350 or so we need to change the dnsPolicy and add hostNetwork. Adjust it so it reads as:

      dnsPolicy: ClusterFirstWithHostNet
      hostNetwork: true

Save it and this will automatically trigger the workload to redeploy. This will help prevent getting the following when trying to load a workspace:

 cannot initialize workspace: cannot connect to ws-sync: cannot connect to ws-sync: cannot connect to workspace sync; last backup failed: cannot connect to ws-sync: cannot connect to workspace sync.

Next we need to fix the MinIO settings in the server workload. Similar to how we edited the YAML for ws-sync we need to do the same for server. Click the little blue button with 3 dots and click on 'View/Edit YAML'. Locate the following which should just have defaults in the value lines:

        - name: MINIO_END_POINT
          value: minio.minio.svc.cluster.local
        - name: MINIO_PORT
          value: "9000"
        - name: MINIO_ACCESS_KEY
          value: accesskey
        - name: MINIO_SECRET_KEY
          value: secretkey

You may only need to update the values for MINIO_ACCESS_KEY and MINIO_SECRET_KEY. I believe I needed to update the value for MINIO_END_POINT as well as it seemed to have the port tacked onto the end which should be removed. Once everything looks good, hit Save and the server workload will redeploy.

At this point you should be all set. Visit which should redirect you to You can log in from there. Once you do, it's just a matter of creating new workspaces. This can be done by constructing a URL like If all went well you should see a code editor in your web browser with your Git repository contents!

Other Notes

  • At the time of writing (December 3, 2020) there still appears to be an issue with uploading extensions. I have a thread on the Gitpod community forums about this. Uploading extensions has actually never worked for me in all the time I've been using Gitpod, which has been since June of this year.
  • There appears to be an issue with installing extensions from search results. I just noticed this today after someone else posted about it in the Gitpod community forums.
  • I have my Kubernetes cluster sitting behind Traefik which provides Gitpod with SSL certs.


MacBook Pro M1

As someone who is addicted to playing with new technology it's no surprise I picked up a new MacBook Pro M1 last Friday. I wanted to give it some time before I wrote about it so I figured ~1 week was enough time to form a decent opinion.

The magic started as soon as I took it out of the box and lifted the lid. It automatically booted up! That was really cool and I wish I could understand how it did that.

I had been pondering getting the new laptop with the new M1 chip, 8GBs of RAM and a 512GB SSD for a few days. My current MacBook Pro has 16GBs of RAM and struggles to keep up with me from time to time, and the smaller amount of RAM was my biggest concern in getting the new MacBook Pro. In the end, I figured I had some time to return the new laptop should I have any issues and re-order one with 16GBs of RAM. However, after one week of pushing it with all my work, it hasn't skipped a beat. It has handled everything I've thrown at it with no problems at all. I have Safari running with ~25 tabs, Hyper.js, Screen Sharing, Mail, Things, multiple VS Code workspaces, Messages, Discord, Mastonaut, 1Password, Terminal, and Transmit open. I've been compiling things in Homebrew and running PHP unit tests, and it's been perfectly fine. No slowdowns, no spinning rainbow wheel. The only time I ran into an issue was when I managed to get some errant PHP processes; there were about 10 of them running, using a ton of CPU and causing slowdowns for me. This only happened once though.

The new SoC design seems to really work well and meets and probably exceeds my expectations. It definitely allows things to run and access resources very quickly!

Another concern of mine was the keyboard. The reason I am still using a MacBook Pro from early 2015 is that I love the keyboard. I couldn't stand the recent redesigns that utilized the butterfly mechanism; they were absolutely horrific. However the new redesign, which appears in the iPad Pro keyboard and the newer laptops, is MUCH better. I ended up going to Best Buy around 2 weeks ago and tried out the keyboard on one of the newer Intel MacBook Pros and was much more impressed. As much as I am not a huge fan of Best Buy, they're still open, which gave me the opportunity to try out the newer keyboard. Apple stores are still closed here - you can only make appointments for picking up items; you can't go into the stores as before to try things out, unfortunately (DAMN COVID). I've been typing away constantly on my new MacBook Pro's keyboard and I love it. It's smooth and, dare I say, soft! There's travel between pressing the key and it bottoming out. It feels much more comfortable and responsive than the previous design.

Similar to Apple's transition to Intel, this transition also provides a "compatibility layer", Rosetta 2, which allows you to run x86 applications on the new M1 chip (ARM64). So far I haven't had a single issue running any of my applications. There's probably 2-3 apps I currently use that rely on Rosetta 2. I haven't noticed any slowdown within them, and in fact they seem to start up with the same speed as native apps. I believe the only non-native apps I use day to day are Hyper.js, 1Password and VS Code. I do know that 1Password and VS Code are working on building native apps, though I am not sure about Hyper.js. I would think they should be, and I can't imagine it would be difficult to update. I believe it runs on Electron, so they just need to swap in an ARM64 build of that and perhaps make a few other tweaks.

I'm not sure how much more I have to say about this laptop. It's an excellent upgrade from my early 2015 MacBook Pro (which I still have to use for work 😭). Even with 8GBs of RAM, I've had no issues! It's light in weight (so I find myself taking it everywhere), the keyboard is pleasant to use, and all the applications I use have had no issues thus far. The only real trouble I've come across is with certain applications or libraries failing to install via Homebrew. This is generally due to them not being compatible with macOS Big Sur or ARM64 yet, though the Homebrew team has been working hard to ensure compatibility with both. Overall, I love this new laptop!

I don't have Composer installed directly on the server that hosts the many Docker containers for my websites, and I don't run composer update in the Docker image(s) themselves. Instead, I was able to use the Composer Docker image to update packages by running it within the directory of each website. It worked something like this:

cd /home/jimmy/public_html/
docker run --rm --interactive --tty --volume $PWD:/app composer update

This mounts the directory you're presently in into the Composer image and then runs the composer update command. The result is updated packages!

There was one website of mine with a Composer package that required bcmath, which of course I didn't have installed and which wasn't available in the Docker image, so I got around it by doing this instead:

cd /home/jimmy/public_html/
docker run --rm --interactive --tty --volume $PWD:/app composer update --ignore-platform-reqs

Hopefully this helps someone else out!

This past weekend I decided to move some VMs from one Proxmox server to another. Thankfully the process was very easy and could be done in under 10 commands! I used a 1TB external USB drive on my source system to store the backed-up VMs.

Let's get started! Make sure the source server can reach the destination server via SSH. First move into the directory where you want to put your backed-up VMs. For me this was /mnt/storage. Then start taking backups of your VM(s).

vzdump 100

The number 100 in the above example is the ID of the VM. Once the backup has completed we'll want to copy it over to the destination server.

scp vzdump-qemu-100-2020_11_00-00_14_30.vma root@

You can adjust the path to where you're sending it on the destination server. I used another 1TB USB drive on my destination server as well. Once the transfer is complete we need to restore it! We run this on the destination server:

cd /mnt/storage2
qmrestore vzdump-qemu-100-2020_11_00-00_14_30.vma 110

First make sure you go into the directory where you transferred the backup to. Next, the last number in the 2nd command is going to be the new ID of the VM. Since I already had some VMs on my destination server I just picked the next ID.

Of note, depending on the size of the backups it can take some time to back up, transfer between source and destination, and restore. However, I didn't hit any snags and everything went smoothly!
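
To recap, the whole process can be sketched as a little helper that prints the three commands for a given VM. This summary is mine, not part of the original process; DEST_HOST, the /mnt/storage2 path and the TIMESTAMP portion of the filename are placeholders (vzdump puts a real timestamp in the name):

```shell
# Print the backup, transfer, and restore commands for migrating a VM.
# $1 = source VM ID, $2 = new VM ID on the destination server.
migrate_cmds() {
  local vmid=$1 newid=$2
  local dump="vzdump-qemu-${vmid}-TIMESTAMP.vma"
  echo "vzdump ${vmid}"                                  # run on the source
  echo "scp ${dump} root@DEST_HOST:/mnt/storage2/"       # run on the source
  echo "cd /mnt/storage2 && qmrestore ${dump} ${newid}"  # run on the destination
}

migrate_cmds 100 110
```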

About a month ago someone posted a link to their blog article on r/selfhosted about setting up your own self-hosted Kubernetes GitHub Runners. Around this time I had just gotten my GitHub Enterprise instance working with Actions and such, so I was quite excited to see this.

Originally I had attempted to install a self-hosted GitHub runner on one of my servers, but because I was missing node it didn't run properly. I then came across the source which GitHub provides for setting up the runners they deploy to users of However these are full-on Ubuntu environments with everything you could think of installed within. If I recall correctly they were about 80-90GBs in size. Nonetheless I ended up setting up a couple of them as VMs. I quickly realized maintaining and keeping them updated would be another task I really didn't have time for. This method didn't really make sense for me, especially since most of the stuff I was doing with GitHub Actions was being performed in Docker.

Thankfully this kind fellow put together this guide which walks you through setting up GitHub runners in a Kubernetes environment. I'm a complete newb to Kubernetes so this was an excellent opportunity to learn some more! While I followed most of the guide, there were a couple things I did differently. In this article I'll go from nothing to running runners in your Kubernetes cluster!

I opted to go with k3s because it's something I am familiar with setting up and using. It's really easy to install and set up! I first created 3 Ubuntu 20.04 VMs on my Proxmox server. I allocated 2 cores, 40GBs of disk space and 4GBs of RAM to what would be my master node. My other 2 nodes each got 8 cores, 16GBs of RAM and 250GBs of disk space. This may be overkill, but I had the resources to spare on the system. Make sure you disable swap on your systems; I did this by editing the /etc/fstab file and commenting out the line for swap.

Once each VM was set up, I made sure to run apt update && apt upgrade on each one to ensure everything was as up to date as possible. I also like to use dpkg-reconfigure tzdata to set the timezone on each VM to my own.

Next get Docker installed on your master and worker nodes.

sudo apt install apt-transport-https ca-certificates curl software-properties-common
curl -fsSL | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] focal stable"
sudo apt update
sudo apt install docker-ce
sudo systemctl status docker
sudo usermod -aG docker $LINUX_USERNAME

I personally use PostgreSQL with k3s. You can choose whatever option you'd like; there are a few to pick from. Here are a couple quick commands I used to set up my PostgreSQL user and database:

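Something along these lines, run on the PostgreSQL host, will do it (the k3s user/database names and the password here are placeholders - adjust to taste):

```shell
# Create the user and database for k3s as the postgres superuser
sudo -u postgres psql -c "CREATE USER k3s WITH PASSWORD 'changeme';"
sudo -u postgres psql -c "CREATE DATABASE k3s OWNER k3s;"
```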

Next install k3s on your master node:

curl -sfL | sh -s - --datastore-endpoint 'postgres://$USERNAME:$PASSWORD@ip.add.ress:5432/k3s?sslmode=disable' --write-kubeconfig-mode 644 --docker --disable traefik --disable servicelb

This installs k3s in master node mode, uses Docker instead of containerd, and disables Traefik and the service load balancer.

Grab your token which will be needed to set up the worker nodes. You can find the token at /var/lib/rancher/k3s/server/node-token.

On your worker nodes, get k3s installed in agent mode using these commands:

export K3S_URL=https://master-node-ip-address-or-url:6443
export K3S_TOKEN=K1009809sad1cf2317376e1fc892a7f48983939442479i987sa89ds::server:e28d3875948350349283927498324
curl -fsL | K3S_URL=$K3S_URL K3S_TOKEN=$K3S_TOKEN sh -s agent --docker --disable traefik --disable servicelb

This sets your master node URL and token in variables and then uses them to install k3s. You'll notice I specify -s agent, which tells the installer to set k3s up in agent mode. Again I disable Traefik and the service load balancer; given that the GitHub runners don't need to receive incoming traffic, I found having them running unnecessary.

If everything went well, you can run kubectl get nodes from your master node and it should show your 3 nodes:

jimmy@kubemaster-runners-octocat-ninja:~$ kubectl get nodes
NAME                               STATUS   ROLES    AGE     VERSION
kubemaster-runners-octocat-ninja   Ready    master   6d17h   v1.18.9+k3s1
kubenode1-runners-octocat-ninja    Ready    <none>   6d17h   v1.18.9+k3s1
kubenode2-runners-octocat-ninja    Ready    <none>   6d16h   v1.18.9+k3s1

I also like to run this command to ensure that no jobs are scheduled on my master node; it's not required though:

kubectl taint node $masterNode k3s-controlplane=true:NoSchedule

This part is also not required, but I'm a Kubernetes newbie so having a GUI is helpful. First install Helm 3:

curl -O
bash ./get-helm-3 

You can confirm your helm version by using helm version. Next we need to add the Rancher charts repository:

helm repo add rancher-stable

This will install the stable version of the charts, but you can do latest as well. Next create a namespace for Rancher:

kubectl create namespace cattle-system

Next we'll install Rancher using this command:

helm install rancher rancher-stable/rancher \
    --namespace cattle-system \
    --set \
    --set tls=external

You can use kubectl -n cattle-system rollout status deploy/rancher to keep an eye on the deployment. I think it took ~2 minutes, probably less, to install for me. Once that was done, I assigned an external IP to the rancher service:

kubectl patch svc rancher -p '{"spec":{"externalIPs":[""]}}' -n cattle-system

Now you'll obviously want to make sure whatever IP you assign is routed to the system. Next if you have a domain pointing to the system you can use that to access Rancher or you can use the IP. Once you're in Rancher, I recommend creating a new project, I made one called 'GitHub Runners'. Next create a new namespace called docker-in-docker. You can do this from the command line or from within Rancher.

kubectl create ns docker-in-docker

If you did it on the command line, you can use Rancher to move the new namespace into your Project. Here's what my project looks like (don't worry about the other namespace for now):

Rancher - GitHub Runners Project

Next we're going to create a PersistentVolumeClaim. This can be done on the command line or in Rancher. I opted to go the Rancher route since it was easier. From the Projects/Namespaces page, click on the title of the project:

Rancher - GitHub Runners - Click Project Title

From this page click on the 'Import YAML' button:

Rancher - GitHub Runners - Click Import YAML

Paste in the following:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: dind
  namespace: docker-in-docker
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 50Gi

Make sure you've selected the 'Namespace: Import all resources into a specific namespace' radio button, and that your 'docker-in-docker' namespace is selected from the dropdown menu.

Rancher - Github Runners - Import YAML

You can adjust the storage size to whatever you feel comfortable with. As given though it will allow 50Gi of space for your Docker in Docker pod. You can always enter into the container to clear out unused Docker images and such.
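
For example, once the Docker in Docker deployment (created below) is running, clearing things out from your workstation might look like this (a suggestion of mine, assuming kubectl access to the cluster):

```shell
# Exec into the dind pod and remove unused images, containers and caches
kubectl -n docker-in-docker exec deploy/dind -- docker system prune -af
```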

Hit the Import button! On the 'Volumes' tab you should now see your volume!

Rancher - GitHub Runners - Volumes

Next we'll create a deployment for Docker in Docker. Again, I used the 'Import YAML' button for this. Make sure you have the 'Namespace: Import all resources into a specific namespace' radio button checked, and that your 'docker-in-docker' namespace is selected from the dropdown menu.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: dind
  namespace: docker-in-docker
spec:
  replicas: 1
  selector:
    matchLabels:
      workload: deployment-docker-in-docker-dind
  template:
    metadata:
      labels:
        workload: deployment-docker-in-docker-dind
    spec:
      containers:
      - command:
        - dockerd
        - --host=unix:///var/run/docker.sock
        - --host=tcp://
        env:
        - name: DOCKER_TLS_CERTDIR
          value: ""
        image: docker:19.03.12-dind
        imagePullPolicy: IfNotPresent
        name: dind
        resources: {}
        securityContext:
          privileged: true
          readOnlyRootFilesystem: false
        stdin: true
        tty: true
        volumeMounts:
        - mountPath: /var/lib/docker
          name: dind-storage
      volumes:
      - name: dind-storage
        persistentVolumeClaim:
          claimName: dind
In a nutshell this will set up a pod with a container that runs the Docker in Docker image. It tells the dockerd daemon inside the container where to put the socket file and to listen on TCP port 2376. Also, by specifying DOCKER_TLS_CERTDIR as an empty environment variable we tell it not to use TLS. Like the author of the original blog article, I have not specified any resources. As this server pretty much only handles my GitHub Runners and one other small Kubernetes cluster I didn't feel the need to constrain my pods. You're more than welcome to set up resources, but it's not something I cover here. At the bottom of the above YAML you'll notice I also specify the persistent volume claim I previously made, which allows this deployment to utilize that volume. Hit Import and you should see your deployment show up in the Rancher interface!

Rancher - GitHub Runners - DIND Deployments

Next I built a Docker image containing the GitHub Runner application itself. You can use the original blog author's Docker image, or you can build one yourself and push it to your own private registry or Docker Hub. My Dockerfile is as follows:

FROM debian:buster-slim

RUN apt-get update \
    && apt-get install -y \
        curl \
        sudo \
        git \
        jq \
        iputils-ping \
    && apt-get clean \
    && rm -rf /var/lib/apt/lists/* \
    && useradd -m github \
    && usermod -aG sudo github \
    && echo "%sudo ALL=(ALL) NOPASSWD:ALL" >> /etc/sudoers \
    && curl https://download.docker.com/linux/static/stable/x86_64/docker-19.03.9.tgz --output docker-19.03.9.tgz \
    && tar xvfz docker-19.03.9.tgz \
    && cp docker/* /usr/bin/

USER github
WORKDIR /home/github

RUN GITHUB_RUNNER_VERSION=$(curl --silent "https://api.github.com/repos/actions/runner/releases/latest" | jq -r '.tag_name[1:]') \
    && curl -Ls https://github.com/actions/runner/releases/download/v${GITHUB_RUNNER_VERSION}/actions-runner-linux-x64-${GITHUB_RUNNER_VERSION}.tar.gz | tar xz \
    && sudo ./bin/installdependencies.sh

# entrypoint.sh is the runner registration script from the blog post this is based on
COPY --chown=github:github entrypoint.sh ./entrypoint.sh
RUN sudo chmod u+x ./entrypoint.sh

ENTRYPOINT ["/home/github/entrypoint.sh"]

A couple of things to note here. I also install Docker, since we'll be using this to build and publish our own Docker images via GitHub Actions. Also note that this should automatically fetch the latest version of the GitHub Runner and use it; I believe the runner daemon itself checks for updates every few days. I had to modify my entrypoint script slightly from the default since I am using GitHub Enterprise. Once my image was built, I pushed it to my private registry server.
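For reference, the `jq -r '.tag_name[1:]'` expression in the Dockerfile just slices the leading "v" off the release tag returned by the GitHub API. In plain shell, the same transformation looks like this (the tag value here is an illustrative example, not necessarily the current release):

```shell
# Example release tag as returned by the GitHub releases API (illustrative value).
TAG='v2.283.3'
# Strip the leading "v", mirroring jq's '.tag_name[1:]' slice.
GITHUB_RUNNER_VERSION=${TAG#v}
echo "$GITHUB_RUNNER_VERSION"
```

The resulting version string is what gets plugged into the runner tarball's download URL.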

Next we'll create a new namespace for our runners. This can be done on the command line via:

kubectl create ns github-actions

Again, I recommend putting this new namespace in your GitHub Runners project in Rancher. Organization is awesome! Once you've done that we'll need to create a new deployment for the runner(s)! I again utilized Rancher and the wonderful 'Import YAML' button to do this. This time however, make sure under the 'Namespace' dropdown menu that you select the 'github-actions' option. Make sure you set the right Docker image as well (image: repository/github-actions-runner:latest is just a place-holder below)!

apiVersion: apps/v1
kind: Deployment
metadata:
  name: github-runner
  namespace: github-actions
  labels:
    app: github-runner
spec:
  replicas: 1
  selector:
    matchLabels:
      app: github-runner
  template:
    metadata:
      labels:
        app: github-runner
    spec:
      containers:
      - name: github-runner
        image: repository/github-actions-runner:latest
        env:
        - name: DOCKER_HOST
          value: tcp://dind.docker-in-docker:2376
        - name: GITHUB_OWNER
          value: $GITHUB_USERNAME
        - name: GITHUB_REPOSITORY
          value: $GITHUB_REPOSITORY_NAME
        - name: GITHUB_PAT
          valueFrom:
            secretKeyRef:
              name: github-actions-token
              key: pat

Replace $GITHUB_USERNAME and $GITHUB_REPOSITORY_NAME with your information.
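If you'd rather not edit the YAML by hand, a quick sed substitution works too. This is just a sketch with example values ('jimmy' and 'my-project' are placeholders); in practice you'd feed the whole manifest file through sed rather than the inline snippet shown here:

```shell
# Substitute the two placeholders with example values before importing the manifest.
FILLED=$(printf 'value: $GITHUB_USERNAME\nvalue: $GITHUB_REPOSITORY_NAME\n' \
  | sed -e 's|\$GITHUB_USERNAME|jimmy|' \
        -e 's|\$GITHUB_REPOSITORY_NAME|my-project|')
echo "$FILLED"
```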

Create a Personal Access Token for yourself within GitHub. This option can be found at Settings > Developer Settings > Personal Access Tokens. I just checked off 'repo' (which also selects its sub-options), then clicked Generate Token.

GitHub - Personal Access Token

You'll get a string of characters which is your token. Copy this, and we'll use it to create a secret within Kubernetes. You can use the Rancher UI to do this, with our favorite 'Import YAML' button! Make sure the 'github-actions' namespace is selected!

apiVersion: v1
kind: Secret
metadata:
  name: github-actions-token
  namespace: github-actions
type: Opaque
data:
  pat: <your-base64-encoded-token>
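Keep in mind that Kubernetes expects Secret values under the data field to be base64-encoded. You can encode your token like this (the token shown is a made-up example, never a real one):

```shell
# Made-up token for illustration; never commit or paste a real PAT.
PAT='ghp_exampletoken123'
# Secret data values must be base64-encoded with no trailing newline,
# hence printf '%s' rather than echo.
ENCODED=$(printf '%s' "$PAT" | base64)
echo "$ENCODED"
```

The output string is what goes into the Secret manifest under the pat key.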

Once you're done your new deployment should show up in the 'github-actions' namespace area!

Rancher - GitHub Runners - Project

The runner should also automatically show up under your repositories settings > action page!

GitHub > Settings > Actions

I've set up 4-5 runners for the time being, but I know I will need a lot more for my other projects! One thing I do wish is that runners weren't repository specific, or that they could just be deployed whenever an Action called for them. It seems kind of silly to need at least one dedicated runner per repository; you'd think a runner could handle many repositories. For the time being though, this is an excellent solution for self-hosters who use GitHub Actions!


Upgrading My 2009 MacPro 5,1 to macOS Catalina

2009 MacPro with macOS Catalina

The other day I finally had a chance to look back into updating my 2009 MacPro to macOS Catalina. When I had done some research previously, it appeared that it wouldn't be possible. To my excitement, it seems people have since figured out how to get it working!

I would highly recommend this guide if you're looking to get your 2009 MacPro running macOS Catalina.

Of note, I had an older BootROM firmware, so I did have to update my MacPro's firmware before I could proceed. Thankfully it was super easy: I just had to snag the macOS Mojave installer, which allowed me to update my system. You simply download it and open it, and it should advise you that a firmware update is needed.

Once I had my firmware updated, I went back to the OpenCore on the Mac Pro guide. One thing that frustrated me a little was that the guide calls for two disks, which meant I would be starting fresh, something I really didn't want to do. It also meant I would be moving to a spinning disk, as I didn't have any spare SSDs. So in Part I, Step 4 of the guide, instead of selecting the blank drive, I selected my existing SSD. I figured it would give me an error if that wouldn't work. Thankfully, no errors or warnings popped up saying macOS Catalina couldn't be installed in the selected location. About 15-20 minutes later the installer finished and rebooted off my SSD, the same drive where macOS Mojave and all my files lived, now upgraded to macOS Catalina!

Once that was done I went back to Steps 2 and 3 and performed those actions on my SSD. This effectively installed OpenCore to my SSD and set my SSD as the main boot device. I stupidly followed the 'Toggle the VMM flag' instructions in Step 5, which I shouldn't have done until after I updated to 10.15.7 (it looks like the base install of macOS Catalina started me off at 10.15.6), so I did have to go back and untoggle the VMM flag.

Under Part II of the guide, I did not do 'Making External Drives Internal', or 'Enabling the Graphical Boot Picker' steps. I figured they weren't that important (for now).

One other side note is that my CPU shows up as an Intel Core i3 in the About This Mac window. This may be due to the 'Hybridization' step in the guide, but I am not 100% positive. I believe at one point my CPU did show up properly in macOS Catalina. It's not a big issue, just cosmetic.

I'm quite excited to have the latest version of macOS running on my 2009 MacPro; it's breathed new life into the system. I'm planning a CPU upgrade (Intel Xeon X5690), and I'd also like to get 64GB of RAM in it. Perhaps at that point it could even become my daily driver!

I'll be back with another post once I get some new hardware!


I wanted to quickly share this with everyone. I found it worked better than using Disk Utility to restore a .dmg disk image to a USB thumb drive.

sudo /usr/sbin/asr --noverify --erase --source source --target target

Or for example:

sudo /usr/sbin/asr --noverify --erase --source /Users/Shared/yosemite.dmg --target /Volumes/Untitled

You might also find this useful to ensure it's a bootable drive as well:

sudo bless --mount /Volumes/TheVolume --setBoot