For my new Gitpod instance I used 4 VMs. The first VM is going to be the master/controller node and the other three will be my worker nodes.

I created my controller node with 2 CPU cores, 4GB of RAM and 80GB of disk space. I used Ubuntu Server 20.04.3 for the operating system. During the install process I set up my partitions as follows:

/boot - 1G
/ - remaining space

Once the install of Ubuntu completed I started setting the system up as follows:

Setting Up the Kubernetes Controller

  1. For whatever reason Ubuntu doesn't allow '.' in the hostname during the installation process, so the first step is to fix the hostname in /etc/hostname. I set mine to kubemaster.mydomain.com. Then run hostname -F /etc/hostname.
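
    In concrete terms, using my hostname (substitute your own):

    # write the desired FQDN to /etc/hostname, then apply it without a reboot
    echo "kubemaster.mydomain.com" > /etc/hostname
    hostname -F /etc/hostname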

  2. This step is optional. My Kubernetes cluster controller and worker nodes all have private IP addresses. I've allocated the IP address block 192.168.103.1 - 192.168.103.253 to the cluster. This is not required if your systems have public IPs and resolvable hostnames. I did this so each of the systems in my cluster knows how to properly communicate with the others. My hosts file looked like this:

    127.0.1.1 kubemaster.mydomain.com kubemaster
    192.168.103.1 kubemaster.mydomain.com
    192.168.103.2 kubenode1.mydomain.com
    192.168.103.3 kubenode2.mydomain.com
    192.168.103.4 kubenode3.mydomain.com
  3. Next set the system's timezone (in my case it's America/Chicago) - dpkg-reconfigure tzdata.
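
    If you'd rather skip the interactive prompt, the same thing can be done non-interactively with timedatectl (available on Ubuntu 20.04); the timezone name here is just my own:

    # list valid timezone names if you're unsure of the exact spelling
    timedatectl list-timezones | grep -i chicago
    # set the timezone directly
    timedatectl set-timezone America/Chicago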

  4. Run apt -y update && apt -y upgrade to update the software on the system.

  5. Now we'll install MariaDB 10.5. This will act as the external datastore for k3s (instead of its default embedded SQLite database).

    apt -y install curl software-properties-common dirmngr
    apt-key adv --fetch-keys 'https://mariadb.org/mariadb_release_signing_key.asc'
    add-apt-repository 'deb [arch=amd64,arm64,ppc64el,s390x] https://mirror.rackspace.com/mariadb/repo/10.5/ubuntu focal main'
    apt -y install mariadb-server mariadb-client
    systemctl status mariadb
    systemctl enable mariadb
    mysql_secure_installation
  6. Create the database for k3s. You'll want to log in using the root MySQL password you set when you ran mysql_secure_installation in the previous step.

    mysql -u root -p
    CREATE DATABASE k3s CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;
    CREATE USER 'k3s'@'localhost' IDENTIFIED BY '$PASSWORD';
    GRANT ALL PRIVILEGES ON k3s.* TO 'k3s'@'localhost';
    FLUSH PRIVILEGES;
    quit

    Make sure to actually replace $PASSWORD with a secure password.
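
    As an optional sanity check before installing k3s, you can confirm the new user can reach the database (run this on the controller; it will prompt for the k3s user's password):

    mysql -u k3s -p k3s -e 'SELECT 1;'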

  7. Now install k3s:

    curl -sfL https://get.k3s.io | sh -s - --datastore-endpoint="mysql://k3s:$PASSWORD@tcp(localhost:3306)/k3s" --write-kubeconfig-mode 644 --disable traefik --disable servicelb

    Make sure you swap out $PASSWORD with the password you set for the k3s MySQL user in the previous step.

  8. Edit /etc/fstab and comment out the line for swap.

    # /swap.img none swap sw 0 0
  9. Reboot the server (reboot). Once rebooted, log back into the system and run free -mh to make sure swap shows as 0B, like so:

    Swap:            0B          0B          0B

    If it does, then remove the /swap.img file.
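
    In concrete terms, assuming the default Ubuntu swap file at /swap.img:

    swapoff -a        # make sure swap is fully disabled
    rm /swap.img      # delete the now-unused swap file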

  10. Next we need to grab the k3s node-token and save it somewhere safe. It'll be a long string of characters and numbers.

    cat /var/lib/rancher/k3s/server/node-token
  11. I also have another system that I connect to my cluster from. If you do too, grab the k3s configuration file from the controller:

    cat /etc/rancher/k3s/k3s.yaml

    Take the contents there and place it into the /home/user/.kube/config file if you're on Linux or /Users/user/.kube/config file if on macOS. You may need to create the .kube directory first. You'll need to edit the file to point to the IP address of your Kubernetes controller. You should find a line inside the config file that looks like:

    server: https://127.0.0.1:6443

    You'll want to change 127.0.0.1 to the IP or hostname of your controller. Save and exit the file.
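
    Roughly, from a Linux workstation that can SSH to the controller (192.168.103.1 here - adjust the address and user for your setup; on macOS use sed -i '' instead of sed -i):

    mkdir -p ~/.kube
    # copy the k3s kubeconfig from the controller
    scp root@192.168.103.1:/etc/rancher/k3s/k3s.yaml ~/.kube/config
    # point kubectl at the controller instead of the loopback address
    sed -i 's/127.0.0.1/192.168.103.1/' ~/.kube/config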

  12. Run kubectl get nodes. This should return something like:

     NAME                        STATUS   ROLES                  AGE     VERSION
     kubemaster.mydomain.com    Ready    control-plane,master   4d23h   v1.21.7+k3s1

Setting Up the Kubernetes Worker Nodes

The next three VMs are going to be our worker nodes where pods can be spun up and such. I created each of these with 8 CPU cores, 8GB of RAM and 250GB of disk space. I've used Ubuntu Server 20.04.3 on these as well. During the install process I set up my partitions as follows:

/boot - 1G
/ - remaining space

Once the operating system has finished installing we can go through the following steps to finish setting up the system.

  1. First I fix the hostname in /etc/hostname. I set mine to kubenodeX.mydomain.com where 'X' is a number. So for example, on my first worker node, the hostname is kubenode1.mydomain.com, the second worker is kubenode2.mydomain.com and so on. Once that file has been edited and saved I run hostname -F /etc/hostname to set the hostname.

  2. This step is optional. My Kubernetes cluster controller and worker nodes all have private IP addresses. I've allocated the IP address block 192.168.103.1 - 192.168.103.253 to the cluster. This is not required if your systems have public IPs and resolvable hostnames. I did this so each of the systems in my cluster knows how to properly communicate with the others. Note that the 127.0.1.1 line should reference the node's own hostname. On kubenode1 my hosts file looked like this:

    127.0.1.1 kubenode1.mydomain.com kubenode1
    192.168.103.1 kubemaster.mydomain.com
    192.168.103.2 kubenode1.mydomain.com
    192.168.103.3 kubenode2.mydomain.com
    192.168.103.4 kubenode3.mydomain.com
  3. Next set the system's timezone (in my case it's America/Chicago) - dpkg-reconfigure tzdata.

  4. Run apt -y update && apt -y upgrade to update all the packages on the system.

  5. Next we're going to install k3s in agent mode:

    export K3S_URL=https://kubemaster.mydomain.com:6443
    export K3S_TOKEN=K104887s5p9182394ydc31c4988f6761844fe71e54ee93f6f64a76dsa87df800c86::server:39aef067sa87d8as9d6d7fb981db4
    curl -fsL https://get.k3s.io | K3S_URL=$K3S_URL K3S_TOKEN=$K3S_TOKEN sh -s agent

    Make sure to set K3S_URL properly; you can use the hostname or IP address of your controller node here. Also make sure you set K3S_TOKEN to the node-token you grabbed from the controller node earlier.
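
    Once the agent comes up, you can hop back to the controller (or any machine with kubectl configured) and confirm the new worker registered:

    kubectl get nodes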

  6. Edit /etc/fstab and comment out the line for swap.

    # /swap.img none swap sw 0 0
  7. Reboot the server (reboot). Once rebooted, log back into the system and run free -mh to make sure swap shows as 0B, like so:

    Swap:    0B    0B    0B

    If it does, then remove the /swap.img file.

You'll repeat these 7 steps for each of your Kubernetes worker nodes.

Installing Rancher

This section is 100% optional. I like to have Rancher because it gives me a visual overview of my Kubernetes cluster and also lets me manage the cluster from its UI. I should note that I have Traefik running, which handles fetching and managing the TLS certificate for my Rancher instance.

  1. From the system which has access to your Kubernetes cluster via kubectl, create a new namespace in your cluster. This is a requirement of Rancher.

    kubectl create namespace cattle-system
  2. Next we'll add the Helm chart repository for the latest version(s) of Rancher:

    helm repo add rancher-latest https://releases.rancher.com/server-charts/latest
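
    It's also worth refreshing your local chart index so Helm picks up the newest chart versions:

    helm repo update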
  3. As noted before, I already have Traefik set up and handling TLS certificates, so this is the command I use to deploy Rancher to my Kubernetes cluster:

    helm install rancher rancher-latest/rancher \
        --namespace cattle-system \
        --set hostname=rancher.mydomain.com \
        --set tls=external

    This will deploy the latest version of Rancher to your Kubernetes cluster. You can run the following command to check the progress of the deployment:

    kubectl -n cattle-system rollout status deploy/rancher
  4. Once the deployment is complete you will likely need to assign an external IP to the rancher service. Make sure you assign the IP to the service in the namespace where Rancher is actually deployed (cattle-system here). You can use a command like:

    kubectl patch svc rancher -p '{"spec":{"externalIPs":["192.168.103.10"]}}' -n cattle-system
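
    You can confirm the IP took effect with:

    kubectl get svc rancher -n cattle-system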
  5. Now you should be able to visit https://rancher.mydomain.com (or whatever hostname you set above).

Installing Gitpod

Yay, we made it this far! Now we can get to installing Gitpod! For the following you need to be on a system that has access to your Kubernetes cluster (via kubectl), and that has Docker installed.

  1. First we're going to add labels to our Kubernetes worker nodes. These are required for Gitpod. The following command may be different for you, but you'll want to list the hostnames of all of your worker nodes in the for i in bit. For me this looked like:

    for i in kubenode1.mydomain.com kubenode2.mydomain.com kubenode3.mydomain.com ; do kubectl label node $i gitpod.io/workload_meta=true gitpod.io/workload_ide=true gitpod.io/workload_workspace_services=true gitpod.io/workload_workspace_regular=true gitpod.io/workload_workspace_headless=true ; done

    This will add all the necessary labels to your worker nodes in a single command.
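
    You can double-check that the labels landed with something like:

    kubectl get nodes --show-labels | grep gitpod.io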

  2. Next we need to grab the pre-built installer. First visit werft and pick a build you want to use. For example gitpod-build-main.2071 - main.2071. We'll use the following commands to extract the installer from that build's Docker image:

    docker create -ti --name installer eu.gcr.io/gitpod-core-dev/build/installer:main.2071
    docker cp installer:/app/installer ./installer
    docker rm -f installer

    Note that the build tag (main.2071) is on the end of the image name in that first command.

  3. The next command will generate a base configuration file:

    ./installer init > gitpod.config.yaml

    From here you can open your gitpod.config.yaml file to customize it. At minimum, make sure to set the domain: option to your domain and update the workspace.runtime.containerdRuntimeDir and workspace.runtime.containerdSocket values (there's a combined sketch of these settings at the end of this step).

    containerdRuntimeDir: /run/k3s/containerd/io.containerd.runtime.v2.task/k8s.io
    containerdSocket: /run/k3s/containerd/containerd.sock

    I also set up the authProviders: section since I have a GitHub Enterprise instance. It should be noted that authProviders may be moved to a secret in the future.

    You can also set up external cluster dependencies such as an external Docker registry, database, and object storage. In order to use an external dependency you'll need to set its inCluster setting to false. For example, if using an external database, the database section would look like:

    database:
      inCluster: false
      external:
        certificate:
          kind: secret
          name: database-token

    Your database-token secret needs to have the following key/value pairs:

    • encryptionKeys - database encryption key
    • host - IP or URL of the database
    • password - database password
    • port - database port, usually 3306
    • username - database username
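
    For reference, here's a trimmed sketch of how those pieces fit together in gitpod.config.yaml (most of the generated keys are omitted, gitpod.mydomain.com is just a placeholder, and the database block only applies if you're using an external database):

    domain: gitpod.mydomain.com
    workspace:
      runtime:
        containerdRuntimeDir: /run/k3s/containerd/io.containerd.runtime.v2.task/k8s.io
        containerdSocket: /run/k3s/containerd/containerd.sock
    database:
      inCluster: false
      external:
        certificate:
          kind: secret
          name: database-token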

  4. Next we'll need to install cert-manager into our Kubernetes cluster. This is required even if you're providing your own TLS certificate for Gitpod. cert-manager will be used to generate certificates for internal Gitpod services.

    helm repo add jetstack https://charts.jetstack.io
    helm repo update
    helm upgrade \
        --atomic \
        --cleanup-on-fail \
        --create-namespace \
        --install \
        --namespace='cert-manager' \
        --reset-values \
        --set installCRDs=true \
        --set 'extraArgs={--dns01-recursive-nameservers-only=true,--dns01-recursive-nameservers=8.8.8.8:53\,1.1.1.1:53}' \
        --wait \
        cert-manager \
        jetstack/cert-manager
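
    Before moving on, you can make sure the cert-manager pods came up cleanly:

    kubectl get pods -n cert-manager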
  5. Next I create a new namespace in Kubernetes for Gitpod. You don't have to do this; Gitpod will use the 'default' namespace by default. I'm sort of an organizational freak, so I prefer to keep everything Gitpod-related in its own namespace. The following command creates a new namespace, gitpod.

    kubectl create namespace gitpod
  6. Since I have my own TLS certificate for Gitpod, I manually created the https-certificates secret in Kubernetes. If you're familiar with doing this via the command line, go for it (there's a sketch of the kubectl equivalent at the end of this step)! I used Rancher. Of note, you must ensure your TLS certificate covers domain.com, *.domain.com and *.ws.domain.com. If you'd like to create the 'https-certificates' secret via Rancher, you may follow these steps:

    First I bring up https://rancher.mydomain.com, and login as necessary. You should see a screen that has the following:

    Rancher Home

    Click on 'local'. From the sidebar on the left side click on 'Storage' and then 'Secrets'.

    Rancher > Storage > Secrets

    Then click on the blue 'Create' button.

    Rancher > Storage > Secrets > Create

    On the page that comes up with 4-5 boxes, click on the TLS Certificate box.

    Rancher > Storage > Secrets > Create > TLS Certificate

    On the next page ensure the Namespace is set to 'gitpod' and the Name is set to 'https-certificates'.

    Rancher > Storage > Secrets > Create > TLS Certificate

    Fill in the Private Key and Certificate files accordingly. I used a certificate from Let's Encrypt so I pasted in the contents of the fullchain.pem file into the Certificate field. Hit the blue 'Create' button when you're set.
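
    If you'd rather do this from the command line instead of Rancher, the equivalent is roughly the following (assuming the Let's Encrypt fullchain.pem and privkey.pem files are in your current directory):

    kubectl create secret tls https-certificates \
        --cert=fullchain.pem \
        --key=privkey.pem \
        -n gitpod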

  7. Next validate the Gitpod configuration file:

    ./installer validate config --config gitpod.config.yaml

    Hopefully everything looks good here and "valid": true is returned.

  8. Next we need to check that our Kubernetes cluster is set up properly for Gitpod:

    ./installer validate cluster --kubeconfig ~/.kube/config --config gitpod.config.yaml

    You may see an error about the https-certificates secret not being found, but that should be fine to ignore.

  9. Render the Gitpod YAML file. This is what will be used to deploy Gitpod to your Kubernetes cluster.

    ./installer render --config gitpod.config.yaml --namespace gitpod > gitpod.yaml
  10. TIME TO DEPLOY GITPOD!!

    kubectl apply -f gitpod.yaml

    You can watch as things get set up and deployed in Rancher, or on the command line you can run:

    watch -n5 kubectl get all -n gitpod

    or if you just want to watch the pods:

    watch -n5 kubectl get pods -n gitpod

    If everything is all good and happy, you should see all of the pods show as Running.

  11. If you run kubectl get svc -n gitpod you may notice the proxy service doesn't have an External IP. This is normal since we don't have anything running to hand out external IPs at the moment.

    proxy    LoadBalancer    10.43.230.187    <none>    80:32673/TCP,443:32262/TCP,9500:32178/TCP    4d19h

Installing MetalLB

This section is also 100% optional if you already have something in place that assigns external IPs. If you don't and you want something quick and easy, then let's have a look at MetalLB.

Again, from the system which has access to our Kubernetes cluster (via kubectl), we'll set up MetalLB.

  1. Install MetalLB via manifests:

    kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.11.0/manifests/namespace.yaml
    kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.11.0/manifests/metallb.yaml

    This will install MetalLB under the metallb-system namespace.
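
    You can check that the MetalLB controller and speaker pods are running with:

    kubectl get pods -n metallb-system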

  2. We can do this next step either via Rancher or on the command line; I've done it via Rancher myself, but the choice is yours (a kubectl alternative is noted at the end of this step). In Rancher, click on the 'Import YAML' button found in the upper right corner:

    Rancher > Import YAML

    Next I paste in the following as a template:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      namespace: metallb-system
      name: config
    data:
      config: |
        address-pools:
        - name: default
          protocol: layer2
          addresses:
          - 192.168.1.240-192.168.1.250

    I changed the last line to be the IP range that I want MetalLB to hand out. In my case this is 192.168.103.3-192.168.103.253. Make sure to select 'metallb-system' from the 'Default Namespace' dropdown menu. Hit the blue 'Import' button when you're all set.
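
    If you'd rather stay on the command line, you can save the ConfigMap above to a file (say metallb-config.yaml, with your own address range filled in) and apply it directly:

    kubectl apply -f metallb-config.yaml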

  3. I've also set up IP blocks on my worker nodes. I gave 192.168.103.10 - 192.168.103.19 to kubenode1.mydomain.com, 192.168.103.20 - 192.168.103.29 to kubenode2.mydomain.com, and 192.168.103.30 - 192.168.103.39 to kubenode3.mydomain.com. I set that up within each node's /etc/netplan/00-installer-config.yaml file.
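
    For reference, the relevant portion of that netplan file on kubenode1 looks roughly like this (the interface name and /24 prefix are assumptions for illustration; netplan has no range syntax, so each extra address is listed on its own line, and you apply the change with netplan apply):

    network:
      version: 2
      ethernets:
        ens18:                      # your interface name may differ; check with 'ip a'
          addresses:
            - 192.168.103.2/24      # the node's primary address
            - 192.168.103.10/24     # extra addresses from this node's block, one per line
            - 192.168.103.11/24
            # ...and so on through 192.168.103.19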
  4. Back on the command line, if you run kubectl get svc -n gitpod you should see the proxy service now has an external IP address:

    proxy    LoadBalancer    10.43.230.187    192.168.103.11    80:32673/TCP,443:32262/TCP,9500:32178/TCP    4d19h