As some of you may be aware, Gitpod no longer supports self-hosting. To be clear, this means that Gitpod no longer sells licenses for self-hosting Gitpod and no longer officially supports anyone who self-hosts it. They do, however, provide a community-powered Discord channel where Gitpodders chime in from time to time.
In my last post about setting up Gitpod I talked about using the new installer to install Gitpod on a k3s Kubernetes cluster. This post will be very similar; however, it will focus on setting up Gitpod itself as opposed to the entire cluster and its other components and resources. I recommend referring back to that post if you want a deeper look at how I configured my cluster.
For reference, I have a single Dell R620 with 128 GB of RAM and about 5 TB of disk space in RAID 6. Since this is just an at-home learning cluster, that is sufficient for me. I created 4 VMs: 1 master node with 4 CPU cores, 8 GB of RAM, and 120 GB of disk space, and 3 worker nodes, each with 8 CPU cores, 16 GB of RAM, and 200 GB of disk space. Each VM runs Ubuntu 22.04 Server with k3s. I also use MetalLB.
- To start off my Gitpod installation I first set my master node to be non-schedulable. This allows my master node to act purely as a control plane and not take on any other workloads.

  ```shell
  kubectl taint node master-node.domain.com k3s-controlplane=true:NoSchedule
  ```
- Install `cert-manager` next. This is necessary to provision TLS certificates for your instance.

  ```shell
  helm repo add jetstack https://charts.jetstack.io
  helm repo update
  helm upgrade \
    --atomic \
    --cleanup-on-fail \
    --create-namespace \
    --install \
    --namespace='cert-manager' \
    --reset-values \
    --set installCRDs=true \
    --set 'extraArgs={--dns01-recursive-nameservers-only=true,--dns01-recursive-nameservers=8.8.8.8:53\,1.1.1.1:53}' \
    --wait \
    cert-manager \
    jetstack/cert-manager
  ```
- I use a domain that is set up with Cloudflare DNS, so I used the directions here. I set up an `Issuer`, a `Secret` for my Cloudflare token, and a `Certificate`.
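  For illustration, here's a rough sketch of what those three resources can look like using cert-manager's Cloudflare DNS01 solver. The names, namespace, email, and domains below are placeholders of mine, not values from the docs, so adapt them to your environment; Gitpod needs the certificate to cover the base domain plus the `*.` and `*.ws.` wildcards.

  ```yaml
  # Sketch only: names, email, and domains are placeholders.
  apiVersion: v1
  kind: Secret
  metadata:
    name: cloudflare-api-token
    namespace: default
  stringData:
    api-token: "<your Cloudflare API token>"
  ---
  apiVersion: cert-manager.io/v1
  kind: Issuer
  metadata:
    name: gitpod-issuer
    namespace: default
  spec:
    acme:
      server: https://acme-v02.api.letsencrypt.org/directory
      email: you@mydomain.com
      privateKeySecretRef:
        name: gitpod-issuer-account-key
      solvers:
        - dns01:
            cloudflare:
              apiTokenSecretRef:
                name: cloudflare-api-token
                key: api-token
  ---
  apiVersion: cert-manager.io/v1
  kind: Certificate
  metadata:
    name: https-certificates
    namespace: default
  spec:
    secretName: https-certificates
    issuerRef:
      name: gitpod-issuer
      kind: Issuer
    dnsNames:
      - mydomain.com
      - "*.mydomain.com"
      - "*.ws.mydomain.com"
  ```

  Since an `Issuer` is namespace-scoped, the token `Secret` and the `Certificate` have to live in the same namespace as it; a `ClusterIssuer` avoids that constraint.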
- I then added the necessary labels to my worker nodes so that Gitpod could utilize them:

  ```shell
  for i in node1.mydomain.com node2.mydomain.com node3.mydomain.com ; do
    kubectl label node $i \
      gitpod.io/workload_meta=true \
      gitpod.io/workload_ide=true \
      gitpod.io/workload_workspace_services=true \
      gitpod.io/workload_workspace_regular=true \
      gitpod.io/workload_workspace_headless=true
  done
  ```
- Next, visit the Werft site that Gitpod has set up. This shows all the builds that have run for Gitpod and its various other components. In the search box, input `gitpod-build-main`. This should bring up a list of the recent Gitpod builds. Be sure to select the latest one that has a green checkmark; that means the build process was successful, so we should see that same success in deploying our instance.
- If you haven't already, clone the `gitpod-io/gitpod` repository.

  ```shell
  git clone https://github.com/gitpod-io/gitpod.git
  ```
- Navigate into the cloned repository and go into the `install/installer` directory. Once you're in that directory, run the following commands. In the first command, be sure to update the `main.6500` part to reflect whatever build you found on the Werft website.

  ```shell
  docker create -ti --name installer eu.gcr.io/gitpod-core-dev/build/installer:main.6500
  docker cp installer:/app/installer ./installer
  docker rm -f installer
  ```

  This will create a new installer for you using that build of Gitpod.
- Next, create the `gitpod` namespace:

  ```shell
  kubectl create namespace gitpod
  ```
- Create the base `gitpod.config.yaml` file by running:

  ```shell
  ./installer init > gitpod.config.yaml
  ```
- Using your favorite text editor, open the new configuration file and update it to match your setup. At a minimum, you need to set `domain`, `workspace.runtime.containerdRuntimeDir`, and `workspace.runtime.containerdSocket`. Since we're using k3s, we should set those runtime values to:

  ```yaml
  containerdRuntimeDir: /run/k3s/containerd/io.containerd.runtime.v2.task/k8s.io
  containerdSocket: /run/k3s/containerd/containerd.sock
  ```
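  For context, here's roughly how those fields nest in the generated file. The domain is a placeholder of mine, and the real `gitpod.config.yaml` contains many more fields that should be left as `./installer init` produced them:

  ```yaml
  # Minimal sketch of the relevant parts of gitpod.config.yaml (k3s paths).
  domain: gitpod.mydomain.com
  workspace:
    runtime:
      containerdRuntimeDir: /run/k3s/containerd/io.containerd.runtime.v2.task/k8s.io
      containerdSocket: /run/k3s/containerd/containerd.sock
  ```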
  I also set up the `authProviders` as a `Secret` so I could use my GitHub Enterprise instance to authenticate to my Gitpod instance. Here's what the section in `gitpod.config.yaml` looks like:

  ```yaml
  authProviders:
    - kind: secret
      name: github-enterprise
  ```

  and the contents for the `Secret`:

  ```yaml
  id: GitHub Enterprise
  host: github-enterprise.com
  type: GitHub
  oauth:
    clientId:
    clientSecret:
    callBackUrl:
  ```
  You'll need to fill in the details with your own information. This step is optional: if you skip it, you'll be required to set up an SCM integration in the web UI when you first bring up your Gitpod instance after deploying it.
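  If you prefer to manage that `Secret` as a manifest rather than creating it imperatively, a sketch like the following could work. Note that the data key name (`provider`) is an assumption on my part; check the installer documentation or source for the key your build actually expects.

  ```yaml
  # Hypothetical manifest: the provider definition above wrapped in a Secret.
  # The stringData key name ("provider") is an assumption; verify it against
  # the installer build you're using.
  apiVersion: v1
  kind: Secret
  metadata:
    name: github-enterprise
    namespace: gitpod
  stringData:
    provider: |
      id: GitHub Enterprise
      host: github-enterprise.com
      type: GitHub
      oauth:
        clientId: <your client ID>
        clientSecret: <your client secret>
        callBackUrl: <your callback URL>
  ```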
- You can validate your `gitpod.config.yaml` configuration file using:

  ```shell
  ./installer validate config --config gitpod.config.yaml
  ```
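  As a trivial illustration of part of what the validator checks, here's a self-contained snippet that writes an example config to `/tmp` and greps it for the keys this post calls out as required (the real validator does far more than this; the file contents are just the placeholders from earlier):

  ```shell
  # Illustrative only: write an example config, then sanity-check that the
  # keys this walkthrough says are required actually appear in it.
  cat > /tmp/gitpod.config.yaml <<'EOF'
  domain: gitpod.mydomain.com
  workspace:
    runtime:
      containerdRuntimeDir: /run/k3s/containerd/io.containerd.runtime.v2.task/k8s.io
      containerdSocket: /run/k3s/containerd/containerd.sock
  EOF

  missing=0
  for key in domain containerdRuntimeDir containerdSocket; do
    grep -q "^ *$key:" /tmp/gitpod.config.yaml || { echo "missing: $key"; missing=1; }
  done
  [ "$missing" -eq 0 ] && echo "all required keys present"
  ```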
- Once your configuration has been validated, you can check that your cluster is set up properly using the following command:

  ```shell
  ./installer validate cluster --kubeconfig ~/.kube/config --config gitpod.config.yaml
  ```
- If everything from the previous commands checks out, we'll generate the `gitpod.yaml` file, which contains all the necessary resources for Gitpod to run.

  ```shell
  ./installer render --config gitpod.config.yaml --namespace gitpod > gitpod.yaml
  ```
- Run the following command to deploy Gitpod to your cluster:

  ```shell
  kubectl apply -f gitpod.yaml
  ```

  You can run `watch -n5 kubectl get all -n gitpod` to watch the namespace and its resources.
- Once everything has been deployed you should be able to visit your Gitpod instance in your web browser and start using it!
Notes
- As noted above, please join us in the `#self-hosted-discussions` channel on the Gitpod Discord server. I try to keep an eye on the channel and follow up on as many threads as I can.
- If you're experiencing an issue with the MinIO pod not starting up, please leave a comment below. I didn't include my notes about it in this post as I'm not sure whether it affects new installations or just upgrades. I also haven't seen other users having issues with it, but if it's more widespread I'd be happy to update this post with information on how I resolved the problems.
channel on the Gitpod Discord server. I try to keep an eye on the channel and follow up on as many threads as I can. - If you're experiencing an issue with the MinIO pod not starting up please leave a comment below. I didn't include my notes about it in this post as I am not sure if it affects new installations or just upgrades. I also haven't seen other uses having issues with it but if it's more widespread I'd be happy to update this post with information on how I resolved the problems.