This guide will show you how to fetch a Let's Encrypt SSL certificate, run Gitpod as a Docker container using Docker Compose, and configure it to work with Traefik v2.x.
Previously, running Gitpod required either Google Cloud Platform, which I found prohibitively expensive, or your own vanilla Kubernetes setup. Going the vanilla Kubernetes route was fun and a great learning experience, but it meant running another server in my house that used more electricity and generated more heat. Thankfully, Gitpod can now be run as a single Docker container!
The information and directions I found were pretty good and got me started; however, there were some changes in how I deployed it. I figured it would be worth sharing my experience running Gitpod via a single Docker container with you all.
For what it's worth, I deployed this on a Dell R620 with 2x Intel Xeon E5-2640s, 220GB of RAM, a 500GB SSD boot drive, and 5.6TB of RAID5 storage for data. You probably don't need all that just to run Gitpod, but I'm also running about 70 other Docker containers with other services.
First, clone the following Git repository:
git clone https://github.com/wilj/gitpod-docker-k3s.git
This was a great starting point for running Gitpod via Docker, but there were a few files I had to update. The first was create-certs.sh. I use NS1 to manage the DNS for my Gitpod domain. My create-certs.sh file looks like this:
#!/bin/bash
set -euox

EMAIL=$1
WORKDIR=$(pwd)/tmpcerts

mkdir -p $WORKDIR

sudo docker run -it --rm --name certbot \
    -v $WORKDIR/etc:/etc/letsencrypt \
    -v $WORKDIR/var:/var/lib/letsencrypt \
    -v $(pwd)/secrets/nsone.ini:/etc/nsone.ini:ro \
    certbot/dns-nsone certonly \
    -v \
    --agree-tos --no-eff-email \
    --email $EMAIL \
    --dns-nsone \
    --dns-nsone-credentials /etc/nsone.ini \
    --dns-nsone-propagation-seconds 30 \
    -d mydomain.com \
    -d \*.mydomain.com \
    -d \*.ws.mydomain.com

sudo find $WORKDIR/etc/live -name "*.pem" -exec sudo cp -v {} $(pwd)/certs \;
sudo chown -Rv $USER:$USER $(pwd)/certs
chmod -Rv 700 $(pwd)/certs
sudo rm -rfv $WORKDIR
openssl dhparam -out $(pwd)/certs/dhparams.pem 2048
You'll see I adjusted the top of the script to get rid of the DOMAIN variable, replacing each place it was used ($DOMAIN) with my actual domain. I also had to escape the asterisks used in the docker run command, because the script wouldn't execute properly with bare asterisks in place.
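To see why the backslash matters: unescaped, the shell may glob-expand the pattern against files in the current directory before docker ever sees it; escaped, the literal string reaches certbot. A quick illustration:

```shell
# With the backslash, globbing is suppressed and the literal string is
# passed through to the command - here it simply prints *.mydomain.com
echo \*.mydomain.com
```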
Next, you'll likely need to create a secrets file to store the API key for the DNS provider you're using, in my case NS1. I created this in the secrets/ directory as nsone.ini. It looks something like this (the key here is made up):
# NS1 API credentials used by Certbot
dns_nsone_api_key = dhsjkas8d7sd7f7s099n
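Since this file holds a live API key, it's worth locking down its permissions before running the script. A minimal sketch, assuming you're in the repository root:

```shell
# Create the secrets directory and restrict the credentials file so only
# your user can read it; certbot warns about overly-permissive credentials.
mkdir -p secrets
touch secrets/nsone.ini        # paste your NS1 API key in here
chmod 600 secrets/nsone.ini
```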
Now we can generate our certificates by running the script. It will create (and remove once done) the necessary DNS records and fetch an SSL certificate from Let's Encrypt.
./create-certs.sh <email>
Once this is done, you'll find the SSL certificate files in your ./certs directory. There should be five (5) files:
cert.pem
chain.pem
dhparams.pem
fullchain.pem
privkey.pem
Now you can either run the setup.sh script like so:
./setup.sh <domain> <dns server>
or set up a new service in a docker-compose.yml file. The following is what I have in my Docker Compose file; review it carefully so it works in your environment rather than just copying and pasting it.
gitpod:
  image: eu.gcr.io/gitpod-core-dev/build/gitpod-k3s:latest
  container_name: gitpod
  labels:
    - traefik.http.routers.gitpod.rule=Host(`domain.com`) || HostRegexp(`domain.com`,`{subdomain:[A-Za-z0-9]+}.domain.com`,`{subdomain:[A-Za-z0-9-_]+}.ws.domain.com`)
    - traefik.http.routers.gitpod.entrypoints=websecure
    - traefik.http.routers.gitpod.service=gitpod
    - traefik.http.routers.gitpod.tls=true
    - traefik.http.services.gitpod.loadbalancer.server.port=443
    - traefik.http.services.gitpod.loadbalancer.server.scheme=https
  environment:
    - DOMAIN=domain.com
    - DNSSERVER=8.8.8.8
  volumes:
    - /etc/localtime:/etc/localtime:ro
    - /run/containerd/containerd.sock:/run/containerd/containerd.sock
    - ${DOCKER_CONF_DIR}/gitpod/values:/values
    - ${DOCKER_CONF_DIR}/gitpod/certs:/certs
    - gitpod-docker:/var/gitpod/docker
    - gitpod-docker-registry:/var/gitpod/docker-registry
    - gitpod-minio:/var/gitpod/minio
    - gitpod-mysql:/var/gitpod/mysql
    - gitpod-workspaces:/var/gitpod/workspaces
  networks:
    - production
  depends_on:
    - traefik
  restart: unless-stopped
  cap_add:
    - SYS_PTRACE

volumes:
  gitpod-docker:
    driver: local
  gitpod-docker-registry:
    driver: local
  gitpod-minio:
    driver: local
  gitpod-mysql:
    driver: local
  gitpod-workspaces:
    driver: local

networks:
  production:
    driver: bridge
    ipam:
      config:
        - subnet: 172.18.0.0/16
          gateway: 172.18.0.1
You'll see I have ${DOCKER_CONF_DIR} in there, which is an environment variable (stored in my .env file) that points to /mnt/data/docker/config on my server. You can set up something similar or hardcode the path in your docker-compose file. Whatever you do, make sure you copy those five (5) SSL files mentioned earlier into a new directory named certs within the gitpod directory there. For example, my SSL certificate files live at /mnt/data/docker/config/gitpod/certs.
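For reference, docker-compose reads that variable from a .env file sitting next to docker-compose.yml; mine boils down to a single line (adjust the path to your server):

```
# .env - picked up automatically by docker-compose; the compose file
# appends gitpod/certs and gitpod/values to this base path
DOCKER_CONF_DIR=/mnt/data/docker/config
```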
You'll also need to create another directory within the gitpod directory called values; for me this is /mnt/data/docker/config/gitpod/values. Within it we're going to create three (3) more YAML files:
auth-providers.yaml
minio-secrets.yaml
mysql.yaml
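You can create the directory and empty files up front; a quick sketch, run from your gitpod config directory (for me, /mnt/data/docker/config/gitpod):

```shell
# Create the values directory and the three files Gitpod expects;
# we'll fill each one in below.
mkdir -p values
touch values/auth-providers.yaml values/minio-secrets.yaml values/mysql.yaml
```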
The first file, auth-providers.yaml, holds my GitHub OAuth application details. You'll need to set this file up in order to log in to your Gitpod instance. I have a GitHub Enterprise instance that I use for authentication. My auth-providers.yaml file looks like this:
authProviders:
  - id: "GitHub"
    host: "githubenterprise.com"
    type: "GitHub"
    oauth:
      clientId: "7d73h3b933829d9"
      clientSecret: "asu8a9sf9h89a9892n2n209201934b8334uhnraf987"
      callBackUrl: "https://gitpod-domain.com/auth/github/callback"
      settingsUrl: "https://githubenterprise.com/settings/connections/applications/7d73h3b933829d9"
    description: ""
    icon: ""
The above is an example and will need to be adjusted accordingly. You can also set it up with GitHub.com, GitLab.com or your own self-hosted GitLab instance.
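For the hosted GitHub.com case, the same structure should apply with the host and settings URL swapped; the angle-bracketed values below are placeholders you'd fill in from your own OAuth app:

```yaml
authProviders:
  - id: "Public-GitHub"
    host: "github.com"
    type: "GitHub"
    oauth:
      clientId: "<your client id>"
      clientSecret: "<your client secret>"
      callBackUrl: "https://gitpod-domain.com/auth/github/callback"
      settingsUrl: "https://github.com/settings/connections/applications/<your client id>"
    description: ""
    icon: ""
```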
The next file, minio-secrets.yaml, needs to contain a username and password for the MinIO instance that runs inside the k3s Kubernetes cluster in the Gitpod Docker container. I used the following command to generate some random strings:
openssl rand -hex 32
I ran that twice: once to create a string for the username and again for the password. Your minio-secrets.yaml should look like this:
minio:
  accessKey: 9d0d6aa1c9d9981fadc103a9e3a5bb56929df51de22439ab1410249c879429b1
  secretKey: 7f0f8ccd7219a1ef87cd30d33751469a491c54df062c8ca28517602576725276
obviously replacing those strings with whatever your own openssl runs produced.
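If you'd rather not paste the strings by hand, the two openssl runs and the file write can be combined; a small sketch, run from your gitpod config directory so it lands in the values directory described above:

```shell
# Generate both random credentials and write values/minio-secrets.yaml
# in one step.
mkdir -p values
ACCESS_KEY=$(openssl rand -hex 32)
SECRET_KEY=$(openssl rand -hex 32)
cat > values/minio-secrets.yaml <<EOF
minio:
  accessKey: ${ACCESS_KEY}
  secretKey: ${SECRET_KEY}
EOF
```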
Now we need to create mysql.yaml with the following contents:
db:
  host: mysql
  port: 3306
  password: test
Once that is all set we can start up the Gitpod Docker container! I run:
docker-compose -p main up -d gitpod
You can adjust this command to your environment; for example, you may not need the -p main bit. Once you run it, it'll take a while for everything to get set up and running. Since I'm impatient :stuck_out_tongue_closed_eyes:, I run this command to watch the status of the k3s Kubernetes cluster being set up inside the Gitpod Docker container:
docker exec gitpod kubectl get all --all-namespaces
You should see output similar to this:
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system pod/local-path-provisioner-7c458769fb-4zxlq 1/1 Running 0 13d
kube-system pod/coredns-854c77959c-pqq94 1/1 Running 0 13d
kube-system pod/metrics-server-86cbb8457f-84kjc 1/1 Running 0 13d
default pod/svclb-proxy-6gxms 2/2 Running 0 13d
default pod/blobserve-6bdd97d5dc-dzfnd 1/1 Running 0 13d
default pod/dashboard-859c9bf868-mhvv7 1/1 Running 0 13d
default pod/ws-manager-84486dc88c-p9dqt 1/1 Running 0 13d
default pod/ws-scheduler-ff4d8d9dd-2wf28 1/1 Running 0 13d
default pod/registry-facade-pvnpn 1/1 Running 0 13d
default pod/content-service-656fd85977-dkrvp 1/1 Running 0 13d
default pod/theia-server-568fb48db5-fhknk 1/1 Running 0 13d
default pod/registry-65ff9d5744-pxx96 1/1 Running 0 13d
default pod/minio-84fcc5d488-zcdj8 1/1 Running 0 13d
default pod/ws-proxy-5d5cd8fc64-tp97v 1/1 Running 0 13d
default pod/image-builder-7d97c4b4fb-wdb9l 2/2 Running 0 13d
default pod/proxy-85b684df9b-fvl77 1/1 Running 0 13d
default pod/ws-daemon-xdnmg 1/1 Running 0 13d
default pod/node-daemon-rn5xs 1/1 Running 0 13d
default pod/messagebus-f98948794-gqcqp 1/1 Running 0 13d
default pod/mysql-7cbb9c9586-l8slq 1/1 Running 0 13d
default pod/gitpod-helm-installer 0/1 Completed 0 13d
default pod/ws-manager-bridge-69856554ff-wxqw9 1/1 Running 0 13d
default pod/server-84cf48b766-pt9gp 1/1 Running 0 13d
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default service/kubernetes ClusterIP 10.43.0.1 <none> 443/TCP 13d
kube-system service/kube-dns ClusterIP 10.43.0.10 <none> 53/UDP,53/TCP,9153/TCP 13d
kube-system service/metrics-server ClusterIP 10.43.97.99 <none> 443/TCP 13d
default service/dashboard ClusterIP 10.43.154.252 <none> 3001/TCP 13d
default service/ws-manager ClusterIP 10.43.49.193 <none> 8080/TCP 13d
default service/theia-server ClusterIP 10.43.200.184 <none> 80/TCP 13d
default service/server ClusterIP 10.43.138.124 <none> 3000/TCP,9500/TCP 13d
default service/ws-proxy ClusterIP 10.43.245.248 <none> 8080/TCP 13d
default service/mysql ClusterIP 10.43.2.56 <none> 3306/TCP 13d
default service/minio ClusterIP 10.43.243.101 <none> 9000/TCP 13d
default service/registry ClusterIP 10.43.10.110 <none> 443/TCP 13d
default service/registry-facade ClusterIP 10.43.194.106 <none> 3000/TCP 13d
default service/messagebus ClusterIP 10.43.135.179 <none> 5672/TCP,25672/TCP,4369/TCP,15672/TCP 13d
default service/blobserve ClusterIP 10.43.36.3 <none> 4000/TCP 13d
default service/content-service ClusterIP 10.43.112.64 <none> 8080/TCP 13d
default service/image-builder ClusterIP 10.43.241.227 <none> 8080/TCP 13d
default service/db ClusterIP 10.43.34.57 <none> 3306/TCP 13d
default service/proxy LoadBalancer 10.43.184.145 172.18.0.72 80:31895/TCP,443:30753/TCP 13d
NAMESPACE NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
default daemonset.apps/svclb-proxy 1 1 1 1 1 <none> 13d
default daemonset.apps/registry-facade 1 1 1 1 1 <none> 13d
default daemonset.apps/ws-daemon 1 1 1 1 1 <none> 13d
default daemonset.apps/node-daemon 1 1 1 1 1 <none> 13d
NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE
kube-system deployment.apps/local-path-provisioner 1/1 1 1 13d
kube-system deployment.apps/coredns 1/1 1 1 13d
kube-system deployment.apps/metrics-server 1/1 1 1 13d
default deployment.apps/blobserve 1/1 1 1 13d
default deployment.apps/dashboard 1/1 1 1 13d
default deployment.apps/ws-manager 1/1 1 1 13d
default deployment.apps/ws-scheduler 1/1 1 1 13d
default deployment.apps/content-service 1/1 1 1 13d
default deployment.apps/theia-server 1/1 1 1 13d
default deployment.apps/minio 1/1 1 1 13d
default deployment.apps/ws-proxy 1/1 1 1 13d
default deployment.apps/image-builder 1/1 1 1 13d
default deployment.apps/proxy 1/1 1 1 13d
default deployment.apps/registry 1/1 1 1 13d
default deployment.apps/messagebus 1/1 1 1 13d
default deployment.apps/mysql 1/1 1 1 13d
default deployment.apps/ws-manager-bridge 1/1 1 1 13d
default deployment.apps/server 1/1 1 1 13d
NAMESPACE NAME DESIRED CURRENT READY AGE
kube-system replicaset.apps/local-path-provisioner-7c458769fb 1 1 1 13d
kube-system replicaset.apps/coredns-854c77959c 1 1 1 13d
kube-system replicaset.apps/metrics-server-86cbb8457f 1 1 1 13d
default replicaset.apps/blobserve-6bdd97d5dc 1 1 1 13d
default replicaset.apps/dashboard-859c9bf868 1 1 1 13d
default replicaset.apps/ws-manager-84486dc88c 1 1 1 13d
default replicaset.apps/ws-scheduler-ff4d8d9dd 1 1 1 13d
default replicaset.apps/content-service-656fd85977 1 1 1 13d
default replicaset.apps/theia-server-568fb48db5 1 1 1 13d
default replicaset.apps/minio-84fcc5d488 1 1 1 13d
default replicaset.apps/ws-proxy-5d5cd8fc64 1 1 1 13d
default replicaset.apps/image-builder-7d97c4b4fb 1 1 1 13d
default replicaset.apps/proxy-85b684df9b 1 1 1 13d
default replicaset.apps/registry-65ff9d5744 1 1 1 13d
default replicaset.apps/messagebus-f98948794 1 1 1 13d
default replicaset.apps/mysql-7cbb9c9586 1 1 1 13d
default replicaset.apps/ws-manager-bridge-69856554ff 1 1 1 13d
default replicaset.apps/server-84cf48b766 1 1 1 13d
A lot of the items will likely read 'Creating' or 'Initializing'. I didn't think to grab the output while my instance was actually being set up, so the output above is what mine looks like once everything is done.
If everything went smoothly, your output from that command should look like mine above, and you should be able to visit your Gitpod instance in your web browser. Assuming you've set up your authentication provider properly, you should be able to log in and start setting up workspaces!
While everything is working, there are a couple of things I still want to figure out to get a "better" instance of Gitpod running:
- Move the Docker volumes onto my 5.6TB RAID5. Right now they're sitting on my SSD, which I prefer to reserve for the boot drive and mostly static files.
- Figure out how to set a better password for MySQL.
- Figure out how to run a newer version of Gitpod. Right now it's using the latest tag, which is version 0.7.0.
- I seem to have issues redeploying the Docker container while leaving the volumes as-is. This forces me to start completely fresh, which is no good.