Restoring Automatic Backup Saves on PSVita

Hopefully you'll never need to perform this action, but if you do, the following steps will allow you to revert a save on your PSVita to an automatically created backup. I personally had to do this when my save file for Final Fantasy X HD got corrupted. I lost about 30 minutes of play time, but that was better than having to start all over again.

  1. First check to see if an automatic backup exists. They should be located at ux0:/user/xx/savedata_backup. You can perform this step and the next two from VitaShell.
  2. Create a backup of the existing save data, which is located at ux0:/user/xx/savedata/$TITLEID. Be sure to change out $TITLEID with the game's actual title ID.
  3. Next navigate to the game's save directory - ux0:/user/xx/savedata/$TITLEID (again remembering to change $TITLEID to the game's title ID). Go into the sce_sys directory and remove the sdslot.dat and keystone files.
  4. Exit VitaShell.
  5. Start up the game. You should receive a message saying the save file is corrupt and asking if you'd like to restore it. Answer 'Yes' here.

Source.

Self-Hosting Gitpod in 2023

As some of you may be aware, Gitpod no longer supports self-hosting. To be clear, this means Gitpod no longer sells licenses for self-hosted installations and no longer officially supports anyone who self-hosts Gitpod. They do, however, provide a community-powered Discord channel where Gitpodders chime in from time to time.

In my last post about setting up Gitpod I talked about using the new installer to install Gitpod on a k3s Kubernetes cluster. This post will be very similar; however, it will focus on setting up Gitpod itself as opposed to the entire cluster and its other components and resources. I recommend referring back to that post if you want a deeper look at how I configured my cluster.

For reference I have a single Dell R620 with 128GBs of RAM and about 5TBs of disk space in RAID6. Since this is just an at-home learning cluster, this is sufficient for me. I created 4 VMs: 1 master node with 4 CPU cores, 8GBs of RAM and 120GBs of disk space, and 3 worker nodes, each with 8 CPU cores, 16GBs of RAM and 200GBs of disk space. Each VM runs Ubuntu 22.04 Server with k3s. I also use MetalLB.

  1. To start off my Gitpod installation I first set my master node to be non-schedulable. This allows my master node to just act as a control plane and not have any other workloads.
    kubectl taint node master-node.domain.com k3s-controlplane=true:NoSchedule
  2. Install cert-manager next. This is necessary to provision TLS certificates for your instance.
    helm repo add jetstack https://charts.jetstack.io
    helm repo update
    helm upgrade \
        --atomic \
        --cleanup-on-fail \
        --create-namespace \
        --install \
        --namespace='cert-manager' \
        --reset-values \
        --set installCRDs=true \
        --set 'extraArgs={--dns01-recursive-nameservers-only=true,--dns01-recursive-nameservers=8.8.8.8:53\,1.1.1.1:53}' \
        --wait \
        cert-manager \
        jetstack/cert-manager
  3. I use a domain that is set up with Cloudflare DNS, so I used the directions here. I set up an Issuer, a Secret for my Cloudflare token and a Certificate, roughly as sketched below.
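
    For reference, here's roughly what those three resources can look like with a Cloudflare DNS-01 solver. This is only a sketch: the names, email and domains are placeholders, everything is placed in the gitpod namespace created in a later step, and the Certificate writes its TLS secret to https-certificates (double-check the certificate section of your generated gitpod.config.yaml to confirm that's the name yours expects).

    apiVersion: v1
    kind: Secret
    metadata:
      name: cloudflare-api-token      # placeholder name
      namespace: gitpod
    stringData:
      api-token: <your Cloudflare API token>
    ---
    apiVersion: cert-manager.io/v1
    kind: Issuer
    metadata:
      name: gitpod-issuer             # placeholder name
      namespace: gitpod
    spec:
      acme:
        email: you@mydomain.com
        server: https://acme-v02.api.letsencrypt.org/directory
        privateKeySecretRef:
          name: gitpod-issuer-account-key
        solvers:
          - dns01:
              cloudflare:
                apiTokenSecretRef:
                  name: cloudflare-api-token
                  key: api-token
    ---
    apiVersion: cert-manager.io/v1
    kind: Certificate
    metadata:
      name: https-certificates
      namespace: gitpod
    spec:
      secretName: https-certificates
      issuerRef:
        name: gitpod-issuer
        kind: Issuer
      dnsNames:
        - mydomain.com
        - "*.mydomain.com"
        - "*.ws.mydomain.com"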
  4. I then added the necessary labels to my worker nodes so that Gitpod could utilize them:
    for i in node1.mydomain.com node2.mydomain.com node3.mydomain.com ; do kubectl label node $i gitpod.io/workload_meta=true gitpod.io/workload_ide=true gitpod.io/workload_workspace_services=true gitpod.io/workload_workspace_regular=true gitpod.io/workload_workspace_headless=true ; done
  5. Next visit the Werft site that Gitpod has set up. This shows all the builds that have run for Gitpod and various other components. In the search box, enter gitpod-build-main. (builds are named like gitpod-build-main.6500). This should bring up a list of the recent Gitpod builds. Be sure to select the latest one that has a green checkmark; this means the build process was successful, so we should see that same success when deploying our instance.
  6. If you haven't already, clone the gitpod-io/gitpod repository.
    git clone https://github.com/gitpod-io/gitpod.git
  7. Navigate into the cloned repository and go into the install/installer directory. Once you're in that directory, run the following commands. Ensure that within the first command you update the main.6500 part to reflect whichever build you found on the Werft website.
    docker create -ti --name installer eu.gcr.io/gitpod-core-dev/build/installer:main.6500
    docker cp installer:/app/installer ./installer
    docker rm -f installer

    This will create a new installer for you using that build of Gitpod.

  8. Next create the gitpod namespace:
    kubectl create namespace gitpod
  9. Create the base gitpod.config.yaml file by running:
    ./installer init > gitpod.config.yaml
  10. Using your favorite text editor, open the new configuration file and update it to match your setup. At minimum you need to set the domain, workspace.runtime.containerdRuntimeDir and workspace.runtime.containerdSocket. Since we're using k3s, we should set those runtime values to:
    containerdRuntimeDir: /run/k3s/containerd/io.containerd.runtime.v2.task/k8s.io
    containerdSocket: /run/k3s/containerd/containerd.sock

    I also set up the authProviders as a Secret so I could authenticate to my Gitpod instance with my GitHub Enterprise instance. Here's what the section in gitpod.config.yaml looks like, as well as the Secret.

    authProviders:
      - kind: secret
        name: github-enterprise

    and the contents for the Secret:

    id: GitHub Enterprise
    host: github-enterprise.com
    type: GitHub
    oauth:
      clientId:
      clientSecret:
      callBackUrl:

    You'll need to fill in the details with your own information. You don't have to do this now; if you skip it, you'll be required to set up an SCM when you bring up your Gitpod instance in your web browser after deploying it.

  11. You can validate your gitpod.config.yaml configuration file using:
    ./installer validate config --config gitpod.config.yaml
  12. Once your configuration has been validated, you can validate that your cluster is set up properly using the following command:
    ./installer validate cluster --kubeconfig ~/.kube/config --config gitpod.config.yaml
  13. If everything from the previous commands checks out, we'll generate the gitpod.yaml file, which contains all the necessary resources for Gitpod to run.
    ./installer render --config gitpod.config.yaml --namespace gitpod > gitpod.yaml
  14. Run the following command to deploy Gitpod to your cluster:
    kubectl apply -f gitpod.yaml

    You can run watch -n5 kubectl get all -n gitpod to watch the namespace and its resources.

  15. Once everything has been deployed you should be able to visit your Gitpod instance in your web browser and start using it!

Notes

  • As noted above please join us in the #self-hosted-discussions channel on the Gitpod Discord server. I try to keep an eye on the channel and follow up on as many threads as I can.
  • If you're experiencing an issue with the MinIO pod not starting up, please leave a comment below. I didn't include my notes about it in this post as I am not sure if it affects new installations or just upgrades. I also haven't seen other users having issues with it, but if it's more widespread I'd be happy to update this post with information on how I resolved the problems.

Fixing Misaligned Text in the snes-mini EmulationStation Theme

I was made aware of this theme in a Reddit thread (which I can no longer find); however, upon installing it I noticed that the text in the listing of games under a console was misaligned.

Misaligned Text in snes-mini Theme

Thankfully a quick Google search turned up this issue on GitHub. One commenter was able to resolve the issue by editing one of the layout files. Applying the fix and restarting EmulationStation on my RetroPie did the trick, and now the text is properly aligned! If you're using a 1920x1080 resolution (which is the default) you can replace the contents of /etc/emulationstation/themes/snes-mini/layouts/1920x1080.xml with the following to fix the misaligned text:

<!--
author: ruckage
-->

<theme>
  <view name="basic,detailed,video">
    <textlist name="gamelist">
      <pos>${listx} 0.19537037037037</pos>
      <size>${listWidth} 0.62962962962963</size>
      <lineSpacing>1.375</lineSpacing>
      <selectorHeight>0.0814814814814815</selectorHeight>
      <selectorOffsetY>-0.0111111111111111</selectorOffsetY>
    </textlist>
  </view>
</theme>

If you're not using 1920x1080, you can head on over to this PR on GitHub to see the fixes for other resolutions.

This is definitely one of the better themes I've seen for EmulationStation; the rest look pretty awful in my opinion.

Fixed snes-mini Theme

Installing cPanel & WHM DNSOnly to a 1GB DO Droplet

Within the past 3 years it appears DigitalOcean made some adjustments to their droplet environments, particularly in how memory is reported. Now 1GB of memory shows up as 828628 kB instead of 1014776 kB, which prevents DNSOnly from being installed. Fortunately there's a way to work around this so you can still install DNSOnly on a 1GB droplet!
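
You can see the memory figure the installer is likely reacting to by checking the kernel's reported total memory. On a current 1GB droplet it looks something like:

grep MemTotal /proc/meminfo
MemTotal:         828628 kB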

First provision your droplet. I provisioned mine with CentOS 8 since DigitalOcean doesn't offer AlmaLinux directly. If you go this route, you can use a nifty script provided by the AlmaLinux team which will convert you from CentOS 8.x over to AlmaLinux 8.x.

Once you've got your base system set, run the following commands to move into the /home directory and then fetch the DNSOnly install script.

cd /home
curl -o latest-dnsonly -L https://securedownloads.cpanel.net/latest-dnsonly

Next we'll want to run the script with the --keep flag so the install files are kept.

sh latest-dnsonly --keep

Once this is done running you should see a new directory, installd. Go into that directory and open the Installer.pm file in your editor of choice. Here I made a slight adjustment that inverts the memory check, so the installer only bails out if the total memory is greater than the minimum; this effectively skips the low-memory check.

--- Installer.pm.orig   2021-11-19 11:46:36.000000000 -0600
+++ Installer.pm    2022-01-03 12:53:12.000000000 -0600
@@ -293,7 +293,7 @@
     my $total_memory = $self->get_total_memory();
     my $minmemory    = $self->distro_major == 6 ? 768 : 1_024;

-    if ( $total_memory < $minmemory ) {
+    if ( $total_memory > $minmemory ) {
         ERROR("cPanel, L.L.C. requires a minimum of $minmemory MB of RAM for your operating system.");
         FATAL("Increase the server's total amount of RAM, and then reinstall cPanel & WHM.");
     }

Now, while still in the installd directory run the following command to install cPanel & WHM DNSOnly:

./bootstrap-dnsonly

The process should take 10-15 minutes. Once it's completed, you'll have DNSOnly installed on your 1GB DigitalOcean droplet! 🎉 If you found this guide useful and want to use DigitalOcean, please use this link; it helps me afford my infrastructure!

Resources

Setting Up Gitpod on k3s Using the NEW Installer

For my new Gitpod instance I used 4 VMs. The first VM is going to be the master/controller node and the other three will be my worker nodes.

I created my controller node with 2 CPU cores, 4GBs of RAM and 80GBs of disk space. I used Ubuntu Server 20.04.3 for the operating system. During the install process I set up my partitions as follows:

/boot - 1G
/ - remaining space

Once the install of Ubuntu completed I started setting the system up as follows:

Setting Up the Kubernetes Controller

  1. For whatever reason Ubuntu doesn't allow '.' in the hostname when going through the installation process, so the first step is to fix the hostname in /etc/hostname. I set mine to kubemaster.mydomain.com. Then run hostname -F /etc/hostname.

  2. This step is optional. My Kubernetes cluster controller and worker nodes all have private IP addresses. I've allocated the IP address block 192.168.102.1 - 192.168.102.253 to the cluster. This is not required if your systems have public IPs and resolvable hostnames. I did this so each of the systems in my cluster knows how to properly communicate with the others. My hosts file looked like this:

    127.0.1.1 kubemaster.mydomain.com kubemaster
    192.168.103.1 kubemaster.mydomain.com
    192.168.103.2 kubenode1.mydomain.com
    192.168.103.3 kubenode2.mydomain.com
    192.168.103.4 kubenode3.mydomain.com
  3. Next set the timezone of the system to your timezone (in my case it's America/Chicago) - dpkg-reconfigure tzdata.

  4. Run apt -y update && apt -y upgrade to update the software on the system.

  5. Now we'll install MariaDB 10.5. This is used to store some information from k3s.

    apt -y install curl software-properties-common dirmngr
    apt-key adv --fetch-keys 'https://mariadb.org/mariadb_release_signing_key.asc'
    add-apt-repository 'deb [arch=amd64,arm64,ppc64el,s390x] https://mirror.rackspace.com/mariadb/repo/10.5/ubuntu focal main'
    apt -y install mariadb-server mariadb-client
    systemctl status mariadb
    systemctl enable mariadb
    mysql_secure_installation
  6. Create the database for k3s. You'll want to login using the root MySQL password you set when you ran mysql_secure_installation in the previous step.

    mysql -u root -p
    CREATE DATABASE k3s CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;
    CREATE USER 'k3s'@'localhost' IDENTIFIED BY '$PASSWORD';
    GRANT ALL PRIVILEGES ON k3s.* TO 'k3s'@'localhost';
    FLUSH PRIVILEGES;
    quit

    Make sure to actually change out $PASSWORD with a secure password.

  7. I install k3s:

    curl -sfL https://get.k3s.io | sh -s - --datastore-endpoint="mysql://k3s:$PASSWORD@tcp(localhost:3306)/k3s" --write-kubeconfig-mode 644 --disable traefik --disable servicelb

    Make sure you swap out $PASSWORD with the password you set for the k3s MySQL user in the previous step.

  8. Edit /etc/fstab and comment out the line for swap.

    # /swap.img none swap sw 0 0
  9. Reboot the server (reboot). Once rebooted, log back into the system and run free -mh to make sure swap shows as 0, like so:

    Swap:            0B          0B          0B

    If it does, then remove the /swap.img file.

  10. Next we need to grab the k3s node-token and save it somewhere safe. It'll be a long string of characters and numbers.

    cat /var/lib/rancher/k3s/server/node-token
  11. I also have another system that I connect to my cluster from. If you do too grab the configuration file from the cluster:

    cat /etc/rancher/k3s/k3s.yaml

    Take the contents there and place it into the /home/user/.kube/config file if you're on Linux or /Users/user/.kube/config file if on macOS. You may need to create the .kube directory first. You'll need to edit the file to point to the IP address of your Kubernetes controller. You should find a line inside the config file that looks like:

    server: https://127.0.0.1:6443

    You'll want to change 127.0.0.1 to the IP or hostname of your controller. Save and exit the file.
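
    If you'd rather script that edit, something like this works (assuming the default file contents shown above; on macOS use sed -i '' in place of sed -i):

    sed -i 's|server: https://127.0.0.1:6443|server: https://kubemaster.mydomain.com:6443|' ~/.kube/config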

  12. Run kubectl get nodes. This should return something like:

     NAME                        STATUS   ROLES                  AGE     VERSION
     kubemaster.mydomain.com    Ready    control-plane,master   4d23h   v1.21.7+k3s1

Setting Up the Kubernetes Worker Nodes

The next three VMs are going to be our worker nodes where pods can be spun up and such. I created each of these with 8 CPU cores, 8GBs of RAM and 250GBs of disk space. I've used Ubuntu Server 20.04.3 on these as well. During the install process I set up my partitions as follows:

/boot - 1G
/ - remaining space

Once the operating system has finished installing we can go through the following steps to finish setting up the system.

  1. First I fix the hostname in /etc/hostname. I set mine to kubenodeX.mydomain.com where 'X' is a number. So for example, on my first worker node, the hostname is kubenode1.mydomain.com, the second worker is kubenode2.mydomain.com and so on. Once that file has been edited and saved I run hostname -F /etc/hostname to set the hostname.

  2. This step is optional. My Kubernetes cluster controller and worker nodes all have private IP addresses. I've allocated the IP address block 192.168.102.1 - 192.168.102.253 to the cluster. This is not required if your systems have public IPs and resolvable hostnames. I did this so each of the systems in my cluster knows how to properly communicate with the others. My hosts file looked like this:

    127.0.1.1 kubemaster.mydomain.com kubemaster
    192.168.103.1 kubemaster.mydomain.com
    192.168.103.2 kubenode1.mydomain.com
    192.168.103.3 kubenode2.mydomain.com
    192.168.103.4 kubenode3.mydomain.com
  3. Next set the system time to use your timezone (in my case it's America/Chicago) - dpkg-reconfigure tzdata.

  4. Run apt -y update && apt -y upgrade to update all the packages on the system.

  5. Next we're going to install k3s in agent mode:

    export K3S_URL=https://kubemaster.mydomain.com:6443
    export K3S_TOKEN=K104887s5p9182394ydc31c4988f6761844fe71e54ee93f6f64a76dsa87df800c86::server:39aef067sa87d8as9d6d7fb981db4
    curl -fsL https://get.k3s.io | K3S_URL=$K3S_URL K3S_TOKEN=$K3S_TOKEN sh -s agent

    Make sure to set K3S_URL properly. You can use the hostname or IP address of your controller node here. Make sure you also set the K3S_TOKEN with the node-token you got from your controller node.

  6. Edit /etc/fstab and comment out the line for swap.

    # /swap.img none swap sw 0 0
  7. Reboot the server (reboot). Once rebooted, log back into the system and run free -mh to make sure swap shows as 0, like so:

    Swap:    0B    0B    0B

    If it does, then remove the /swap.img file.

You'll repeat these 7 steps for each of your Kubernetes worker nodes.

Installing Rancher

This section is 100% optional. I like to have Rancher as it gives me a visual overview of my Kubernetes cluster and also lets me manage the cluster from it. I should note that I have Traefik running, which handles fetching and managing the TLS certificate for my Rancher instance.

  1. From the system which has access to your Kubernetes cluster via kubectl, create a new namespace in your cluster. This is a requirement of Rancher.

    kubectl create namespace cattle-system
  2. Next we'll need to add the Helm repository for the latest version(s) of Rancher:

    helm repo add rancher-latest https://releases.rancher.com/server-charts/latest
  3. As noted before, I already have Traefik set up and handling TLS certificates, so this is the command I use to deploy Rancher to my Kubernetes cluster:

    helm install rancher rancher-latest/rancher \
        --namespace cattle-system \
        --set hostname=rancher.mydomain.com \
        --set tls=external

    This will deploy the latest version of Rancher to your Kubernetes cluster. You can run the following command to check the progress of the deployment:

    kubectl -n cattle-system rollout status deploy/rancher
  4. Once the deployment is complete you will likely need to assign an external IP to the Rancher service. Make sure you assign an IP to the service where Rancher is actually deployed. You can use a command like:

    kubectl patch svc rancher -p '{"spec":{"externalIPs":["192.168.103.10"]}}' -n cattle-system
  5. Now you should be able to visit https://rancher.yourdomain.com.

Installing Gitpod

Yay, we made it this far! Now we can get to installing Gitpod! For the following you need to be on a system that has access to your Kubernetes cluster (via kubectl), and that has Docker installed.

  1. First we're going to add labels to our Kubernetes worker nodes. These are required for Gitpod. The following command may be different for you, but you'll want to give the hostnames of all of your worker nodes in the for i in bit. For me this looked like:

    for i in kubenode1.mydomain.com kubenode2.mydomain.com kubenode3.mydomain.com ; do kubectl label node $i gitpod.io/workload_meta=true gitpod.io/workload_ide=true gitpod.io/workload_workspace_services=true gitpod.io/workload_workspace_regular=true gitpod.io/workload_workspace_headless=true ; done

    This will add all the necessary labels to your worker nodes in a single command.

  2. Next we need to grab the pre-built installer. First visit Werft and pick a build you want to use, for example gitpod-build-main.2071 - main.2071. We'll use the following commands to extract the installer:

    docker create -ti --name installer eu.gcr.io/gitpod-core-dev/build/installer:main.2071
    docker cp installer:/app/installer ./installer
    docker rm -f installer

    Note that the build number goes on the end of that first command's image tag.

  3. The next command will generate a base configuration file:

    ./installer init > gitpod.config.yaml

    From here you can open your gitpod.config.yaml file to customize it. At minimum, make sure to set the domain: option to your domain and update the workspace.runtime.containerdRuntimeDir and workspace.runtime.containerdSocket values.

    containerdRuntimeDir: /run/k3s/containerd/io.containerd.runtime.v2.task/k8s.io
    containerdSocket: /run/k3s/containerd/containerd.sock

    I also set up the authProviders: section, as I have a GitHub Enterprise instance. It should be noted that authProviders may be moved to a secret in the future.

    You can also set up external cluster dependencies such as an external Docker registry, database, and object storage. In order to use an external dependency you'll need to set its inCluster setting to false. For example, if using an external database, the database section would look like:

    database:
      inCluster: false
      external:
        certificate:
          kind: secret
          name: database-token

    Your database-token secret needs to have the following key/value pairs (an example command follows this list):

    • encryptionKeys - database encryption key
    • host - IP or URL of the database
    • password - database password
    • port - database port, usually 3306
    • username - database username
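
    For example, the secret could be created with kubectl like this (the values are placeholders; use whichever namespace you'll be deploying Gitpod into):

    kubectl create secret generic database-token -n gitpod \
        --from-literal=encryptionKeys='<database encryption key>' \
        --from-literal=host=db.mydomain.com \
        --from-literal=password='<database password>' \
        --from-literal=port=3306 \
        --from-literal=username=gitpod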

  4. Next we'll need to install cert-manager into our Kubernetes cluster. This is required even if you're providing your own TLS certificate for Gitpod. cert-manager will be used to generate certificates for internal Gitpod services.

    helm repo add jetstack https://charts.jetstack.io
    helm repo update
    helm upgrade \
        --atomic \
        --cleanup-on-fail \
        --create-namespace \
        --install \
        --namespace='cert-manager' \
        --reset-values \
        --set installCRDs=true \
        --set 'extraArgs={--dns01-recursive-nameservers-only=true,--dns01-recursive-nameservers=8.8.8.8:53\,1.1.1.1:53}' \
        --wait \
        cert-manager \
        jetstack/cert-manager
  5. Next I create a new namespace in Kubernetes for Gitpod. You don't have to do this; Gitpod will use the 'default' namespace by default. I'm sort of an organizational freak, so I prefer to keep everything Gitpod-related in its own namespace. The following command creates a new namespace, gitpod.

    kubectl create namespace gitpod
  6. Since I have my own TLS certificate for Gitpod, I manually created the https-certificates secret in Kubernetes. If you're familiar with doing this via the command-line, go for it (there's a command-line sketch after these steps); I used Rancher. Of note, you must ensure your TLS certificate covers domain.com, *.domain.com and *.ws.domain.com. If you'd like to create the 'https-certificates' secret via Rancher, you may follow these steps:

    First I bring up https://rancher.mydomain.com, and login as necessary. You should see a screen that has the following:

    Rancher Home

    Click on 'local'. From the sidebar on the left side click on 'Storage' and then 'Secrets'.

    Rancher > Storage > Secrets

    Then click on the blue 'Create' button.

    Rancher > Storage > Secrets > Create

    On the page that comes up with 4-5 boxes, click on the TLS Certificate box.

    Rancher > Storage > Secrets > Create > TLS Certificate

    On the next page ensure the Namespace is set to 'gitpod' and the Name is set to 'https-certificates'.

    Rancher > Storage > Secrets > Create > TLS Certificate

    Fill in the Private Key and Certificate files accordingly. I used a certificate from Let's Encrypt so I pasted in the contents of the fullchain.pem file into the Certificate field. Hit the blue 'Create' button when you're set.
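
    If you'd rather create the secret on the command-line instead of via Rancher, a one-liner like this should do it (assuming your Let's Encrypt private key and full chain are in privkey.pem and fullchain.pem):

    kubectl create secret tls https-certificates -n gitpod \
        --cert=fullchain.pem \
        --key=privkey.pem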

  7. Next validate the Gitpod configuration file:

    ./installer validate config --config gitpod.config.yaml

    Hopefully everything looks good here and "valid": true is returned.

  8. Next we need to check that our Kubernetes cluster is setup properly for Gitpod:

    ./installer validate cluster --kubeconfig ~/.kube/config --config gitpod.config.yaml

    You may see an error about the https-certificates secret not being found, but that should be fine to ignore.

  9. Render the Gitpod YAML file. This is what will be used to deploy Gitpod to your Kubernetes cluster.

    ./installer render --config gitpod.config.yaml --namespace gitpod > gitpod.yaml
  10. TIME TO DEPLOY GITPOD!!

    kubectl apply -f gitpod.yaml

    You can watch as things get setup and deployed in Rancher or on the command-line you can run:

    watch -n5 kubectl get all -n gitpod

    or if you just want to watch the pods:

    watch -n5 kubectl get pods -n gitpod

    If everything is all good and happy, all of the pods should show as Running.

  11. If you run kubectl get svc -n gitpod you may notice the proxy service doesn't have an external IP. This is normal, since we don't have anything running to assign external IPs at the moment.

    proxy    LoadBalancer    10.43.230.187    <none>    80:32673/TCP,443:32262/TCP,9500:32178/TCP    4d19h

Installing MetalLB

This step is 100% optional as well if you already have a service that assigns external IPs. If you don't and want something quick and easy then let's have a look at MetalLB.

Again from our system which has access to our Kubernetes cluster (via kubectl) we'll setup MetalLB.

  1. Install MetalLB via manifests:

    kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.11.0/manifests/namespace.yaml
    kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.11.0/manifests/metallb.yaml

    This will install MetalLB under the metallb-system namespace.

  2. We can do this next step either via Rancher or on the command-line (a command-line sketch follows at the end of this step). I've done it via Rancher myself, but the choice is yours. In Rancher I click on the 'Import YAML' button found in the upper right corner:

    Rancher > Import YAML

    Next I paste in the following as a template:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      namespace: metallb-system
      name: config
    data:
      config: |
        address-pools:
        - name: default
          protocol: layer2
          addresses:
          - 192.168.1.240-192.168.1.250

    I changed the last line to be the IP range that I want MetalLB to hand out. In my case this is 192.168.102.3-192.168.102.253. Make sure to select 'metallb-system' from the 'Default Namespace' dropdown menu. Hit the blue 'Import' button when you're all set.
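
    For the command-line route mentioned above, you can simply save the same ConfigMap (with your own address range, 192.168.102.3-192.168.102.253 in my case) to a file such as metallb-config.yaml and apply it:

    kubectl apply -f metallb-config.yaml
    kubectl -n metallb-system get configmap config -o yaml   # verify it landed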

  3. I've also set up IP blocks on my worker nodes. I gave 192.168.102.10 - 192.168.102.19 to kubenode1.mydomain.com, 192.168.102.20 - 192.168.102.29 to kubenode2.mydomain.com, and 192.168.102.30 - 192.168.102.39 to kubenode3.mydomain.com. I set that up within their /etc/netplan/00-installer-config.yaml files.
  4. Back on the command-line if you run kubectl get svc -n gitpod you should see the proxy service now has an IP address:

    proxy    LoadBalancer    10.43.230.187    192.168.103.11    80:32673/TCP,443:32262/TCP,9500:32178/TCP    4d19h

Converting From MySQL 8.0 to MariaDB 10.5 on a cPanel & WHM Server

Over the last couple of days I've had to install cPanel & WHM a few times, each time forgetting to set up a /root/cpanel_profile/cpanel.config with my preferred database type/version. Thankfully, since these were new installs of cPanel & WHM, I had no databases in place, so making the transition from the default install of MySQL to MariaDB 10.5 was easy.
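
For what it's worth, if you do remember ahead of time, pre-seeding that profile is just a matter of dropping the preferred database version into the file before running the installer, for example:

mkdir -p /root/cpanel_profile
echo "mysql-version=10.5" > /root/cpanel_profile/cpanel.config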

First we need to remove MySQL 8.0. All of the following commands should be run as the root user.

dnf -y remove mysql-community-*

Next we need to remove the MySQL data directory. You can also rename it, but since there's nothing in there that we need, I opted to just remove it outright.

rm -rf /var/lib/mysql

Now we need to update the database version we want on our server (in our case MariaDB 10.5) in the cPanel configuration file. You can either use your favorite command-line editor to edit the /var/cpanel/cpanel.config file or use the following command.

sed -i 's/mysql-version=8.0/mysql-version=10.5/g' /var/cpanel/cpanel.config

Next we'll use a nifty API command to install MariaDB!

whmapi1 start_background_mysql_upgrade version=10.5

If you'd like to watch the progress of the install, you can run the following. Please note, the command won't be exactly as shown below because the timestamp on the log file (the filename of the log) will be different for everyone.

tail -f /var/cpanel/logs/mysql_upgrade.20211213-12345/unattended_background_upgrade.log

It took about 5 minutes or so for the install to complete on my system. Once this is done, you should be able to type in mysql -V and see output similar to:

mysql  Ver 15.1 Distrib 10.5.13-MariaDB, for Linux (x86_64) using readline 5.1

Resources

Using OpenVSCode Server on a Raspberry Pi 4

Currently OpenVSCode Server is released with support for x86_64 architectures only. Thankfully, though, it's quite easy to get it going on a Raspberry Pi (4B in my case).

  1. Log in to your Raspberry Pi. I created a bin folder in my home directory to store Node.js, but anywhere should work.
  2. Download the Node.js binaries (wget https://nodejs.org/dist/v14.18.1/node-v14.18.1-linux-arm64.tar.xz). This fetches the latest LTS version.
  3. Uncompress the archive - tar xvf node-v14.18.1-linux-arm64.tar.xz.
  4. Next I updated $PATH so it would make calling node and npm easier. I did this by editing my ~/.bashrc file - export PATH="/home/jimmy/bin/node-v14.18.1-linux-arm64/bin:$PATH".
  5. Next I downloaded the latest OpenVSCode Server release - wget https://github.com/gitpod-io/openvscode-server/releases/download/openvscode-server-v1.61.0/openvscode-server-v1.61.0-linux-x64.tar.gz - and uncompressed it - tar zxvf openvscode-server-v1.61.0-linux-x64.tar.gz.
  6. You can either remove the node_modules folder inside the openvscode-server-v1.61.0-linux-x64 folder or just move it aside. I moved it aside - mv node_modules/ node_modules.old/.
  7. Now run npm install and it will reinstall the necessary Node.js modules. This may take a minute or two.
  8. Once the modules have been installed run - node ./out/server.js.

That's it! You should see something like:

[jimmy@my.raspberry.pi ~/openvscode-server/openvscode-server-v1.61.0-linux-x64]$ node ./out/server.js
[main 2021-10-22T19:50:06.927Z] Web UI available at http://localhost:3000

I don't know if this is the best way to get OpenVSCode Server running on a Raspberry Pi, but it was the quickest and easiest way I could find! Ideally the Gitpod team updates their GitHub workflow (assuming that's what they're using to create releases) to build a release for arm64; I can't imagine it would be too difficult.
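
If you want it to come back up on its own after a reboot, one option is a small systemd unit. This is just a sketch assuming the paths used above; adjust the user, Node.js path and install directory to match your system:

# /etc/systemd/system/openvscode-server.service (sketch - paths assume the setup above)
[Unit]
Description=OpenVSCode Server
After=network.target

[Service]
User=jimmy
WorkingDirectory=/home/jimmy/openvscode-server/openvscode-server-v1.61.0-linux-x64
ExecStart=/home/jimmy/bin/node-v14.18.1-linux-arm64/bin/node ./out/server.js
Restart=on-failure

[Install]
WantedBy=multi-user.target

Then sudo systemctl daemon-reload && sudo systemctl enable --now openvscode-server should start it and keep it starting at boot.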

How To Change the ID of a Proxmox VM

For whatever reason I find gaps in the IDs between VMs annoying. I don't like seeing something such as:

VM100 - 100
VM101 - 101
VM103 - 103
VM104 - 104

Thankfully there's a way to adjust the IDs of VMs. I would recommend taking a backup of the VM and its configuration file(s) beforehand. Once you're ready, run the following command to display information about your logical volumes.

lvs -a

This should display something similar to:

  LV              VG   Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  vm-100-disk-0   data -wi-ao---- 120.00g                                                    
  vm-101-disk-0   data -wi-ao----  80.00g                                                    
  vm-101-disk-1   data -wi-ao---- 120.00g                                                    
  vm-102-disk-0   data -wi-a----- 120.00g                                                    
  vm-103-disk-0   data -wi-ao----  80.00g                                                    
  vm-104-disk-0   data -wi-ao----  80.00g                                                    
  vm-105-disk-0   data -wi-ao---- 320.00g                                                    
  vm-105-disk-1   data -wi-ao---- 120.00g                                                    
  vm-105-disk-2   data -wi-ao---- 120.00g                                                    
  vm-106-disk-0   data -wi-ao----  80.00g                                                    
  vm-107-disk-0   data -wi-ao---- 120.00g                                                    
  data            pve  twi-a-tz--  59.66g             0.00   1.59                            
  [data_tdata]    pve  Twi-ao----  59.66g                                                    
  [data_tmeta]    pve  ewi-ao----   1.00g                                                    
  [lvol0_pmspare] pve  ewi-------   1.00g                                                    
  root            pve  -wi-ao----  27.75g                                                    
  swap            pve  -wi-ao----   8.00g 

Next, determine which VM's ID you want to change. In the following commands I will be changing VM ID 101 to 100.

This command will update the name of the logical volume:

lvrename data/vm-101-disk-0 vm-100-disk-0

Next we want to update the ID in the VM's configuration file. Note that this replaces every occurrence of 101 in the file, so double-check the result afterwards in case something unrelated happened to contain 101:

sed -i "s/101/100/g" /etc/pve/qemu-server/101.conf

After that we want to rename the VM's configuration file:

mv /etc/pve/qemu-server/101.conf /etc/pve/qemu-server/100.conf

Once those commands have been run you can start the VM up again.
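
For example, from the Proxmox host you can confirm the new ID and start the VM:

qm list        # the VM should now show up under its new ID
qm start 100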

Self-Hosting Dependabot via Docker

Dependabot is a really neat tool that helps keep your dependencies secure and up to date. It creates pull requests to your Git repositories with the updated dependencies. It works with a wide variety of package managers and languages like NPM/Yarn, Composer, Python, Ruby, Docker, Rust, and Go.

As someone who uses GitHub Enterprise, I had to do a little bit of extra work in order to self-host Dependabot. After fiddling around with it for a few days, I've finally gotten it working, so I figured it would be worth writing up and sharing with everyone!

My setup consists of a server dedicated to running Docker containers; however, any AMD64 system where Docker can run should do the trick. First I cloned the dependabot-script Git repository (I ran this in my /home/jimmy/Developer/github.com/dependabot directory - but you can put it wherever you'd like):

git clone https://github.com/dependabot/dependabot-script.git

Next, I pulled the dependabot-core Docker image:

docker pull dependabot/dependabot-core

Once the Docker image has been pulled we need to run it to install some dependencies:

docker run -v "$(pwd):/home/dependabot/dependabot-script" -w /home/dependabot/dependabot-script dependabot/dependabot-core bundle install -j 3 --path vendor

Make sure you're in the cloned dependabot-script directory (/home/jimmy/Developer/github.com/dependabot/dependabot-script directory for me) when you run that. It shouldn't take very long to run.

Next we need to make a little change to fix an issue which seems to prevent Dependabot from running properly. So let's run this:

docker run -d -v "$(pwd):/home/dependabot/dependabot-script" \
    -w /home/dependabot/dependabot-script \
    dependabot/dependabot-core sleep 300

This will start up Dependabot as a detached container and it'll sleep for 300 seconds before exiting. This should give us enough time to run a couple commands. Once the above command has been run, use the following command to enter into the container:

docker ps |grep dependabot-core # get the id of the container

docker exec -it $containerId bash

You should now be inside your Dependabot container. I was able to find this issue on GitHub, which allowed me to fix and run Dependabot without issue. We need to edit the Gemfile, which can be done from inside the container or outside; it's up to you. I initially did it from inside the container, but either works. Since nano wasn't available I had to install it first; I didn't check whether vi or vim were available, but if they aren't you can take a similar approach. From within the container I ran:

apt -y update && apt -y install nano
nano Gemfile

I then edited:

gem "dependabot-omnibus", "~> 0.118.8"

to

gem "dependabot-omnibus", "~> 0.130.2"

Save and exit. Then run:

bundle _1.17.3_ install
bundle _1.17.3_ update

Once that was done, I exited the container and attempted to run Dependabot normally.

docker run --rm -v "$(pwd):/home/dependabot/dependabot-script" \
    -w /home/dependabot/dependabot-script \
    -e GITHUB_ACCESS_TOKEN=$GITHUB_ACCESS_TOKEN \
    -e GITHUB_ENTERPRISE_HOSTNAME=$GHE_HOSTNAME \
    -e GITHUB_ENTERPRISE_ACCESS_TOKEN=$GITHUB_ENTERPRISE_ACCESS_TOKEN \
    -e PROJECT_PATH=jimmybrancaccio/emil-scripts \
    -e PACKAGE_MANAGER=composer \
    dependabot/dependabot-core bundle exec ruby ./generic-update-script.rb

I recommend going to GitHub.com and setting up a personal access token (I only checked off the repo checkbox - but even that might not be needed). This allows you to make more requests to the GitHub.com API; without it I ran into API rate-limiting quickly. If you do create a personal access token for GitHub.com, replace $GITHUB_ACCESS_TOKEN with your token, otherwise just remove that whole line. Next you'll want to replace $GHE_HOSTNAME with your actual GitHub Enterprise hostname. You can either replace $GITHUB_ENTERPRISE_ACCESS_TOKEN with a personal access token from your own account on your GitHub Enterprise instance, or do what I did and create a separate account for Dependabot and generate a personal access token for that account. After that you just need to make sure PROJECT_PATH and PACKAGE_MANAGER have proper values.

I wrote a very simple Bash script with essentially a bunch of those docker run "blocks", one for each repository that I wanted Dependabot to monitor. I also set up a cronjob to run the script once a day. You can set that part up however you see fit, but a rough sketch follows.
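
This is only a sketch; the script name, the second repository and the cron schedule are placeholders, and the docker run block is the same one from above:

#!/usr/bin/env bash
# run-dependabot.sh - rough sketch; adjust paths, tokens and repositories to your setup
# GITHUB_ACCESS_TOKEN, GHE_HOSTNAME and GITHUB_ENTERPRISE_ACCESS_TOKEN are expected to
# be exported in the environment (or hard-coded here if you prefer).

cd /home/jimmy/Developer/github.com/dependabot/dependabot-script || exit 1

run_dependabot() {
    # $1 = PROJECT_PATH, $2 = PACKAGE_MANAGER
    docker run --rm -v "$(pwd):/home/dependabot/dependabot-script" \
        -w /home/dependabot/dependabot-script \
        -e GITHUB_ACCESS_TOKEN=$GITHUB_ACCESS_TOKEN \
        -e GITHUB_ENTERPRISE_HOSTNAME=$GHE_HOSTNAME \
        -e GITHUB_ENTERPRISE_ACCESS_TOKEN=$GITHUB_ENTERPRISE_ACCESS_TOKEN \
        -e PROJECT_PATH="$1" \
        -e PACKAGE_MANAGER="$2" \
        dependabot/dependabot-core bundle exec ruby ./generic-update-script.rb
}

run_dependabot jimmybrancaccio/emil-scripts composer
run_dependabot jimmybrancaccio/another-repo composer   # one line per repository

The cronjob is then just a normal crontab entry along the lines of 0 6 * * * /home/jimmy/run-dependabot.sh.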

Resources

Installing kubernetes-cli via Homebrew on Apple Silicon

One of the final pieces of software I still hadn't been able to install on my new MacBook Pro M1 was kubectl, also known as kubernetes-cli. Today I came across this issue on GitHub, in which someone noted the architecture is just missing from one of the files and that adding it in allows it to build properly. Using my limited knowledge of how Homebrew formulae work, I was able to get it working.

First edit the formula for kubernetes-cli:

brew edit kubernetes-cli

Then at about line 25 add patch :DATA so it looks like:

  uses_from_macos "rsync" => :build

  patch :DATA

  def install
    # Don't dirty the git tree
    rm_rf ".brew_home"

Then go to the bottom of the file and add:

__END__
index bef1d837..154eecfd 100755
--- a/hack/lib/golang.sh
+++ b/hack/lib/golang.sh
@@ -49,6 +49,7 @@ readonly KUBE_SUPPORTED_CLIENT_PLATFORMS=(
   linux/s390x
   linux/ppc64le
   darwin/amd64
+  darwin/arm64
   windows/amd64
   windows/386
 )

Save and exit the file. The full file looks like this:

class KubernetesCli < Formula
  desc "Kubernetes command-line interface"
  homepage "https://kubernetes.io/"
  url "https://github.com/kubernetes/kubernetes.git",
      tag:      "v1.20.1",
      revision: "c4d752765b3bbac2237bf87cf0b1c2e307844666"
  license "Apache-2.0"
  head "https://github.com/kubernetes/kubernetes.git"

  livecheck do
    url :head
    regex(/^v([\d.]+)$/i)
  end

  bottle do
    cellar :any_skip_relocation
    sha256 "0b4f08bd1d47cb913d7cd4571e3394c6747dfbad7ff114c5589c8396c1085ecf" => :big_sur
    sha256 "f49639875a924ccbb15b5f36aa2ef48a2ed94ee67f72e7bd6fed22ae1186f977" => :catalina
    sha256 "4a3eaef3932d86024175fd6c53d3664e6674c3c93b1d4ceedd734366cce8e503" => :mojave
  end

  depends_on "go" => :build

  uses_from_macos "rsync" => :build
  patch :DATA
  def install
    # Don't dirty the git tree
    rm_rf ".brew_home"

    # Make binary
    system "make", "WHAT=cmd/kubectl"
    bin.install "_output/bin/kubectl"

    # Install bash completion
    output = Utils.safe_popen_read("#{bin}/kubectl", "completion", "bash")
    (bash_completion/"kubectl").write output

    # Install zsh completion
    output = Utils.safe_popen_read("#{bin}/kubectl", "completion", "zsh")
    (zsh_completion/"_kubectl").write output

    # Install man pages
    # Leave this step for the end as this dirties the git tree
    system "hack/generate-docs.sh"
    man1.install Dir["docs/man/man1/*.1"]
  end

  test do
    run_output = shell_output("#{bin}/kubectl 2>&1")
    assert_match "kubectl controls the Kubernetes cluster manager.", run_output

    version_output = shell_output("#{bin}/kubectl version --client 2>&1")
    assert_match "GitTreeState:\"clean\"", version_output
    if build.stable?
      assert_match stable.instance_variable_get(:@resource)
                         .instance_variable_get(:@specs)[:revision],
                   version_output
    end
  end
end
__END__
index bef1d837..154eecfd 100755
--- a/hack/lib/golang.sh
+++ b/hack/lib/golang.sh
@@ -49,6 +49,7 @@ readonly KUBE_SUPPORTED_CLIENT_PLATFORMS=(
   linux/s390x
   linux/ppc64le
   darwin/amd64
+  darwin/arm64
   windows/amd64
   windows/386
 )

Run this command to install kubernetes-cli:

brew install --build-from-source kubernetes-cli

Once completed you should be able to run the following command to get the version:

kubectl version
Client Version: version.Info{Major:"1", Minor:"20+", GitVersion:"v1.20.1-dirty", GitCommit:"c4d752765b3bbac2237bf87cf0b1c2e307844666", GitTreeState:"dirty", BuildDate:"2021-01-04T16:45:01Z", GoVersion:"go1.16beta1", Compiler:"gc", Platform:"darwin/arm64"}

You may get some further output about there being a connection issue, but that's okay if you haven't set up your Kubernetes configuration file yet.

Upgrading My 2009 MacPro 5,1 to macOS Big Sur 11.1

macOS Big Sur - About This Mac
I feel like I just barely updated my MacPro to macOS Catalina, and here I am getting it updated to macOS Big Sur!

Thankfully the process wasn't too bad. Of note, my MacPro was a 4,1 upgraded to 5,1 and I do not have any Bluetooth or WiFi cards.

Pre-Install Notes

  • Make sure you have or have previously run the APFS ROM Patcher.
  • At least one 32GB+ USB thumbdrive - make sure it's of decent quality / a brand name.
  • A boot screen / boot picker.
  • SIP and authenticated root disabled.
  • Updated nvram boot-args.

It's worth noting I used 2x 16GB USB thumbdrives, but I've noted above to use a 32GB thumbdrive.

Disabling SIP and authenticated root

I figured it would be worth including this information so you don't have to dig through Google results. You'll need to either boot into recovery mode or a USB installer to do this. Either way, open Terminal and run these commands.

csrutil status # If this returns disabled you're good, move on.
csrutil authenticated-root status # If this returns disabled you're good, move on.

If either of the above commands didn't return disabled, then run the following:

csrutil disable
csrutil authenticated-root disable

You can re-run the first 2 commands to ensure the result is 'disabled'.

Update nvram boot-args

While you're also in Terminal run the following:

nvram boot-args="-v -no_compat_check"
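
You can verify the change took with:

nvram -p | grep boot-args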

Upgrading/Installing macOS Big Sur

Alright so first things first, we need to create a bootable USB installer. You should be able to do this all from your MacPro without needing to use any other systems, but it's possible you may need a secondary Mac.

Let's grab the tool we need to use to patch our macOS Big Sur installer. Visit this page on GitHub and click on the green button labeled 'Code'. Select the 'Download Zip' option and a zip file will download. As a side note, I have a separate administrator account on my MacBook Pro, so I placed the unzipped directory (named bigmac-master) into a directory accessible by all users - in this case I used /Users/Shared. You can put it wherever an administrative user can access it.

Next take your USB thumbdrive and erase it in Disk Utility. You can name it whatever you'd like, just make sure 'Scheme' is set to 'GUID Partition Map'. Once that has finished, you can close out of Disk Utility.

Open Terminal.app. Next go into the directory where you've placed the patcher tool. As an example:

cd /Users/Shared/bigmac-master

Now run the following command which will setup your bootable macOS Big Sur installer on your USB thumbdrive:

sudo ./bigmac.sh

You'll be asked (verbiage may differ slightly):

📦 Would you like to download Big Sur macOS 11.1 (20C69)? [y]:

Hit y and then Enter. This will download the macOS Big Sur installer. It's about 12GB, so it may take a bit of time. You'll then be asked:

🍦 Would you like to create a USB Installer, excluding thumb drives [y]:

Don't worry about the 'excluding thumb drives' verbiage, but remember you should be using a decent quality / brand-name thumbdrive. Hit y and then Enter. It may take some time, but it will do the following:

  • Create 3 partitions on the USB thumbdrive.
    • The first partition will be for a copy of the patcher tools.
    • The second partition will be for the macOS Big Sur installer.
    • The third partition will be free space.

Here's where I messed up: either I am blind or no recommended size was given for the USB installer device, so I figured my 16GB thumbdrive would be fine. It wasn't. I had to edit the bigmac.sh script. At line 131 I had to change:

diskutil partitionDisk "$disk" GPT jhfs+ bigmac_"$disk$number" 1g jhfs+ installer_"$disk$number" 16g jhfs+ FreeSpace 0

to

diskutil partitionDisk "$disk" GPT jhfs+ bigmac_"$disk$number" 1g jhfs+ installer_"$disk$number" 13.5g jhfs+ FreeSpace 0

In other words, I had to shrink the partition used for the macOS installer so it would fit on my smaller thumbdrive. Nonetheless, once it completes you'll see some further instructions, which you'll want to follow.

The previously provided output states that you should reboot the system while holding down the Option key, which gets you into your boot selector. It's worth noting that I had to hold the Esc key to get to my boot selection screen. Whichever key you hold, select the 'macOS Big Sur Installer' option. Once it's loaded up, open Terminal from the menu bar and run the following commands to patch the installer:

cd /Volumes/bigmac
./preinstall.sh

Then close out Terminal so you're back at the window which has an 'Install macOS Big Sur' option. Click on that and go through the process. Since I was upgrading I selected 'Macintosh HD' as my disk. Continue on and it'll start installing/upgrading. During this process the system will reboot three times.

You may end up at the login screen once it has completed. If this is the case, reboot the system again into the boot selector. Select the 'macOS Big Sur Installer' option. Once back in the installer environment, open Terminal and run the following:

cd /Volumes/bigmac
./postinstall

Hopefully everything goes smoothly! However if you happen to see the following (or very similar) at the end of the script run, we'll need to create another USB key (or reuse your current key):

📸 Attempting to delete snapshot =>
diskutil apfs deleteSnapshot disk4s5 -uuid 0FFE862F-86C8-43AE-A1E0-DFFF7A6D7F79
Deleting APFS Snapshot 0FFE873F-86C8-43AE-A1E0-DFFF7A6D7F79 "com.apple.os.update-1F1A728CE24DEE376C4DA4FC78D1EDD1F3979DFCGD34C413688A5923AD2E3CD8" from APFS Volume disk4s5
Started APFS operation
Error: -69863: Insufficient privileges

If you see that, you won't be able to boot back into macOS Big Sur. You'll just get an endless stream of kernel panics and reboots. Thankfully there's another tool out there that can resolve this. As I couldn't boot my MacPro I had to swap over to my MacBook Pro. From there I downloaded a copy of this file to my /Users/Shared directory. I then wrote that image to my second USB thumbdrive which I had previously erased and named 'usb' and set the Scheme to GUID Partition Map.

sudo asr -source /Users/Shared/BigSurBaseSystemfix.dmg -erase -noverify -target /Volumes/usb

From there, you'll want to shut down your MacPro, remove the first USB thumbdrive, insert the second USB thumbdrive and reboot into your boot selection screen. From there pick the 'macOS Big Sur Installer' option. Once booted into the installer, go to the menu bar and select 'Utilities', then select the 'BigSurFixes delete snapshot' option. A Terminal window will pop up and you'll be asked a couple of questions. To be honest I can't remember what the exact questions are and I can't find them in the tool's Git repository, but they should be self-explanatory. Once that has completed running, you can reboot the system. It should boot back into macOS Big Sur now!

Resources

Installing Go on Apple Silicon

Presently you're unable to install Go via Homebrew on the new M1 Mac systems. While it's expected to be working at the beginning of 2021, I personally couldn't wait that long, as there are some tools I use on a daily basis that require Go. Thankfully there's a method you can follow to get Go installed on your M1 Mac in the meantime.

First ensure that you have git installed and that you have a copy of the current go package. For the go package I downloaded this file to my Downloads directory. It basically acts as a bootstrap environment so we can build our own version of Go that is native to ARM64. Make sure you uncompress it. It should result in a new directory named 'go'.

Next we need to get a copy of the current Go source. We can do this by running:

mkdir -p ~/Developer/go.googlesource.com && cd ~/Developer/go.googlesource.com
git clone https://go.googlesource.com/go

Then navigate into the cloned repository and check out the master branch:

cd ~/Developer
cd go.googlesource.com/go
git checkout master

Next we need to compile a version of Go which will work on our M1 system. In the following command you'll want to adjust $USERNAME so it's your username.

arch --x86_64 env GOROOT_BOOTSTRAP=/Users/$USERNAME/Downloads/go GODEBUG=asyncpreemptoff=1 GOOS=darwin GOARCH=arm64 ./bootstrap.bash

I've moved the built binaries into my Homebrew installation but this isn't required. Don't forget to update $USERNAME to your username.

cd /opt/homebrew/Cellar && mkdir go && cd go && mkdir 1.15.6 && cd 1.15.6 && mkdir bin && mkdir libexec
cd bin && cp -v /Users/$USERNAME/Developer/go.googlesource.com/go-darwin-arm64-bootstrap/bin/* .
cd ../libexec && cp -Rv /Users/$USERNAME/Developer/go.googlesource.com/go-darwin-arm64-bootstrap/* .
cd /opt/homebrew/bin
ln -s ../Cellar/go/1.15.6/bin/gofmt .
ln -s ../Cellar/go/1.15.6/bin/go .

I have this set in my .zshrc so it allows the binaries to work from Homebrew:

export PATH="/opt/homebrew/bin:/opt/homebrew/sbin:$PATH"

If everything worked, the following command should return the Go version (your output may be a bit different, specifically the commit version):

$ go version
go version devel +e508c1c67b Fri Dec 11 08:18:17 2020 +0000 darwin/arm64

This article was put together using my .zsh_history and memory so there's a chance something may not work 100%. If that's the case please don't hesitate to leave a comment and let me know. I probably should have written this right after I did this myself, oops! 🙄

Installing Rust via Homebrew on Apple Silicon

Apple has released new hardware which utilizes an ARM64-based chip. This means a lot of software provided by Homebrew doesn't work. A couple of these include Rust and Go (which I will cover installing in another post). Thankfully both of these vendors have updated their software to work with the new chip from Apple. The downside is that Homebrew itself isn't even supported in the new M1 environment yet, and it requires a little extra command-line work. I suspect this shouldn't be much of an issue for users of Homebrew though! This document assumes you already have Homebrew installed.

First bring up Terminal.app or whatever terminal application you use. I'm using Terminal.app since it's native, so I know for sure I am building within and using an M1/ARM64-native application. Of note, I personally create 2 accounts on every Mac. The first user is an administrator cleverly named administrator, and my second account is the account I use day to day, which does not have administrator privileges. So my first command on the command line is:

su administrator

From there I run another command to edit the formula for Rust:

brew edit rust

Around line 37-38 I add depends_on "ninja" => :build, so it's right after the line of depends_on "pkg-config". This was done after reading this comment on GitHub. Now save and exit the file.

Run the following command to build and install Rust:

brew install -s --HEAD rust

It took about 30 minutes to build on my MacBook Pro M1.

% rustc -V    
rustc 1.50.0-nightly (2225ee1b6 2020-12-11)

% cargo -V
cargo 1.50.0

The Unofficial Gitpod 0.5.0 Installation via Helm Chart Guide

Gitpod

Gitpod is a really neat tool that lets you work with your Git repositories in a web browser-based IDE. Gitpod is offered as a hosted solution, or you can self-host it. Of course self-hosting is the way to go! Unfortunately it's not as easy (at least right now) as most self-hosted apps to set up, but this guide aims to walk you through getting a Gitpod instance set up for yourself.

This guide assumes you already have a Kubernetes cluster set up. I personally set up a cluster using k3s, with one master node (4 CPU cores, 4GBs of RAM and 40GBs of disk space) and 4 worker nodes (each with 8 CPU cores, 16GBs of RAM and 250GBs of disk space). This guide also assumes you're using an external MySQL database, an external Docker registry and an external MinIO installation. I should also note that I am using GitHub Enterprise, but this should work with GitHub.com and GitLab.

As someone who likes to keep things organized, the first thing I did was create a project via Rancher called Gitpod. I also created a namespace, gitpod. I ran the following command from my workstation, where I've set up kubectl with my Kubernetes cluster configuration.

kubectl create namespace gitpod

You should get the following output:

namespace/gitpod created

Rancher Projects

I then added that namespace to the Gitpod project. Next we need to clone the Gitpod repository to our local workstation. You can put the repository wherever you'd like; I have mine in /home/jimmy/Developer/github.com/gitpod-io/gitpod.

git clone https://github.com/gitpod-io/gitpod

I use VS Code myself on my workstation, but use whatever you're most comfortable with. Open the new 'gitpod' folder in your editor. We need to set up our install!

Open the file charts/values.yaml. I recommend replacing the content of this file with this as this is what was recommended to me. Once replaced, save the file. Now we can start adjusting it and filling in our own information.

On line 4, change it to version: 0.5.0. Next adjust line 5 (hostname: localhost) to your domain name. This would be what you use in your web browser to access your instance of Gitpod.

version: 0.5.0
hostname: mydomain.com

We need to change the imagePrefix value as we're setting up a self-hosted installation. Adjust it as follows:

imagePrefix: eu.gcr.io/gitpod-io/self-hosted/

In the workspaceSizing block, you can adjust your workspace settings. The only thing I adjusted was the limits; I set my memory limit to 4Gi. You can set this to whatever you feel comfortable with.

workspaceSizing:
  requests:
    cpu: "1m"
    memory: "2.25Gi"
    storage: "5Gi"
  limits:
    cpu: "5"
    memory: "4Gi"

Next on line 51 (db) you'll want to fill in your database information. You can use a hostname or IP address here for host.

db:
  host: db.yourdomain.com
  port: 3306
  password: password1234

Next open the secrets/encryption-key.json file and create yourself a new key. I am not sure if this is required but I figured it would be better to set something rather than what is in there just in case. I used this website to generate a string.
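
If you'd rather generate a random string locally instead of using a website, something like this works just as well:

openssl rand -hex 32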

Next configure the authProviders block. I am not sure if you can have both GitHub and GitLab at the same time, or both GitHub and GitHub Enterprise configurations; you're more than welcome to try it out. However, I have GitHub Enterprise, so I created an OAuth app and filled out the details. It looks something like this:

authProviders:
  - id: "GitHub-Enterprise"
    host: "githubenterprise.com"
    type: "GitHub"
    oauth:
      clientId: "6g5a657e145y51abc2ff"
      clientSecret: "9819537b4694ee6a46312t2dalw17345f8d5hgt"
      callBackUrl: "https://mydomain.com/auth/github/callback"
      settingsUrl: "https://githubenterprise.com/settings/connections/applications/6g5a657e145y51abc2ff"
    description: "GitHub Enterprise"
    icon: ""

In the branding block I updated each instance of gitpod.io to my domain. Feel free to do the same but it's not required as far as I know.

I updated the serverProxyApiKey with a new string for the same reason as I updated the one in the secrets/encryption-key.json file.

Next we'll update some of the settings in the components section. First up is imageBuilder. Since we have our own registry we need to update the registry block to reflect that. Here's what mine looks like:

imageBuilder:
  name: "image-builder"
  dependsOn:
    - "image-builder-configmap.yaml"
  hostDindData: /var/gitpod/docker
  registryCerts: []
  registry:
    name: registry.mydomain.com
    secretName: image-builder-registry-secret
    path: ""
    baseImageName: ""
    workspaceImageName: ""
    # By default, the builtin registry is accessed through the proxy.
    # If bypassProxy is true, the builtin registry is accessed via <registry-name>.<namespace>.svc.cluster.local directly.
    bypassProxy: false
  dindImage: docker:18.06-dind
  dindResources:
    requests:
      cpu: 100m
      memory: 128Mi
  ports:
    rpc:
      expose: true
      containerPort: 8080
    metrics:
      expose: false
      containerPort: 9500

Under workspace make sure to set the secretName of pullSecret to image-builder-registry-secret:

pullSecret:
  secretName: image-builder-registry-secret

Next, under wsSync you can set up the remoteStorage details, however it may be somewhat pointless due to a bug in one of the templates. I'll show you how to get MinIO working after we've deployed the Helm chart. I filled out the information anyway so that once the bug is resolved the settings are already in place.

Scroll down to the bottom of the file, where you should see sections for docker-registry, minio and mysql. Edit or replace them so they look like this:

docker-registry:
  enabled: false

minio:
  enabled: false

mysql:
  enabled: false

Now save your values.yaml file. Next we need to create a secret for your Docker registry.

kubectl create secret docker-registry image-builder-registry-secret --docker-server=registry.mydomain.com --docker-username=$USERNAME --docker-password=$PASSWORD -n gitpod

Make sure to put the URL of your registry for --docker-server and replace $USERNAME and $PASSWORD with your username and password. Once that is done you should see it on the Registry Credentials tab of the Secrets page within Rancher.

Rancher - Registry Credentials
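
If you'd rather verify the secret from the command line instead of Rancher, this should show that it exists and has the expected type:

# List the registry secret in the gitpod namespace; TYPE should be kubernetes.io/dockerconfigjson
kubectl get secret image-builder-registry-secret -n gitpod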

I'm not sure if this next step is necessary, but I found that I had issues when I skipped it. So log into your MySQL server and run these queries:

CREATE USER IF NOT EXISTS "gitpod"@"%" IDENTIFIED BY "$PASSWORD";
GRANT ALL ON `gitpod%`.* TO "gitpod"@"%";

CREATE DATABASE IF NOT EXISTS `gitpod-sessions` CHARSET utf8mb4;
USE `gitpod-sessions`;

CREATE TABLE IF NOT EXISTS sessions (
   `session_id` varchar(128) COLLATE utf8mb4_bin NOT NULL,
   `expires` int(11) unsigned NOT NULL,
   `data` text COLLATE utf8mb4_bin,
   `_lastModified` timestamp(6) NOT NULL DEFAULT CURRENT_TIMESTAMP(6) ON UPDATE CURRENT_TIMESTAMP(6),
   PRIMARY KEY (`session_id`)
);

CREATE DATABASE gitpod CHARSET utf8mb4;

This creates a MySQL user, 'gitpod' (don't forget to replace $PASSWORD in the query with your own password), the gitpod-sessions database with a sessions table inside of it, and the gitpod database.
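
If you want to double-check the result, here's a quick sketch using the mysql command line client (assuming you can reach the database from your workstation):

# List the gitpod databases and the grants for the gitpod user
mysql -h db.yourdomain.com -P 3306 -u gitpod -p \
  -e "SHOW DATABASES LIKE 'gitpod%'; SHOW GRANTS FOR CURRENT_USER();"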

Next we need to create 2 repositories (workspace-images and base-images) within our Docker registry. The only way I could figure out how to do this was to push an image to the registry. I just used something small; I plan on deleting it later anyway, so I suppose it doesn't matter much. I did this using these commands (if you don't already have an image tagged for those paths, see the sketch just after them):

docker push registry.mydomain.com/workspace-images/docker-whale:latest
docker push registry.mydomain.com/base-images/docker-whale:latest
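
If you don't already have a local image tagged with those paths, you can create one first. This is just a sketch using the public hello-world image as a stand-in, and it assumes you've already run docker login against your registry:

# Pull a tiny public image and tag it for both repositories so the pushes above succeed
docker pull hello-world:latest
docker tag hello-world:latest registry.mydomain.com/workspace-images/docker-whale:latest
docker tag hello-world:latest registry.mydomain.com/base-images/docker-whale:latest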

Now you should be all set to deploy! First let's add the Gitpod Helm charts repository:

helm repo add gitpod https://charts.gitpod.io
helm dep update

Next let's install Gitpod!

helm upgrade --install gitpod gitpod/gitpod --timeout 60m --values values.yaml -n gitpod

You should see something like this:

Release "gitpod" does not exist. Installing it now.
NAME: gitpod
LAST DEPLOYED: Thu Dec  3 10:43:45 2020
NAMESPACE: gitpod
STATUS: deployed
REVISION: 1
TEST SUITE: None

You can watch each of the workloads come up in Rancher if you'd like. Hopefully everything is green!

Rancher - Gitpod Workloads
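
If you'd rather watch from the terminal instead of Rancher, this will stream pod status changes in the gitpod namespace until you interrupt it:

# Watch the Gitpod pods come up (Ctrl+C to stop)
kubectl get pods -n gitpod --watch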

Now we've got to fix a few things due to bugs in Gitpod. First, if you have a multi-worker-node cluster we need to fix ws-sync. You can do this however you'd like, but I find doing it from within Rancher the easiest. In the row for ws-sync, click on the little blue button with 3 dots and click on 'View/Edit YAML'. Around line 350 or so we need to change the dnsPolicy and add hostNetwork. Adjust it so it reads as:

      dnsPolicy: ClusterFirstWithHostNet
      hostNetwork: true

Save it and this will automatically trigger the workload to redeploy. This helps prevent getting the following error when trying to load a workspace:

 cannot initialize workspace: cannot connect to ws-sync: cannot connect to ws-sync: cannot connect to workspace sync; last backup failed: cannot connect to ws-sync: cannot connect to workspace sync.
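
If you prefer the command line over Rancher for this, a rough equivalent would be a patch like the one below. This is only a sketch - it assumes ws-sync is deployed as a DaemonSet in your cluster (check with kubectl get daemonsets -n gitpod first and adjust the resource type if it isn't):

# Set dnsPolicy and hostNetwork on the ws-sync pod template, triggering a rollout
kubectl patch daemonset ws-sync -n gitpod --type merge \
  -p '{"spec":{"template":{"spec":{"dnsPolicy":"ClusterFirstWithHostNet","hostNetwork":true}}}}'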

Next we need to fix the MinIO settings in the server workload. Similar to how we edited the YAML for ws-sync we need to do the same for server. Click the little blue button with 3 dots and click on 'View/Edit YAML'. Locate the following which should just have defaults in the value lines:

        - name: MINIO_END_POINT
          value: minio.minio.svc.cluster.local
        - name: MINIO_PORT
          value: "9000"
        - name: MINIO_ACCESS_KEY
          value: accesskey
        - name: MINIO_SECRET_KEY
          value: secretkey

You may only need to update the values for MINIO_ACCESS_KEY and MINIO_SECRET_KEY. I believe I needed to update the value for MINIO_END_POINT as well, as it seemed to have the port tacked onto the end, which should be removed. Once everything looks good, hit Save and the server workload will redeploy.
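
Again, if you'd rather do this from the terminal, kubectl set env can apply the same change. This is a sketch only, assuming server is a Deployment and that these variables live on its containers; substitute your real MinIO endpoint and credentials:

# Override the MinIO settings on the server workload and trigger a redeploy
kubectl set env deployment/server -n gitpod \
  MINIO_END_POINT=minio.mydomain.com \
  MINIO_ACCESS_KEY=myaccesskey \
  MINIO_SECRET_KEY=mysecretkey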

At this point you should be all set. Visit https://yourdomain.com, which should redirect you to https://yourdomain.com/workspaces/. You can log in from there. Once you do, it's just a matter of creating new workspaces. This can be done by constructing a URL like https://yourdomain.com/#https://github.com/username/your-repository. If all went well you should see a code editor in your web browser with your Git repository's contents!

Other Notes

  • At the time of writing (December 3, 2020) there still appears to be an issue with uploading extensions. I have a thread on the Gitpod community forums for this. Uploading extensions has actually never worked for me in all the time I've been using Gitpod which appears to have been since June of this year.
  • There appears to be an issue with installing extensions from search results. I just noticed this today after someone else posted about it in the Gitpod community forums.
  • I have my Kubernetes cluster sitting behind Traefik which provides Gitpod with SSL certs.

Resources

The New MacBook Pro M1 - 1 Week In

MacBook Pro M1

As someone who is addicted to playing with new technology, it's no surprise that I picked up a new MacBook Pro M1 last Friday. I wanted to give it some time before I wrote about it, so I figured ~1 week was enough time to form a decent opinion.

The magic started as soon as I took it out of the box and lifted the lid. It automatically booted up! That was really cool and I wish I could understand how it did that.

I had been pondering getting the new laptop with the new M1 chip, 8GBs of RAM and a 512GB SSD for a few days. My current MacBook Pro has 16GBs of RAM and struggles to keep up with me from time to time, so the RAM was my biggest concern in getting the new MacBook Pro. In the end, I figured I had some time to return the new laptop should I have any issues and re-order one with 16GBs of RAM. However, after one week of pushing it with all my work, it hasn't skipped a beat. It has handled everything I've thrown at it with no problems at all. I have Safari running with ~25 tabs, Hyper.js, Screen Sharing, Mail, Things, multiple VS Code workspaces, Messages, Discord, Mastonaut, 1Password, Terminal, and Transmit open. I've been compiling things in Homebrew and running PHP unit tests, and it's been perfectly fine. No slowdowns, no spinning rainbow wheel. The only time I ran into an issue was when I managed to get some errant PHP processes; there were about 10 of them running, using a ton of CPU and causing slowdowns for me. That only happened once though.

The new SoC design seems to really work well and meets and probably exceeds my expectations. It definitely allows things to run and access resources very quickly!

Another concern of mine was the keyboard. The reason I am still using a MacBook Pro from early 2015 is that I love its keyboard. I couldn't stand the keyboards on the recent redesigns that used the butterfly mechanism; they were absolutely horrific. However the new redesign, which appears in the iPad Pro keyboard and the newer laptops, is MUCH better. I ended up going to Best Buy around 2 weeks ago and tried out the keyboard on one of the newer Intel MacBook Pros and was much more impressed. As much as I am not a huge fan of Best Buy, they're still open, which gave me the opportunity to try out the newer keyboard. Apple stores are still closed here - you can only make appointments for picking up items, you can't go into the stores as before to try things out unfortunately (DAMN COVID). I've been typing away constantly on my new MacBook Pro's keyboard, and I love it. It's smooth and, dare I say, soft! There's travel between pressing the key and it bottoming out. It feels much more comfortable and responsive than the previous design.

Similar to Apple's transition to Intel, this transition also provides a "compatibility layer", Rosetta 2, which allows you to run x86 applications on the new M1 chip (ARM64). So far I haven't had a single issue running any of my applications. There are probably 2-3 apps I currently use that rely on Rosetta 2. I haven't noticed any slowdown within those apps, and in fact they seem to start up as quickly as native apps. I believe the only non-native apps I use day to day are Hyper.js, 1Password and VS Code. I do know that 1Password and VS Code are working on building native apps, though I am not sure about Hyper.js. I would think they should be, and I can't imagine it would be difficult to update; I believe it runs on Electron, so they mostly need to swap in an ARM64 build of that and perhaps make a few other tweaks.

I'm not sure how much more I have to say about this laptop. It's an excellent upgrade from my early 2015 MacBook Pro (which I still have to use for work 😭). Even with 8GBs of RAM, I've had no issues! It's light in weight (so I find myself taking it everywhere), the keyboard is pleasant to use, and every application I rely on has worked without issue thus far. The only real trouble I've come across is with certain applications or libraries failing to install via Homebrew, generally because they're not yet compatible with macOS Big Sur or ARM64. The Homebrew team has been working hard to ensure compatibility with both though. Overall, I love this new laptop!