For whatever reason, I find gaps in the IDs between VMs annoying. I don't like seeing something like:

VM100 - 100
VM101 - 101
VM103 - 103
VM104 - 104

Thankfully there's a way to adjust a VM's ID. I would recommend taking a backup of the VM and its configuration file(s) beforehand. Once you're ready, run the following command to display information about your logical volumes:

lvs -a

This should display something similar to:

  LV              VG   Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  vm-100-disk-0   data -wi-ao---- 120.00g                                                    
  vm-101-disk-0   data -wi-ao----  80.00g                                                    
  vm-101-disk-1   data -wi-ao---- 120.00g                                                    
  vm-102-disk-0   data -wi-a----- 120.00g                                                    
  vm-103-disk-0   data -wi-ao----  80.00g                                                    
  vm-104-disk-0   data -wi-ao----  80.00g                                                    
  vm-105-disk-0   data -wi-ao---- 320.00g                                                    
  vm-105-disk-1   data -wi-ao---- 120.00g                                                    
  vm-105-disk-2   data -wi-ao---- 120.00g                                                    
  vm-106-disk-0   data -wi-ao----  80.00g                                                    
  vm-107-disk-0   data -wi-ao---- 120.00g                                                    
  data            pve  twi-a-tz--  59.66g             0.00   1.59                            
  [data_tdata]    pve  Twi-ao----  59.66g                                                    
  [data_tmeta]    pve  ewi-ao----   1.00g                                                    
  [lvol0_pmspare] pve  ewi-------   1.00g                                                    
  root            pve  -wi-ao----  27.75g                                                    
  swap            pve  -wi-ao----   8.00g 

Next, determine which VM's ID you want to change. In the following commands I will be changing VM ID 101 to 100.

This command will update the name of the logical volume:

lvrename data/vm-101-disk-0 vm-100-disk-0

Next we want to update the ID in the VM's configuration file:

sed -i "s/101/100/g" /etc/pve/qemu-server/101.conf

After that we want to rename the VM's configuration file:

mv /etc/pve/qemu-server/101.conf /etc/pve/qemu-server/100.conf

Once those commands have been run you can start the VM up again.
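For example, with the VM's new ID of 100, the Proxmox qm tool can start it back up:

qm start 100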

Dependabot is a really neat tool that helps keep your dependencies secure and up to date. It creates pull requests to your Git repositories with the updated dependencies. It works with a wide variety of package managers and languages like NPM/Yarn, Composer, Python, Ruby, Docker, Rust, and Go.

As someone who uses GitHub Enterprise, a little bit of extra work needs to be done in order to self-host Dependabot. After fiddling around with it for a few days, I've finally gotten it working, so I figured it would be worth writing up and sharing with everyone!

My setup consists of a server dedicated to running Docker containers; however, any AMD64 system that can run Docker should do the trick. First I cloned the dependabot-script Git repository (I ran this in my /home/jimmy/Developer/github.com/dependabot directory - but you can put it wherever you'd like):

git clone https://github.com/dependabot/dependabot-script.git

Next, I pulled the dependabot-core Docker image:

docker pull dependabot/dependabot-core

Once the Docker image has been pulled we need to run it to install some dependencies:

docker run -v "$(pwd):/home/dependabot/dependabot-script" -w /home/dependabot/dependabot-script dependabot/dependabot-core bundle install -j 3 --path vendor

Make sure you're in the cloned dependabot-script directory (/home/jimmy/Developer/github.com/dependabot/dependabot-script for me) when you run that. It shouldn't take very long to run.

Next we need to make a little change to fix an issue which seems to prevent Dependabot from running properly. So let's run this:

docker run -d -v "$(pwd):/home/dependabot/dependabot-script" \
    -w /home/dependabot/dependabot-script \
    dependabot/dependabot-core sleep 300

This will start up Dependabot as a detached container, and it'll sleep for 300 seconds before exiting. This should give us enough time to run a couple of commands. Once the above command has been run, use the following commands to get into the container:

docker ps | grep dependabot-core # get the ID of the container

docker exec -it $containerId bash
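If you'd rather grab the container ID in one go, something along these lines should work (assuming only one dependabot-core container is running):

containerId=$(docker ps | grep dependabot-core | awk '{print $1}')
docker exec -it "$containerId" bash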

You should now be inside your Dependabot container. I was able to find this issue on GitHub, which allowed me to fix and run Dependabot without issue. We need to edit the Gemfile, which can be done from inside or outside the container - I initially did it from inside, but either works. Since nano wasn't available in the container I had to install it first; I didn't check whether vi or vim were available, but if they aren't you can take a similar approach. From within the container I ran:

apt -y update && apt -y install nano
nano Gemfile

I then edited:

gem "dependabot-omnibus", "~> 0.118.8"

to

gem "dependabot-omnibus", "~> 0.130.2"

Save and exit. Then run:

bundle _1.17.3_ install
bundle _1.17.3_ update

Once that was done, I exited the container and attempted to run Dependabot normally.

docker run --rm -v "$(pwd):/home/dependabot/dependabot-script" \
    -w /home/dependabot/dependabot-script \
    -e GITHUB_ACCESS_TOKEN=$GITHUB_ACCESS_TOKEN \
    -e GITHUB_ENTERPRISE_HOSTNAME=$GHE_HOSTNAME \
    -e GITHUB_ENTERPRISE_ACCESS_TOKEN=$GITHUB_ENTERPRISE_ACCESS_TOKEN \
    -e PROJECT_PATH=jimmybrancaccio/emil-scripts \
    -e PACKAGE_MANAGER=composer \
    dependabot/dependabot-core bundle exec ruby ./generic-update-script.rb

I recommend going to GitHub.com and setting up a personal access token (I only checked off the repo checkbox, though even that might not be needed). This allows you to make more requests to the GitHub.com API; without it I ran into API rate limiting quickly. If you do create a personal access token for GitHub.com, replace $GITHUB_ACCESS_TOKEN with your token, otherwise remove that whole line. Next, replace $GHE_HOSTNAME with your actual GitHub Enterprise hostname. For $GITHUB_ENTERPRISE_ACCESS_TOKEN you can either use a personal access token from your own GitHub Enterprise account or, as I did, create a separate account for Dependabot and generate a personal access token for that account. After that you just need to make sure PROJECT_PATH and PACKAGE_MANAGER have proper values.
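For reference, here's a rough sketch of how those variables might be exported before running the container - the values are placeholders, not real tokens:

export GITHUB_ACCESS_TOKEN="<github.com personal access token>" # optional, helps with rate limits
export GHE_HOSTNAME="github.example.com" # your GitHub Enterprise hostname
export GITHUB_ENTERPRISE_ACCESS_TOKEN="<GitHub Enterprise personal access token>"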

I wrote a very simple Bash script with essentially a bunch of those Docker run "blocks" - one for each repository that I wanted Dependabot to monitor - and set up a cron job to run the script once a day. You can set that part up however you see fit.
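I won't post my exact script, but a minimal sketch of the idea looks something like this - the second repository name, the script path and the cron schedule are made up for illustration, and the tokens are assumed to be set in the script or cron environment:

#!/bin/bash
# run-dependabot.sh - run Dependabot against each repository I want monitored
cd /home/jimmy/Developer/github.com/dependabot/dependabot-script || exit 1

for repo in jimmybrancaccio/emil-scripts jimmybrancaccio/another-repo; do
    docker run --rm -v "$(pwd):/home/dependabot/dependabot-script" \
        -w /home/dependabot/dependabot-script \
        -e GITHUB_ACCESS_TOKEN=$GITHUB_ACCESS_TOKEN \
        -e GITHUB_ENTERPRISE_HOSTNAME=$GHE_HOSTNAME \
        -e GITHUB_ENTERPRISE_ACCESS_TOKEN=$GITHUB_ENTERPRISE_ACCESS_TOKEN \
        -e PROJECT_PATH=$repo \
        -e PACKAGE_MANAGER=composer \
        dependabot/dependabot-core bundle exec ruby ./generic-update-script.rb
done

And the matching crontab entry, running once a day at 06:00:

0 6 * * * /home/jimmy/run-dependabot.sh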


One of the final pieces of software I still hadn't been able to install on my new MacBook Pro M1 was kubectl, also known as kubernetes-cli. Today I came across this issue on GitHub in which someone noted the architecture is just missing from one of the files, and adding it in allows it to build properly. Using my limited knowledge of how Homebrew formulae work, I was able to get it working.

First edit the formula for kubernetes-cli:

brew edit kubernetes-cli

Then at about line 25 add patch :DATA so it looks like:

  uses_from_macos "rsync" => :build

  patch :DATA

  def install
    # Don't dirty the git tree
    rm_rf ".brew_home"

Then go to the bottom of the file and add:

__END__
index bef1d837..154eecfd 100755
--- a/hack/lib/golang.sh
+++ b/hack/lib/golang.sh
@@ -49,6 +49,7 @@ readonly KUBE_SUPPORTED_CLIENT_PLATFORMS=(
   linux/s390x
   linux/ppc64le
   darwin/amd64
+  darwin/arm64
   windows/amd64
   windows/386
 )

Save and exit the file. The full file looks like this:

class KubernetesCli < Formula
  desc "Kubernetes command-line interface"
  homepage "https://kubernetes.io/"
  url "https://github.com/kubernetes/kubernetes.git",
      tag:      "v1.20.1",
      revision: "c4d752765b3bbac2237bf87cf0b1c2e307844666"
  license "Apache-2.0"
  head "https://github.com/kubernetes/kubernetes.git"

  livecheck do
    url :head
    regex(/^v([\d.]+)$/i)
  end

  bottle do
    cellar :any_skip_relocation
    sha256 "0b4f08bd1d47cb913d7cd4571e3394c6747dfbad7ff114c5589c8396c1085ecf" => :big_sur
    sha256 "f49639875a924ccbb15b5f36aa2ef48a2ed94ee67f72e7bd6fed22ae1186f977" => :catalina
    sha256 "4a3eaef3932d86024175fd6c53d3664e6674c3c93b1d4ceedd734366cce8e503" => :mojave
  end

  depends_on "go" => :build

  uses_from_macos "rsync" => :build
  patch :DATA
  def install
    # Don't dirty the git tree
    rm_rf ".brew_home"

    # Make binary
    system "make", "WHAT=cmd/kubectl"
    bin.install "_output/bin/kubectl"

    # Install bash completion
    output = Utils.safe_popen_read("#{bin}/kubectl", "completion", "bash")
    (bash_completion/"kubectl").write output

    # Install zsh completion
    output = Utils.safe_popen_read("#{bin}/kubectl", "completion", "zsh")
    (zsh_completion/"_kubectl").write output

    # Install man pages
    # Leave this step for the end as this dirties the git tree
    system "hack/generate-docs.sh"
    man1.install Dir["docs/man/man1/*.1"]
  end

  test do
    run_output = shell_output("#{bin}/kubectl 2>&1")
    assert_match "kubectl controls the Kubernetes cluster manager.", run_output

    version_output = shell_output("#{bin}/kubectl version --client 2>&1")
    assert_match "GitTreeState:\"clean\"", version_output
    if build.stable?
      assert_match stable.instance_variable_get(:@resource)
                         .instance_variable_get(:@specs)[:revision],
                   version_output
    end
  end
end
__END__
index bef1d837..154eecfd 100755
--- a/hack/lib/golang.sh
+++ b/hack/lib/golang.sh
@@ -49,6 +49,7 @@ readonly KUBE_SUPPORTED_CLIENT_PLATFORMS=(
   linux/s390x
   linux/ppc64le
   darwin/amd64
+  darwin/arm64
   windows/amd64
   windows/386
 )

Run this command to install kubernetes-cli:

brew install --build-from-source kubernetes-cli

Once completed you should be able to run the following command to get the version:

kubectl version
Client Version: version.Info{Major:"1", Minor:"20+", GitVersion:"v1.20.1-dirty", GitCommit:"c4d752765b3bbac2237bf87cf0b1c2e307844666", GitTreeState:"dirty", BuildDate:"2021-01-04T16:45:01Z", GoVersion:"go1.16beta1", Compiler:"gc", Platform:"darwin/arm64"}

You may get some further output about there being a connection issue, but that's okay if you haven't set up your Kubernetes configuration file yet.

macOS Big Sur - About This Mac

I feel like I just barely updated my MacPro to macOS Catalina, and here I am getting it updated to macOS Big Sur!

Thankfully the process wasn't too bad. Of note, my MacPro was a 4,1 upgraded to 5,1 and I do not have any Bluetooth or WiFi cards.

Pre-Install Notes

  • Make sure you have already run the APFS ROM Patcher.
  • At least one 32GB+ USB thumbdrive - make sure it's of decent quality / a brand name.
  • A boot screen / boot picker.
  • SIP and authenticated root disabled.
  • Updated nvram boot-args.

It's worth noting I used 2x 16GB USB thumbdrives, but I've noted above to use a 32GB thumbdrive.

Disabling SIP and authenticated root

I figured it would be worth including this information so you don't have to dig through Google results. You'll need to either boot into recovery mode or a USB installer to do this. Either way, open Terminal and run these commands.

csrutil status # If this returns disabled you're good, move on.
csrutil authenticated-root status # If this returns disabled you're good, move on.

If either of the above commands didn't return disabled, then run the following:

csrutil disable
csrutil authenticated-root disable

You can re-run the first 2 commands to ensure the result is 'disabled'.

Update nvram boot-args

While you're also in Terminal run the following:

nvram boot-args="-v -no_compat_check"

Upgrading/Installing macOS Big Sur

Alright so first things first, we need to create a bootable USB installer. You should be able to do this all from your MacPro without needing to use any other systems, but it's possible you may need a secondary Mac.

Let's grab the tool we need to use to patch our macOS Big Sur installer. Visit this page on GitHub and click on the green button labeled 'Code'. Select the 'Download Zip' option and a zip file will download. As a side note, I have a separate administrator account on my MacBook Pro, so I placed the unzipped directory (named bigmac-master) into a directory accessible by all users - in this case I used /Users/Shared. You can put it wherever an administrative user can access it.

Next take your USB thumbdrive and erase it in Disk Utility. You can name it whatever you'd like, just make sure 'Scheme' is set to 'GUID Partition Map'. Once that has finished, you can close out of Disk Utility.

Open Terminal.app. Next go into the directory where you've placed the patcher tool. As an example:

cd /Users/Shared/bigmac-master

Now run the following command which will setup your bootable macOS Big Sur installer on your USB thumbdrive:

sudo ./bigmac.sh

You'll be asked (verbiage may differ slightly):

📦 Would you like to download Big Sur macOS 11.1 (20C69)? [y]:

Hit y and then Enter. This will download the macOS Big Sur installer. It's about 12GB, so it may take a bit of time. You'll then be asked:

🍦 Would you like to create a USB Installer, excluding thumb drives [y]:

Don't worry about the 'excluding thumb drives' verbiage, but remember you should be using a thumbdrive of decent quality / a brand name. Hit y and then Enter. It may take some time, but it will do the following:

  • Create 3 partitions on the USB thumbdrive.
    • The first partition will be for a copy of the patcher tools.
    • The second partition will be for the macOS Big Sur installer.
    • The third partition will be free space.

Here's where I messed up: either I am blind or no recommended size was given for the USB installer device, so I figured my 16GB thumbdrive would be fine. It wasn't. I had to edit the bigmac.sh script. At line 131 I changed:

diskutil partitionDisk "$disk" GPT jhfs+ bigmac_"$disk$number" 1g jhfs+ installer_"$disk$number" 16g jhfs+ FreeSpace 0

to

diskutil partitionDisk "$disk" GPT jhfs+ bigmac_"$disk$number" 1g jhfs+ installer_"$disk$number" 13.5g jhfs+ FreeSpace 0

In other words, I had to shrink the partition being used for the macOS installer. Nonetheless, once it completes you'll see some further instructions which you'll want to follow.

The output from the script states that you should reboot the system while holding down the Option key, which gets you into your boot selector. It's worth noting that I had to hold the Esc key to get to my boot selection screen. Whichever key you hold, select the 'macOS Big Sur Installer' option. Once it has loaded up, open Terminal from the menu bar and run the following commands to patch the installer:

cd /Volumes/bigmac
./preinstall.sh

Then close out of Terminal so you're back at the window with the 'Install macOS Big Sur' option. Click on that and go through the process. Since I was upgrading, I selected 'Macintosh HD' as my disk. Continue on and it'll start installing/upgrading. During this process the system will reboot three times.

You may end up at the login screen once it has completed. If this is the case, reboot the system again into the boot selector. Select the 'macOS Big Sur Installer' option. Once back in the installer environment, open Terminal and run the following:

cd /Volumes/bigmac
./postinstall

Hopefully everything goes smoothly! However, if you happen to see the following (or something very similar) at the end of the script run, you'll need to create another USB key (or reuse your current one):

📸 Attempting to delete snapshot =>
diskutil apfs deleteSnapshot disk4s5 -uuid 0FFE862F-86C8-43AE-A1E0-DFFF7A6D7F79
Deleting APFS Snapshot 0FFE873F-86C8-43AE-A1E0-DFFF7A6D7F79 "com.apple.os.update-1F1A728CE24DEE376C4DA4FC78D1EDD1F3979DFCGD34C413688A5923AD2E3CD8" from APFS Volume disk4s5
Started APFS operation
Error: -69863: Insufficient privileges

If you see that, you won't be able to boot back into macOS Big Sur. You'll just get an endless stream of kernel panics and reboots. Thankfully there's another tool out there that can resolve this. As I couldn't boot my MacPro I had to swap over to my MacBook Pro. From there I downloaded a copy of this file to my /Users/Shared directory. I then wrote that image to my second USB thumbdrive which I had previously erased and named 'usb' and set the Scheme to GUID Partition Map.

sudo asr -source /Users/Shared/BigSurBaseSystemfix.dmg -erase -noverify -target /Volumes/usb

From there, you'll want to shut down your MacPro, remove the first USB thumbdrive, insert the second one and reboot into your boot selection screen. Pick the 'macOS Big Sur Installer' option. Once booted into the installer, go to the menu bar, select 'Utilities', then select the 'BigSurFixes delete snapshot' option. A Terminal window will pop up and you'll be asked a couple of questions. To be honest I can't remember exactly what they are and I can't find them in the tool's Git repository, but they should be self-explanatory. Once that has completed running, you can reboot the system. It should boot back into macOS Big Sur now!


Presently you're unable to install Go via Homebrew on the new M1 Mac systems. While it's expected to be working at the beginning of 2021, I personally couldn't wait that long, as there are some tools I use on a daily basis that require Go. Thankfully there's a method you can follow to get Go installed on your M1 Mac for the time being.

First ensure that you have git installed and that you have a copy of the current go package. For the go package I downloaded this file to my Downloads directory. It basically acts as a bootstrap environment so we can build our own version of Go that is native to ARM64. Make sure you uncompress it. It should result in a new directory named 'go'.

Next we need to get a copy of the current Go source. We can do this by running:

mkdir -p ~/Developer/go.googlesource.com && cd ~/Developer/go.googlesource.com
git clone https://go.googlesource.com/go

Then navigate into the cloned repository and check out the master branch:

cd ~/Developer
cd go.googlesource.com/go
git checkout master

Next we need to compile a version of Go which will work on our M1 system. The bootstrap.bash script lives in the repository's src directory, so run the following from there. You'll also want to adjust $USERNAME so it's your username.

arch --x86_64 env GOROOT_BOOTSTRAP=/Users/$USERNAME/Downloads/go GODEBUG=asyncpreemptoff=1 GOOS=darwin GOARCH=arm64 ./bootstrap.bash

I moved the built binaries into my Homebrew installation, but this isn't required. Don't forget to update $USERNAME to your username.

cd /opt/homebrew/Cellar && mkdir go && cd go && mkdir 1.15.6 && cd 1.15.6 && mkdir bin && mkdir libexec
cd bin && cp -v /Users/$USERNAME/Developer/go.googlesource.com/go-darwin-arm64-bootstrap/bin/* .
cd ../libexec && cp -Rv /Users/$USERNAME/Developer/go.googlesource.com/go-darwin-arm64-bootstrap/* .
cd /opt/homebrew/bin
ln -s ../Cellar/go/1.15.6/bin/gofmt .
ln -s ../Cellar/go/1.15.6/bin/go .

I have this set in my .zshrc so the Homebrew-installed binaries are picked up on my PATH:

export PATH="/opt/homebrew/bin:/opt/homebrew/sbin:$PATH"

If everything worked, the following command should return the Go version (your output may be a bit different, specifically the commit version):

$ go version
go version devel +e508c1c67b Fri Dec 11 08:18:17 2020 +0000 darwin/arm64

This article was put together using my .zsh_history and memory so there's a chance something may not work 100%. If that's the case please don't hesitate to leave a comment and let me know. I probably should have written this right after I did this myself, oops! 🙄

Apple has released new hardware which utilizes an ARM64-based chip. This means a lot of software provided by Homebrew doesn't work. A couple of examples are Rust and Go (which I will cover installing in another post). Thankfully both of these vendors have updated their software to work with the new chip from Apple. The downside is that Homebrew itself isn't even supported in the new M1 environment yet, and it requires a little extra command-line work. I suspect this shouldn't be much of an issue for users of Homebrew though! This document assumes you already have Homebrew installed.

First bring up Terminal.app or whatever terminal application you use. I'm using Terminal.app since it's native, so I know for sure I am building within and using an M1/ARM64-native application. Of note, I personally create two accounts on every Mac: the first is an administrator cleverly named administrator, and the second is the account I use day to day, which does not have administrator privileges. So my first command on the command line is:

su administrator

From there I run another command to edit the formula for Rust:

brew edit rust

Around line 37-38 I added depends_on "ninja" => :build, right after the depends_on "pkg-config" line. This was done after reading this comment on GitHub. Now save and exit the file.

Run the following command to build and install Rust:

brew install -s --HEAD rust

It took about 30 minutes to build on my MacBook Pro M1.

% rustc -V    
rustc 1.50.0-nightly (2225ee1b6 2020-12-11)

% cargo -V
cargo 1.50.0

Gitpod

Gitpod is a really neat tool that lets you work with your Git repositories in a web browser-based IDE. Gitpod is offered as a hosted solution, or you can self-host it. Of course self-hosting is the way to go! Unfortunately it's not as easy to set up (at least right now) as most self-hosted apps, but this guide aims to walk you through getting a Gitpod instance set up for yourself.

This guide assumes you already have a Kubernetes cluster setup. I personally set up a cluster using k3s, with one master node (4 CPU cores, 4GBs of RAM and 40GBs of disk space) and 4 worker nodes (each with 8 CPU cores, 16GBs of RAM and 250GBs of disk space). This guide also assumes you're using an external MySQL database, external Docker registry and an external MinIO installation. I should also note that I am using GitHub Enterprise, but this should work with GitHub.com and GitLab.

As someone who likes to keep things organized, the first thing I did was create a project in Rancher called Gitpod. I also created a namespace, gitpod. I ran the following command from my workstation, where I've set up kubectl with my Kubernetes cluster configuration.

kubectl create namespace gitpod

You should get the following output:

namespace/gitpod created

Rancher Projects

I then added that namespace to the Gitpod project. Next we need to clone the Gitpod repository to our local workstation. You can put the repository wherever you'd like. I have mine in /home/jimmy/Developer/github.com/gitpod-io/gitpod.

git clone https://github.com/gitpod-io/gitpod

I use VS Code myself on my workstation, but use whatever you're most comfortable with. Open the new 'gitpod' folder in your editor. We need to set up our install!

Open the file charts/values.yaml. I recommend replacing the content of this file with this, as that's what was recommended to me. Once replaced, save the file. Now we can start adjusting it and filling in our own information.

On line 4, change it to version: 0.5.0. Next adjust line 5 (hostname: localhost) to your domain name. This would be what you use in your web browser to access your instance of Gitpod.

version: 0.5.0
hostname: mydomain.com

We need to change the imagePrefix value as we're setting up a self-hosted installation. Adjust it as follows:

imagePrefix: eu.gcr.io/gitpod-io/self-hosted/

In the workspaceSizing block, you can adjust your workspace settings. The only thing I adjusted was the limits; I set my memory limit to 4Gi. You can set this to whatever you feel comfortable with.

workspaceSizing:
  requests:
    cpu: "1m"
    memory: "2.25Gi"
    storage: "5Gi"
  limits:
    cpu: "5"
    memory: "4Gi"

Next on line 51 (db) you'll want to fill in your database information. You can use a hostname or IP address here for host.

db:
  host: db.yourdomain.com
  port: 3306
  password: password1234

Next open the secrets/encryption-key.json file and create a new key for yourself. I'm not sure this is required, but I figured it would be better to set something other than the default just in case. I used this website to generate a string.
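If you'd rather generate the string on the command line instead, openssl can do it; the length here is just my own choice:

openssl rand -hex 32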

Next configure the authProviders block. I am not sure if you can have both GitHub and GitLab at the same time, or both GitHub and GitHub Enterprise configurations - you're more than welcome to try it out. I have GitHub Enterprise, so I created an OAuth app and filled out the details. It looks something like this:

authProviders:
  - id: "GitHub-Enterprise"
    host: "githubenterprise.com"
    type: "GitHub"
    oauth:
      clientId: "6g5a657e145y51abc2ff"
      clientSecret: "9819537b4694ee6a46312t2dalw17345f8d5hgt"
      callBackUrl: "https://mydomain.com/auth/github/callback"
      settingsUrl: "https://githubenterprise.com/settings/connections/applications/6g5a657e145y51abc2ff"
    description: "GitHub Enterprise"
    icon: ""

In the branding block I updated each instance of gitpod.io to my domain. Feel free to do the same but it's not required as far as I know.

I updated the serverProxyApiKey with a new string for the same reason as I updated the one in the secrets/encryption-key.json file.

Next we'll update some of the settings in the components section. First up is imageBuilder. Since we have our own registry we need to update the registry block to reflect that. Here's what mine looks like:

imageBuilder:
  name: "image-builder"
  dependsOn:
    - "image-builder-configmap.yaml"
  hostDindData: /var/gitpod/docker
  registryCerts: []
  registry:
    name: registry.mydomain.com
    secretName: image-builder-registry-secret
    path: ""
    baseImageName: ""
    workspaceImageName: ""
    # By default, the builtin registry is accessed through the proxy.
    # If bypassProxy is true, the builtin registry is accessed via <registry-name>.<namespace>.svc.cluster.local directly.
    bypassProxy: false
  dindImage: docker:18.06-dind
  dindResources:
    requests:
      cpu: 100m
      memory: 128Mi
  ports:
    rpc:
      expose: true
      containerPort: 8080
    metrics:
      expose: false
      containerPort: 9500

Under workspace make sure to set the secretName of pullSecret to image-builder-registry-secret:

pullSecret:
  secretName: image-builder-registry-secret

Next, under wsSync you can set up the remoteStorage details, though it may be somewhat pointless due to a bug in one of the templates. I'll show you how to get MinIO working after we've deployed the Helm chart. I filled out the information anyway, so once the bug is resolved the settings are already in place.

Scroll down to the bottom of the page, you should see sections for docker-registry, minio and mysql. Edit them or replace them so it looks like this:

docker-registry:
  enabled: false

minio:
  enabled: false

mysql:
  enabled: false

Now save your values.yaml file. Next we need to create a secret for your Docker registry.

kubectl create secret docker-registry image-builder-registry-secret --docker-server=registry.mydomain.com --docker-username=$USERNAME --docker-password=$PASSWORD -n gitpod

Make sure to put the URL of your registry for --docker-server and replace $USERNAME and $PASSWORD with your username and password. Once that is done you should see it on the Registry Credentials tab of the Secrets page within Rancher.

Rancher - Registry Credentials
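You can also confirm the secret exists from the command line:

kubectl get secret image-builder-registry-secret -n gitpod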

I'm not sure this next step is necessary, but I found that if I didn't do it, I had issues. Log into your MySQL server and run these queries:

CREATE USER IF NOT EXISTS "gitpod"@"%" IDENTIFIED BY "$PASSWORD";
GRANT ALL ON `gitpod%`.* TO "gitpod"@"%";

CREATE DATABASE IF NOT EXISTS `gitpod-sessions` CHARSET utf8mb4;
USE `gitpod-sessions`;

CREATE TABLE IF NOT EXISTS sessions (
   `session_id` varchar(128) COLLATE utf8mb4_bin NOT NULL,
   `expires` int(11) unsigned NOT NULL,
   `data` text COLLATE utf8mb4_bin,
   `_lastModified` timestamp(6) NOT NULL DEFAULT CURRENT_TIMESTAMP(6) ON UPDATE CURRENT_TIMESTAMP(6),
   PRIMARY KEY (`session_id`)
);

CREATE DATABASE gitpod CHARSET utf8mb4;

This creates a MySQL user, 'gitpod' (don't forget to update $PASSWORD in the query with your own password), the gitpod-sessions database with a sessions table inside of it and the gitpod database.

Next we need to create two repositories (workspace-images and base-images) within our Docker registry. The only way I could figure out how to do this was to push an image to the registry. I just used something small, though I plan on deleting it later so I suppose that doesn't matter. I did it using these commands:

docker push registry.mydomain.com/workspace-images/docker-whale:latest
docker push registry.mydomain.com/base-images/docker-whale:latest
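Note that those pushes assume you've already tagged an image for each repository. As a rough sketch, tagging something small like hello-world beforehand would look like this (the image choice is arbitrary):

docker pull hello-world
docker tag hello-world registry.mydomain.com/workspace-images/docker-whale:latest
docker tag hello-world registry.mydomain.com/base-images/docker-whale:latest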

Now you should be all set to deploy! First let's add the Gitpod Helm charts repository:

helm repo add gitpod https://charts.gitpod.io
helm dep update

Next let's install Gitpod!

helm upgrade --install gitpod gitpod/gitpod --timeout 60m --values values.yaml -n gitpod

You should see something like this:

Release "gitpod" does not exist. Installing it now.
NAME: gitpod
LAST DEPLOYED: Thu Dec  3 10:43:45 2020
NAMESPACE: gitpod
STATUS: deployed
REVISION: 1
TEST SUITE: None

You can watch each of the workloads come up in Rancher if you'd like. Hopefully everything is green!

Rancher - Gitpod Workloads

Now we've got to fix a few things due to bugs in Gitpod. First, if you have a multi-worker-node cluster we need to fix ws-sync. You can do this however you'd like, but I find doing it from within Rancher the easiest. In the row for ws-sync, click on the little blue button with 3 dots and choose 'View/Edit YAML'. Around line 350 or so we need to change the dnsPolicy and add hostNetwork. Adjust it so it reads as:

      dnsPolicy: ClusterFirstWithHostNet
      hostNetwork: true

Save it and this will automatically trigger the workload to redeploy. This will help prevent getting the following when trying to load a workspace:

 cannot initialize workspace: cannot connect to ws-sync: cannot connect to ws-sync: cannot connect to workspace sync; last backup failed: cannot connect to ws-sync: cannot connect to workspace sync.

Next we need to fix the MinIO settings in the server workload. Similar to how we edited the YAML for ws-sync we need to do the same for server. Click the little blue button with 3 dots and click on 'View/Edit YAML'. Locate the following which should just have defaults in the value lines:

        - name: MINIO_END_POINT
          value: minio.minio.svc.cluster.local
        - name: MINIO_PORT
          value: "9000"
        - name: MINIO_ACCESS_KEY
          value: accesskey
        - name: MINIO_SECRET_KEY
          value: secretkey

You may only need to update the values for MINIO_ACCESS_KEY and MINIO_SECRET_KEY. I believe I needed to update the value for MINIO_END_POINT as well, since it had the port tacked onto the end, which should be removed. Once everything looks good, hit Save and the server workload will redeploy.

At this point you should be all set. Visit https://yourdomain.com, which should redirect you to https://yourdomain.com/workspaces/. You can log in from there. Once you do, it's just a matter of creating new workspaces. This can be done by constructing a URL like https://yourdomain.com/#https://github.com/username/your-repository. If all went well you should see a code editor in your web browser with your Git repository contents!

Other Notes

  • At the time of writing (December 3, 2020) there still appears to be an issue with uploading extensions. I have a thread on the Gitpod community forums for this. Uploading extensions has actually never worked for me in all the time I've been using Gitpod which appears to have been since June of this year.
  • There appears to be an issue with installing extensions from search results. I just noticed this today after someone else posted about it in the Gitpod community forums.
  • I have my Kubernetes cluster sitting behind Traefik which provides Gitpod with SSL certs.


MacBook Pro M1

As someone who is addicted to playing with new technology it's no surprise I picked up a new MacBook Pro M1 last Friday. I wanted to give it some time before I wrote about it so I figured ~1 week was enough time to form a decent opinion.

The magic started as soon as I took it out of the box and lifted the lid. It automatically booted up! That was really cool and I wish I could understand how it did that.

I had been pondering getting the new laptop with the M1 chip, 8GBs of RAM and a 512GB SSD for a few days. My current MacBook Pro has 16GBs of RAM and struggles to keep up with me from time to time, and the smaller amount of RAM was my biggest concern with the new MacBook Pro. In the end, I figured I had some time to return the new laptop and re-order one with 16GBs of RAM should I have any issues. However, after one week of pushing it with all my work, it hasn't skipped a beat. It has handled everything I've thrown at it with no problems at all. I have Safari running with ~25 tabs, Hyper.js, Screen Sharing, Mail, Things, multiple VS Code workspaces, Messages, Discord, Mastonaut, 1Password, Terminal, and Transmit open. I've been compiling things in Homebrew and running PHP unit tests, and it's been perfectly fine. No slowdowns, no spinning rainbow wheel. The only time I ran into an issue was when I managed to get some errant PHP processes; there were about 10 of them running, using a ton of CPU and causing slowdowns for me. This only happened once though.

The new SoC design really seems to work well; it meets and probably exceeds my expectations. It definitely allows things to run and access resources very quickly!

Another concern of mine was the keyboard. The reason I am still using a MacBook Pro from early 2015 is that I love the keyboard. Since the recent redesigns that used the butterfly mechanism, I couldn't stand the keyboards - they were absolutely horrific. However, the new redesign, which appears in the iPad Pro keyboard and the newer laptops, is MUCH better. I ended up going to Best Buy around 2 weeks ago and tried out the keyboard on one of the newer Intel MacBook Pros and was much more impressed. As much as I am not a huge fan of Best Buy, they're still open, which gave me the opportunity to try out the newer keyboard. Apple stores are still closed here - you can only make appointments for picking up items; you can't go into the stores as before to try things out, unfortunately (DAMN COVID). I've been typing away constantly on my new MacBook Pro's keyboard and I love it. It's smooth and, dare I say, soft! There's travel between pressing the key and it bottoming out. It feels much more comfortable and responsive than the previous design.

Similar to Apple's transition to Intel, this transition also provides a "compatibility layer", Rosetta 2, which allows you to run x86 applications on the new M1 chip (ARM64). So far I haven't had a single issue running any of my applications. There are probably 2-3 apps I use currently that rely on Rosetta 2. I haven't noticed any slowdown within them; in fact they seem to start up as quickly as native apps. I believe the only non-native apps I use day to day are Hyper.js, 1Password and VS Code. I know that 1Password and VS Code are working on native builds, though I am not sure about Hyper.js. I would think they should be, and I can't imagine it would be difficult to update - I believe it runs on Electron.js, so they just need to swap in an ARM64 build of that and perhaps make a few other tweaks.

I'm not sure how much more I have to say about this laptop. It's an excellent upgrade from my early 2015 MacBook Pro (which I still have to use for work 😭). Even with 8GBs of RAM, I've had no issues! It's light (so I find myself taking it everywhere), the keyboard is pleasant to use, and all the applications I use have had no issues thus far. The only real trouble I've come across is with certain applications or libraries failing to install via Homebrew, generally because they're not compatible with macOS Big Sur or ARM64 yet. The Homebrew team has been working hard to ensure compatibility with both though. Overall, I love this new laptop!

I don't have Composer installed directly on the server that hosts the Docker containers for my websites, and I don't run composer update in the Docker image(s) for those websites, so I used the Composer Docker image to update packages by running it within the directory of each website. It worked something like this:

cd /home/jimmy/public_html/jimmyb.ninja
docker run --rm --interactive --tty --volume $PWD:/app composer update

This mounts the directory you're presently in into the Composer container and then runs the composer update command. The result is updated packages!

One of my websites has a Composer package that requires bcmath, which of course I didn't have installed and which isn't available in the Docker image, so I was able to get around that by doing this instead:

cd /home/jimmy/public_html/jimmyb.ninja
docker run --rm --interactive --tty --volume $PWD:/app composer update --ignore-platform-reqs

Hopefully this helps someone else out!

This past weekend I decided to move some VMs from one Proxmox server to another. Thankfully the process was very easy and could be done in under 10 commands! I utilized a 1TB external USB drive on my source system to store my backed-up VMs.

Let's get started! Make sure the source server can reach the destination server via SSH. First move into the directory where you want to put your backed-up VMs. For me this was /mnt/storage. Then start taking backups of your VM(s).

vzdump 100

The number 100 in the above example is the ID of the VM. Once the backup has been completed we'll want to copy it over to the destination server.

scp vzdump-qemu-100-2020_11_00-00_14_30.vma root@192.168.1.10:/mnt/storage2/vzdump-qemu-100-2020_11_00-00_14_30.vma

You can adjust the path to where you're sending it on the destination server. I used another 1TB USB drive on my destination server as well. Once the transfer is complete we need to restore it! We run this on the destination server:

cd /mnt/storage2
qmrestore vzdump-qemu-100-2020_11_00-00_14_30.vma 110

First make sure you go into the directory where you transferred the backup to. Next, the last number in the second command is going to be the new ID of the VM. Since I already had some VMs on my destination server, I just picked the next free ID.
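If you're not sure which IDs are already in use on the destination server, qm list will show the existing VMs and their IDs:

qm list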

Of note, depending on the size of the backups it can take some time to back up, transfer between source and destination, and restore. However, I didn't hit any snags and everything went smoothly!

About a month ago someone posted a link to their blog article on r/self-hosted about setting up your own self-hosted Kubernetes GitHub Runners. Around this time I had just gotten my GitHub Enterprise instance working with actions and such so I was quite excited to see this.

Originally I had attempted to install a self-hosted GitHub runner on one of my servers, but because I was missing node it didn't run properly. I then came across the source which GitHub provides for setting up the runners they deploy to users of GitHub.com. However, these are full-on Ubuntu environments with everything you could think of installed in them. If I recall, they were about 80-90GBs in size. Nonetheless, I ended up setting up a couple of them as VMs. I quickly realized that maintaining them and keeping them updated would be another task I really didn't have time for. This method didn't really make sense for me, especially since most of the stuff I was doing with GitHub Actions was being performed in Docker.

Thankfully, this kind fellow put together this guide, which walks you through setting up GitHub runners in a Kubernetes environment. I'm a complete newb to Kubernetes, so this was an excellent opportunity to learn some more! While I followed most of the guide, there were a couple of things I did differently. In this article I'll go from nothing to running runners in your Kubernetes cluster!

I opted to go with k3s because it's something I am familiar with setting up and using. It's really easy to install and set up! I first set up 3 Ubuntu 20.04 VMs on my Proxmox server. I allocated 2 cores, 40GBs of disk space and 4GBs of RAM to what would be my master node. My other 2 nodes consisted of 8 cores, 16GBs of RAM and 250GBs of disk space each. This may be overkill, but I had the resources to spare on the system. Make sure you disable swap on your systems. I did this by editing the /etc/fstab file and commenting out the line for swap.

Once each VM was set up, I made sure to run apt update && apt upgrade on each one to ensure everything was as up to date as possible. I also like to use dpkg-reconfigure tzdata to set each VM's timezone to my own.

Next get Docker installed on your master and worker nodes.

sudo apt install apt-transport-https ca-certificates curl software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu focal stable"
sudo apt update
sudo apt install docker-ce
sudo systemctl status docker
sudo usermod -aG docker $LINUX_USERNAME

I personally use PostgreSQL with k3s, but you can choose whatever option you'd like - there are a few to pick from. Here are a couple of quick commands I used to set up my PostgreSQL user and database:

CREATE USER k3s WITH ENCRYPTED PASSWORD '$PASSWORD';
CREATE DATABASE k3s;
GRANT ALL PRIVILEGES ON DATABASE k3s TO k3s;

Next install k3s on your master node:

curl -sfL https://get.k3s.io | sh -s - --datastore-endpoint 'postgres://$USERNAME:$PASSWORD@ip.add.ress:5432/k3s?sslmode=disable' --write-kubeconfig-mode 644 --docker --disable traefik --disable servicelb

This installs k3s in master (server) mode, uses Docker instead of containerd, and disables Traefik and the service load balancer.

Grab your token which will be needed to set up the worker nodes. You can find the token at /var/lib/rancher/k3s/server/node-token.
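In other words, on the master node:

sudo cat /var/lib/rancher/k3s/server/node-token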

On your worker nodes, get k3s installed in agent mode using these commands:

export K3S_URL=https://master-node-ip-address-or-url:6443
export K3S_TOKEN=K1009809sad1cf2317376e1fc892a7f48983939442479i987sa89ds::server:e28d3875948350349283927498324
curl -fsL https://get.k3s.io | K3S_URL=$K3S_URL K3S_TOKEN=$K3S_TOKEN sh -s agent --docker --disable traefik --disable servicelb

This sets your master node URL and token as variables and then uses them to install k3s. You'll notice I specify -s agent, which tells the installer to install k3s in agent mode. Again I disable Traefik and the service load balancer; given that the GitHub runners don't need to receive incoming traffic, I found having them unnecessary.

If everything went well, you can run kubectl get nodes from your master node and it should show your 3 nodes:

jimmy@kubemaster-runners-octocat-ninja:~$ kubectl get nodes
NAME                               STATUS   ROLES    AGE     VERSION
kubemaster-runners-octocat-ninja   Ready    master   6d17h   v1.18.9+k3s1
kubenode1-runners-octocat-ninja    Ready    <none>   6d17h   v1.18.9+k3s1
kubenode2-runners-octocat-ninja    Ready    <none>   6d16h   v1.18.9+k3s1

I also like to run this command to ensure that no jobs are scheduled on my master node, it's not required though:

kubectl taint node $masterNode k3s-controlplane=true:NoSchedule

This part is also not required, but I'm a Kubernetes newbie, so having a GUI is helpful. First install Helm 3:

curl -O https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3
bash ./get-helm-3 

You can confirm your helm version by using helm version. Next we need to add the Rancher charts repository:

helm repo add rancher-stable https://releases.rancher.com/server-charts/stable

This adds the stable charts repository, but you can use latest as well. Next create a namespace for Rancher:

kubectl create namespace cattle-system

Next we'll install Rancher using this command:

helm install rancher rancher-stable/rancher \
    --namespace cattle-system \
    --set hostname=rancher.octocat.ninja \
    --set tls=external

You can use kubectl -n cattle-system rollout status deploy/rancher to keep an eye on the deployment. It took maybe ~2 minutes, probably less, to install for me. Once that is done, I assigned an external IP to the rancher service:

kubectl patch svc rancher -p '{"spec":{"externalIPs":["192.168.1.5"]}}' -n cattle-system

Now you'll obviously want to make sure whatever IP you assign is routed to the system. If you have a domain pointing to the system you can use that to access Rancher, or you can use the IP. Once you're in Rancher, I recommend creating a new project; I made one called 'GitHub Runners'. Next create a new namespace called docker-in-docker. You can do this from the command line or from within Rancher.

kubectl create ns docker-in-docker

If you did it on the command line, you can use Rancher to move the new namespace into your Project. Here's what my project looks like (don't worry about the other namespace for now):

Rancher - GitHub Runners Project

Next we're going to create a PersistentVolumeClaim. This can be done on the command line or in Rancher. I opted to go the Rancher route since it was easier. From the Projects/Namespaces page, click on the title of the project:

Rancher - GitHub Runners - Click Project Title

From this page click on the 'Import YAML' button:

Rancher - GitHub Runners - Click Import YAML

Paste in the following:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: dind
  namespace: docker-in-docker
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 50Gi

Make sure you've selected the 'Namespace: Import all resources into a specific namespace' radio button, and that your 'docker-in-docker' namespace is selected from the dropdown menu.

Rancher - Github Runners - Import YAML

You can adjust the storage size to whatever you feel comfortable with. As given, it will allow 50Gi of space for your Docker in Docker pod. You can always enter the container later to clear out unused Docker images and such.
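For example, to clean things up inside the Docker in Docker container later on, something like this should do it (the pod name is a placeholder - check kubectl get pods for yours):

kubectl get pods -n docker-in-docker
kubectl exec -it dind-xxxxxxxxxx-xxxxx -n docker-in-docker -- docker system prune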

Hit the Import button! On the 'Volumes' tab you should now see your volume!

Rancher - GitHub Runners - Volumes

Next we'll create a deployment for Docker in Docker. Again, I used the 'Import YAML' button for this. Make sure you have the 'Namespace: Import all resources into a specific namespace' radio button checked, and that your 'docker-in-docker' namespace is selected from the dropdown menu.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: dind
  namespace: docker-in-docker
spec:
  replicas: 1
  selector:
    matchLabels:
      workload: deployment-docker-in-docker-dind
  template:
    metadata:
      labels:
        workload: deployment-docker-in-docker-dind
    spec:
      containers:
      - command:
        - dockerd
        - --host=unix:///var/run/docker.sock
        - --host=tcp://0.0.0.0:2376
        env:
        - name: DOCKER_TLS_CERTDIR
        image: docker:19.03.12-dind
        imagePullPolicy: IfNotPresent
        name: dind
        resources: {}
        securityContext:
          privileged: true
          readOnlyRootFilesystem: false
        stdin: true
        tty: true
        volumeMounts:
        - mountPath: /var/lib/docker
          name: dind-storage
      volumes:
      - name: dind-storage
        persistentVolumeClaim:
          claimName: dind

In a nutshell, this will set up a pod with a container that runs the Docker in Docker image. It tells the dockerd daemon inside the container where to put the socket file and to listen on TCP 0.0.0.0 port 2376. Also, by specifying DOCKER_TLS_CERTDIR as an empty environment variable we tell it not to use TLS. Like the author of the blog article, I have not specified any resources; as this server pretty much only handles my GitHub Runners and one other small Kubernetes cluster, I didn't feel the need to constrain my pods. You're more than welcome to set up resources, but it's not something I cover here. At the bottom of the above YAML you'll notice I also reference the persistent volume claim I previously made, which allows this deployment to use that volume. Hit Import and you should see your deployment show up in the Rancher interface!

Rancher - GitHub Runners - DIND Deployments

Next I built a Docker image which contains the GitHub Runner application itself. You can use the original blog author's Docker image, or you can build one yourself and push it to your own private registry or Docker Hub. My Dockerfile is as follows:

FROM debian:buster-slim

ENV GITHUB_PAT ""
ENV GITHUB_OWNER ""
ENV GITHUB_REPOSITORY ""
ENV RUNNER_WORKDIR "_work"
ENV RUNNER_LABELS ""

RUN apt-get update \
    && apt-get install -y \
        curl \
        sudo \
        git \
        jq \
        iputils-ping \
    && apt-get clean \
    && rm -rf /var/lib/apt/lists/* \
    && useradd -m github \
    && usermod -aG sudo github \
    && echo "%sudo ALL=(ALL) NOPASSWD:ALL" >> /etc/sudoers \
    && curl https://download.docker.com/linux/static/stable/x86_64/docker-19.03.9.tgz --output docker-19.03.9.tgz \
    && tar xvfz docker-19.03.9.tgz \
    && cp docker/* /usr/bin/

USER github
WORKDIR /home/github

RUN GITHUB_RUNNER_VERSION=$(curl --silent "https://api.github.com/repos/actions/runner/releases/latest" | jq -r '.tag_name[1:]') \
    && curl -Ls https://github.com/actions/runner/releases/download/v${GITHUB_RUNNER_VERSION}/actions-runner-linux-x64-${GITHUB_RUNNER_VERSION}.tar.gz | tar xz \
    && sudo ./bin/installdependencies.sh

COPY --chown=github:github entrypoint.sh ./entrypoint.sh
RUN sudo chmod u+x ./entrypoint.sh

ENTRYPOINT ["/home/github/entrypoint.sh"]

A couple of things to note here: I also install the Docker CLI, since we'll be using it to build and publish our own Docker images via GitHub Actions, and the build should automatically fetch the latest version of the GitHub runner and use it (I believe the runner daemon itself checks for updates every few days). I had to modify my entrypoint.sh slightly from the default since I am using GitHub Enterprise. Once my image was built, I pushed it to my private registry server.
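For what it's worth, here's a rough sketch of what an entrypoint.sh for a GitHub Enterprise runner can look like. This is not my exact file - GHE_HOST is a placeholder and the flag choices are assumptions you'd adapt to your own setup:

#!/bin/bash
# Sketch of an entrypoint.sh that registers the runner against GitHub Enterprise.
# GHE_HOST is a placeholder - point it at your own GitHub Enterprise hostname.
GHE_HOST="github.example.com"

# Request a short-lived registration token for the repository using the PAT.
REG_TOKEN=$(curl -s -X POST \
    -H "Authorization: token ${GITHUB_PAT}" \
    "https://${GHE_HOST}/api/v3/repos/${GITHUB_OWNER}/${GITHUB_REPOSITORY}/actions/runners/registration-token" \
    | jq -r .token)

# Register the runner with the repository.
./config.sh --unattended --replace \
    --url "https://${GHE_HOST}/${GITHUB_OWNER}/${GITHUB_REPOSITORY}" \
    --token "${REG_TOKEN}" \
    --work "${RUNNER_WORKDIR}"

# Remove the runner registration when the container stops.
cleanup() {
    ./config.sh remove --token "${REG_TOKEN}"
}
trap cleanup EXIT

# Run the runner in the foreground.
./run.sh & wait $!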

Next we'll create a new namespace for our runners. This can be done on the command line via:

kubectl create ns github-actions

Again, I recommend putting this new namespace in your GitHub Runners project in Rancher. Organization is awesome! Once you've done that, we'll need to create a new deployment for the runner(s)! I again utilized Rancher and the wonderful 'Import YAML' button to do this. This time, however, make sure you select the 'github-actions' option under the 'Namespace' dropdown menu. Make sure you set the right Docker image as well (image: repository/github-actions-runner:latest is just a placeholder below)!

apiVersion: apps/v1
kind: Deployment
metadata:
  name: github-runner
  namespace: github-actions
  labels:
    app: github-runner
spec:
  replicas: 1
  selector:
    matchLabels:
      app: github-runner
  template:
    metadata:
      labels:
        app: github-runner
    spec:
      containers:
      - name: github-runner
        image: repository/github-actions-runner:latest
        env:
        - name: DOCKER_HOST
          value: tcp://dind.docker-in-docker:2376
        - name: GITHUB_OWNER
          value: $GITHUB_USERNAME
        - name: GITHUB_REPOSITORY
          value: $GITHUB_REPOSITORY_NAME
        - name: GITHUB_PAT
          valueFrom:
            secretKeyRef:
              name: github-actions-token
              key: pat

Replace $GITHUB_USERNAME and $GITHUB_REPOSITORY_NAME with your information.

Create a Personal Access Token for yourself within GitHub. This option can be found at Settings > Developer Settings > Personal Access Tokens. I just checked off 'repo' (which also selects its sub-options), then clicked Generate Token.

GitHub - Personal Access Token

You'll get a string of characters which is your token. Copy this, and we'll use it to create a secret within Kubernetes. You can use the Rancher UI to do this, with our favorite 'Import YAML' button! Make sure the 'github-actions' namespace is selected!

apiVersion: v1
stringData:
  pat: $YOUR_GITHUB_PERSONAL_ACCESS_TOKEN
kind: Secret
metadata:
  name: github-actions-token
  namespace: github-actions
type: Opaque

Once you're done your new deployment should show up in the 'github-actions' namespace area!

Rancher - GitHub Runners - Project

The runner should also automatically show up under your repository's Settings > Actions page!

GitHub > Settings > Actions

I've set up 4-5 runners for the time being, but I know I will have a lot more for my other projects! One thing I do wish is that runners weren't repository-specific, or that they could just be deployed whenever an Action called for them. It seems kind of silly to need at least one dedicated runner per repository; you'd think a runner could handle many repositories. For the time being though, this is an excellent solution for self-hosters who use GitHub Actions!


Upgrading My 2009 MacPro 5,1 to macOS Catalina

2009 MacPro with macOS Catalina

The other day I finally had a chance to look back into updating my 2009 MacPro to macOS Catalina. When I had done some research previously it appeared that it wouldn't be possible. To my excitement it seems they have figured out how to get it working though!

I would highly recommend this guide if you're looking to get your 2009 MacPro running macOS Catalina.

Of note, I had an older BootROM firmware (138.0.0.0.0), so I did have to get my MacPro updated to 144.0.0.0.0 before I could proceed. Thankfully it was super easy: I just had to snag the macOS Mojave installer, which allowed me to update my system. You simply download and open it, and it should advise you that a firmware update is needed.

Once I had my firmware updated, I went back to the OpenCore on the Mac Pro guide. One thing that frustrated me a little was that the guide calls for two disks, which meant I would be starting fresh - something I really didn't want to do. It also meant I would be moving to a spinning disk, as I didn't have any spare SSDs. So in Part I, Step 4 of the guide, instead of selecting the blank drive, I selected my SSD. I figured it would give me an error if that wouldn't work. Thankfully, no errors or warnings popped up saying macOS Catalina couldn't be installed in the selected location. About 15-20 minutes later it finished and rebooted off my SSD - the one where macOS Mojave and all my files had been - now upgraded to macOS Catalina!

Once that was done, I went back to Steps 2 and 3 and performed those actions on my SSD. This effectively installed OpenCore to my SSD instead and reset my SSD as the main boot device. I stupidly followed the 'Toggle the VMM flag' instructions in Step 5, which I shouldn't have done until after I updated to 10.15.7 (it looks like the base install of macOS Catalina started me off at 10.15.6), so I did have to go back and untoggle the VMM flag.

Under Part II of the guide, I did not do 'Making External Drives Internal', or 'Enabling the Graphical Boot Picker' steps. I figured they weren't that important (for now).

One other side note is that my CPU shows up as an Intel Core i3 in the About This Mac window. This may be due to the 'Hybridization' step in the guide, but I am not 100% positive. I believe at one point my CPU did show properly in macOS Catalina. It's not a big issue, just cosmetic.

I'm quite excited to have the latest version of macOS running on my 2009 MacPro; it's breathed more life into the system. I am planning a CPU upgrade (Intel Xeon X5690), and I'd also like to get 64GBs of RAM in it. Perhaps at that point it could even become my daily driver!

I'll be back with another post once I get some new hardware!


I wanted to quickly share this with everyone. I found it worked better than using Disk Utility to restore a .dmg disk image to a USB thumb drive.

sudo /usr/sbin/asr --noverify --erase --source source --target target

Or for example:

sudo /usr/sbin/asr --noverify --erase --source /Users/Shared/yosemite.dmg --target /Volumes/Untitled

You might also find this useful to ensure the drive is bootable:

sudo bless --mount /Volumes/TheVolume --setBoot
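
If you're not sure which volume name to plug into --target or --mount, listing the attached disks first is a quick sanity check (the /Volumes/Untitled name is just the example from above):

diskutil list

This prints every attached disk and its volumes, so you can confirm the thumb drive is the one you're about to erase before running asr.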


A Ghostbin (Spectre) installation doesn't really require a lot of resources. I am currently running it on a system with 4x 2.26GHz CPUs, 8GBs of RAM and a 120GB disk. I've done it on much less though.

Installing Ghostbin (Spectre)

  1. Install your operating system. I used Ubuntu 20.04 Server.
  2. Install Go:
    cd /usr/local
    wget https://dl.google.com/go/go1.14.linux-amd64.tar.gz
    tar -C /usr/local -xzf go1.14.linux-amd64.tar.gz

    Add the following to the bottom of your /etc/profile file:

    export PATH=$PATH:/usr/local/go/bin

    You can also run that at your command prompt, or log out and log back in.

  3. Install Mercurial and Python Pygments:
    apt install mercurial python3-pygments
  4. Install ansi2html:
    apt install python3-pip
    pip3 install ansi2html
  5. Install Git; I like to compile it myself so I get the latest version:
    cd /usr/local/src
    apt install autoconf libssl-dev zlib1g-dev libcurl4-openssl-dev tcl-dev gettext
    wget https://github.com/git/git/archive/v2.22.1.tar.gz
    tar zxvf v2.22.1.tar.gz
    cd git-2.22.1/
    make configure
    ./configure --prefix=/usr
    make -j2
    make install
  6. I recommend creating a new user to run your GhostBin code under:
    adduser ghostbin
  7. You should also set a password on the new user account using passwd ghostbin.
  8. Login as your new user account and add the following to your ~/.bashrc file:
    export GOPATH=$HOME/go
  9. Save and exit the file and run source ~/.bashrc.
  10. Next obtain the source code for GhostBin (login as your new user first):
    mkdir -p ~/go/src
    cd $HOME/go/src
    mkdir github.com
    cd github.com
    git clone https://github.com/DHowett/spectre.git
    cd spectre/
  11. At this point your full path should be something like - /home/ghostbin/go/src/github.com/spectre.
  12. Run go get.
  13. Run go build.
  14. Run which pygmentize. It should return /usr/bin/pygmentize. If not, no problem, just copy the path.
  15. You'll also want to run which ansi2html which should return /usr/local/bin/ansi2html. Again, if it doesn't no big deal, just copy the path.
  16. Update the languages.yml file with the path for pygmentize, which should be on line 6, and the path for ansi2html, which should be on line 23. Save and exit. Here's my languages.yml up to line 25 to give you an example:
    formatters:
      default:
        name: default
        func: commandFormatter
        args:
        - /usr/bin/pygmentize
        - "-f"
        - html
        - "-l"
        - "%LANG%"
        - "-O"
        - "nowrap=True,encoding=utf-8"
      text:
        name: text
        func: plainText
      markdown:
        name: markdown
        func: markdown
      ansi:
        name: ansi
        func: commandFormatter
        args:
        - /usr/local/bin/ansi2html
        - "--naked"
      iphonesyslog:
  17. Next we'll need to build a CSS file which will give color to the pastes:
    pygmentize -f html -S $STYLE > public/css/theme-pygments.css

    You can choose from several styles/color themes:

    - monokai
    - manni
    - rrt
    - perldoc
    - borland
    - colorful
    - default
    - murphy
    - vs
    - trac
    - tango
    - fruity
    - autumn
    - bw
    - emacs
    - vim
    - pastie
    - friendly
    - native

    I used monokai:

    pygmentize -f html -S monokai > public/css/theme-pygments.css
  18. If all went well, all that's left to do is start the service:
    ./ghostbin
  19. Here's a screenshot of my install: Ghostbin Installation
  20. I would also recommend running the binary with the --help flag:
    $ ./ghostbin --help
    Usage of ./ghostbin:
    -addr string
        bind address and port (default "0.0.0.0:8080")
    -alsologtostderr
        log to standard error as well as files
    -log_backtrace_at value
        when logging hits line file:N, emit a stack trace
    -log_dir string
        If non-empty, write log files in this directory
    -logtostderr
        log to standard error instead of files
    -rebuild
        rebuild all templates for each request
    -root string
        path to generated file storage (default "./")
    -stderrthreshold value
        logs at or above this threshold go to stderr
    -v value
        log level for V logs
    -vmodule value
        comma-separated list of pattern=N settings for file-filtered logging

    This allows you to see flags you can run with the binary. I run mine as such:

    ./ghostbin -logtostderr

    This just logs to the screen (stderr) instead of to log files. If you'd rather run GhostBin in the background as a service, see the systemd sketch below.
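
If you don't want to leave the binary running in a terminal, a systemd unit is one way to keep it alive across reboots. This is just a minimal sketch assuming the user, binary name, and paths from earlier in this guide; adjust them to match your install:

[Unit]
Description=GhostBin (Spectre) paste service
After=network.target

[Service]
User=ghostbin
WorkingDirectory=/home/ghostbin/go/src/github.com/spectre
ExecStart=/home/ghostbin/go/src/github.com/spectre/ghostbin -logtostderr
Restart=on-failure

[Install]
WantedBy=multi-user.target

Save it (as root) to /etc/systemd/system/ghostbin.service, then run systemctl daemon-reload followed by systemctl enable --now ghostbin.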

Setting Up GhostBin w/ Nginx

  1. Install Nginx:
    apt install nginx
  2. Create a Nginx configuration file for GhostBin:

    nano /etc/nginx/sites-available/ghostbin.conf
    # Upstream configuration
    upstream ghostbin_upstream {  
        server 0.0.0.0:8080;
        keepalive 64;
    }
    
    # Public
    server {  
        listen 80;
        server_name ghostbin.YOURDOMAIN.com; # domain of my site
    
        location / {
            proxy_http_version 1.1;
            proxy_set_header   X-Real-IP        $remote_addr;
            proxy_set_header   X-Forwarded-For  $proxy_add_x_forwarded_for;
            proxy_set_header   X-NginX-Proxy    true;
            proxy_set_header   Host             $http_host;
            proxy_set_header   Upgrade          $http_upgrade;
            proxy_redirect     off;
            proxy_pass         http://ghostbin_upstream;
        }
    }

    You'll obviously want to update the server_name bit. Save and exit the file.

  3. Next we need to make a symlink so Nginx knows to load the configuration:
    cd ../sites-enabled
    ln -s ../sites-available/ghostbin.conf .
  4. Restart the Nginx service:
    systemctl restart nginx

    Your GhostBin site should now be available at http://ghostbin.YOURDOMAIN.com!

Notes

  • Whenever I start up the binary I see:

    E0815 21:19:58.915843   19895 main.go:773] Expirator Error: open expiry.gob: no such file or directory

    This doesn't appear to be a problem, and I haven't had any issues using GhostBin thus far. There is some code in main.go referencing it; it looks related to paste expiration, but I don't know Go well enough to be sure.

    pasteExpirator = gotimeout.NewExpirator(filepath.Join(arguments.root, "expiry.gob"), &ExpiringPasteStore{pasteStore})
  • It looks like the GhostBin repository has been renamed to 'spectre'. This appears to have been done to "de-brand" it for people who want to run it themselves and to separate it from GhostBin.com, where I believe the developer runs their own copy. See this commit.
  • You should definitely set up your install with Let's Encrypt for SSL; a sample certbot run is shown after this list.
  • It seems like the binary was renamed back to ghostbin from spectre. Why? I don't know. I also noticed there is a binary in /home/ghostbin/go/bin/ but it doesn't seem to work? 🤷🏼‍♂️
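
As a follow-up to the Let's Encrypt note above, here's one way to do it on Ubuntu 20.04 using certbot's nginx plugin. Treat it as a sketch: it should detect the ghostbin.YOURDOMAIN.com server block created earlier and take care of the certificate and HTTPS configuration for you.

apt install certbot python3-certbot-nginx
certbot --nginx -d ghostbin.YOURDOMAIN.com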

Looking to set up your own Pleroma instance? This guide should walk you through everything you need to do to make it happen. Of note, this assumes you have some familiarity with Docker, PostgreSQL, and Linux. I also utilize Traefik to handle proxying requests.

First, most of this guide is the same thing that can be found here with a few changes. The reason I don't just tell you to go to that guide is that I have an already existing PostgreSQL installation along with a pre-existing network setup in my Docker stack.

docker-compose.yml

To start with this is my docker-compose.yml file:

services:
  pleroma:
    build: .
    image: pleroma
    container_name: pleroma
    hostname: pleroma.mydomain.com
    environment:
      - TZ=${TZ}
      - UID=${PUID}
      - GID=${PGID}
    ports:
      - 4001:4000
    networks:
      static:
        ipv4_address: 172.18.0.34
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - ${DOCKERCONFDIR}/pleroma/uploads:/pleroma/uploads
    depends_on:
      - postgres
    restart: unless-stopped
    cap_add:
      - SYS_PTRACE
  postgres:
    image: postgres:9.6
    container_name: postgres
    hostname: postgres.mydomain.com
    environment:
      - PGID=${PGID}
      - PUID=${PUID}
      - TZ=${TZ}
      - POSTGRES_USER=${POSTGRES_USER}
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - ${DOCKERCONFDIR}/postgresql/pg_data:/var/lib/postgresql/data
      - ${DOCKERCONFDIR}/postgresql/root:/root
    ports:
      - 5432:5432
    networks:
      static:
        ipv4_address: 172.18.0.14
    restart: unless-stopped
    cap_add:
      - SYS_PTRACE
networks:
  static:
    driver: bridge
    ipam:
      config:
        - subnet: 172.18.0.0/16
          gateway: 172.18.0.1
version: "2.4"

While this isn't the full configuration file, these are the parts which allow Pleroma to function. As noted above I already have a PostgreSQL instance and a pre-existing network created for my Docker stack. You should also note that I use some variables here, mostly in the environment and volumes sections for each container. You can use them too, or swap them out for their "real" values. I'd recommend using the variables with an .env file, but it's up to you.
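
For reference, here's a minimal sketch of what that .env file could look like, sitting next to the docker-compose.yml (docker-compose picks it up automatically). The values are placeholders, so swap in your own:

# placeholder values - adjust to your environment
TZ=America/Toronto
PUID=1000
PGID=1000
DOCKERCONFDIR=/home/jimmy/.docker/config
POSTGRES_USER=superuser
POSTGRES_PASSWORD=change-me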

PostgreSQL Setup

The first thing I do is create a new PostgreSQL user for Pleroma:

psql -U superuser -h localhost -p 5432

You'll want to change out 'superuser' for a user which can create users within PostgreSQL. The following will create a database for Pleroma, create a PostgreSQL user, and grant that user access to the new database.

CREATE DATABASE pleroma;
CREATE USER pleroma WITH ENCRYPTED PASSWORD 'PASSWORD-HERE';
GRANT ALL PRIVILEGES ON DATABASE pleroma TO pleroma;

It appears the database setup process creates an EXTENSION, so you'll need to give your PostgreSQL user the superuser permission. You can do so by running the following:

ALTER USER pleroma WITH SUPERUSER;
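
If you want to double check the grants before moving on, you can try connecting to the new database as the pleroma user (assuming PostgreSQL is reachable on localhost:5432, as published in the compose file above):

psql -U pleroma -h localhost -p 5432 -d pleroma

It should prompt for the password you set above and drop you into a psql prompt for the pleroma database.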

File System Setup

Once PostgreSQL has been set up, you'll want to set up your uploads folder. I've set mine up at /home/jimmy/.docker/config/pleroma/uploads. Next, I set up a folder where I will build the Pleroma Docker image. I've done this at /home/jimmy/.docker/builds/pleroma. Within that directory, create a new Dockerfile and place the following contents into it:

FROM elixir:1.9-alpine

ENV UID=911 GID=911 \
    MIX_ENV=prod

ARG PLEROMA_VER=develop

RUN apk -U upgrade \
    && apk add --no-cache \
       build-base \
       git

RUN addgroup -g ${GID} pleroma \
    && adduser -h /pleroma -s /bin/sh -D -G pleroma -u ${UID} pleroma

USER pleroma
WORKDIR pleroma

RUN git clone -b develop https://git.pleroma.social/pleroma/pleroma.git /pleroma \
    && git checkout ${PLEROMA_VER}

COPY config/secret.exs /pleroma/config/prod.secret.exs

RUN mix local.rebar --force \
    && mix local.hex --force \
    && mix deps.get \
    && mix compile

VOLUME /pleroma/uploads/

CMD ["mix", "phx.server"]

This Dockerfile is different from the one provided by the above-linked GitHub repository. The difference is the first line: I am utilizing a newer version of Elixir, which is required. If you do not use it, you will likely see the following error in your logs when trying to start up your Pleroma instance:

15:38:00.284 [info] Application pleroma exited: exited in: Pleroma.Application.start(:normal, [])
    ** (EXIT) an exception was raised:
        ** (RuntimeError) 
            !!!OTP VERSION WARNING!!!
            You are using gun adapter with OTP version 21.3.8.15, which doesn't support correct handling of unordered certificates chains. Please update your Erlang/OTP to at least 22.2.
            (pleroma) lib/pleroma/application.ex:57: Pleroma.Application.start/2
            (kernel) application_master.erl:277: :application_master.start_it_old/4
15:38:22.720 [info]  SIGTERM received - shutting down

Next create a config folder within your builds/pleroma directory. For example, my full path is /home/jimmy/.docker/builds/pleroma/config. Within there create a file called secret.exs. Open this file in your favorite text editor and paste in the following:

use Mix.Config

config :pleroma, Pleroma.Web.Endpoint,
   http: [ip: {0, 0, 0, 0}],
   url: [host: "pleroma.domain.tld", scheme: "https", port: 443],
   secret_key_base: "<use 'openssl rand -base64 48' to generate a key>"

config :pleroma, :instance,
  name: "Pleroma",
  email: "admin@email.tld",
  limit: 5000,
  registrations_open: true

config :pleroma, :media_proxy,
  enabled: false,
  redirect_on_failure: true,
  base_url: "https://cache.domain.tld"

# Configure your database
config :pleroma, Pleroma.Repo,
  adapter: Ecto.Adapters.Postgres,
  username: "pleroma",
  password: "pleroma",
  database: "pleroma",
  hostname: "postgres",
  pool_size: 10

Ensure that you update the host in the url line, the secret_key_base, the name, email, and the database information. Save and exit the file.

Building The Pleroma Docker Image

Alright, so with your Dockerfile and Pleroma configuration files in place we need to build the image! While in the same directory as your Dockerfile, run the following command:

docker build -t pleroma .
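
Since the Dockerfile defines a PLEROMA_VER build argument (defaulting to develop), you can optionally override it here if you'd like to pin a specific branch or tag; the placeholder below is whatever ref you want to build:

docker build --build-arg PLEROMA_VER=<branch-or-tag> -t pleroma .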

This may take a few minutes to complete. Once completed, we need to set up the database within PostgreSQL. This is another reason I am putting together this guide: the equivalent command from the other GitHub repository will not work with this setup.

docker run --rm -it --network=main_static pleroma mix ecto.migrate

That will take 30-45 seconds to run. Once completed we need to generate our web push keys. Use the following command in order to do so:

docker run --rm -it --network=main_static pleroma mix web_push.gen.keypair

Copy the output from the above command and place it at the bottom of your config/secret.exs file. Now we need to rebuild the Pleroma Docker image with this new configuration:

docker build -t pleroma .

You should be all set now! You just need to run docker-compose up -d and it should get everything started up. If all went well you should see something similar to:

Pleroma Instance
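
If the container doesn't come up or the page won't load, checking the container's logs is usually the quickest way to see what's going on (pleroma is the container_name from the compose file above):

docker logs -f pleroma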

Updating Your Pleroma Instance

So you've got your instance up and running, but how about keeping it up to date? Fortunately this is relatively easy as well. Just go into the directory with your Pleroma Dockerfile and run the following commands:

docker stop pleroma
docker build --no-cache -t pleroma .
docker run --rm -it --network=main_static pleroma mix ecto.migrate

Now run docker-compose up -d and a new container will be created with your newly built image!

