Updating Composer Packages With Docker

I don't have Composer installed directly on the server that hosts the Docker containers for my websites, and I don't run composer update inside the website images themselves, so I used the Composer Docker image to update packages by running it from within each website's directory. It worked something like this:

cd /home/jimmy/public_html/jimmyb.ninja
docker run --rm --interactive --tty --volume $PWD:/app composer update

This mounts the directory you're currently in into the Composer container and then runs the composer update command. The result is updated packages!
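
The same pattern works for any other Composer command as well; for example (these are just illustrations, not part of my original setup):

docker run --rm --interactive --tty --volume $PWD:/app composer install
docker run --rm --interactive --tty --volume $PWD:/app composer outdated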

One of my websites has a Composer package that requires bcmath, which of course I didn't have installed and which isn't available in the Composer Docker image, so I got around it by doing this instead:

cd /home/jimmy/public_html/jimmyb.ninja
docker run --rm --interactive --tty --volume $PWD:/app composer update --ignore-platform-reqs

Hopefully this helps someone else out!

Migrating Proxmox VMs to a Different Server

This past weekend I decided to move some VMs from one Proxmox server to another. Thankfully the process was very easy and could be done in under 10 commands! I utilized a 1TB external USB drive on my source system to store the backed-up VMs.

Let's get started! Make sure the source server can reach the destination server via SSH. First, move into the directory where you want to put your backed-up VMs. For me this was /mnt/storage. Then start taking backups of your VM(s).

vzdump 100 --dumpdir /mnt/storage --tmpdir /mnt/storage

The number 100 in the above example is the ID of the VM. Once the backup has been completed we'll want to copy it over to the destination server.

scp vzdump-qemu-100-2020_11_00-00_14_30.vma root@192.168.1.10:/mnt/storage2/vzdump-qemu-100-2020_11_00-00_14_30.vma

You can adjust the path to where you're sending it on the destination server. I used another 1TB USB drive on my destination server as well. Once the transfer is complete we need to restore it! We run this on the destination server:

cd /mnt/storage
qmrestore vzdump-qemu-100-2020_11_00-00_14_30.vma 110

First, make sure you go into the directory where you transferred the backup to. The last number in the second command will be the new ID of the VM. Since I already had some VMs on my destination server I just picked the next available ID.

Of note, depending on the size of the backups it can take some time to back up, transfer between source and destination, and restore. However, I didn't hit any snags and everything went smoothly!
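
If you have more than one VM to move, a small loop on the source server can handle the backup and copy steps in one go. This is just a sketch assuming VM IDs 100 through 102 and the same paths as above:

for id in 100 101 102; do
    vzdump $id --dumpdir /mnt/storage --tmpdir /mnt/storage
done
scp /mnt/storage/vzdump-qemu-*.vma root@192.168.1.10:/mnt/storage2/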

Setting Up Self Hosted GitHub Runners in Kubernetes/k3s

About a month ago someone posted a link to their blog article on r/self-hosted about setting up your own self-hosted Kubernetes GitHub Runners. Around this time I had just gotten my GitHub Enterprise instance working with actions and such so I was quite excited to see this.

Originally I had attempted to install a self-hosted GitHub runner on one of my servers, but because I was missing Node it didn't run properly. I then came across the source GitHub provides for setting up the runner environments they deploy to GitHub.com users. However, these are full-on Ubuntu environments with everything you could think of installed in them. If I recall, they were about 80-90GB in size. Nonetheless, I ended up setting up a couple of them as VMs. I quickly realized that maintaining them and keeping them updated would be another task I really didn't have time for. This method didn't make much sense for me, especially since most of what I was doing with GitHub Actions was being performed in Docker.

Thankfully this kind fellow put together this guide, which walks you through setting up GitHub runners in a Kubernetes environment. I'm a complete newb to Kubernetes, so this was an excellent opportunity to learn some more! While I followed most of the guide, there were a couple of things I did differently. In this article I'll go from nothing to running runners in your Kubernetes cluster!

I opted to go with k3s because it's something I'm familiar with setting up and using, and it's really easy to install. I first set up 3 Ubuntu 20.04 VMs on my Proxmox server. I allocated 2 cores, 40GB of disk space and 4GB of RAM to what would be my master node. My other 2 nodes consisted of 8 cores, 16GB of RAM and 250GB of disk space each. This may be overkill, but I had the resources to spare on the system. Make sure you disable swap on your systems; I did this by editing the /etc/fstab file and commenting out the swap line.
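
For reference, one way to do both of those steps in one go (a sketch; double-check the result in /etc/fstab afterwards):

sudo swapoff -a
sudo sed -i '/\sswap\s/ s/^/#/' /etc/fstab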

Once each VM was set up, I made sure to run apt update && apt upgrade on each one to ensure everything was as up to date as possible. I also like to use dpkg-reconfigure tzdata to set each VM to my timezone.
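
In other words, on each VM:

sudo apt update && sudo apt upgrade -y
sudo dpkg-reconfigure tzdata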

Next get Docker installed on your master and worker nodes.

sudo apt install apt-transport-https ca-certificates curl software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu focal stable"
sudo apt update
sudo apt install docker-ce
sudo systemctl status docker
sudo usermod -aG docker $LINUX_USERNAME

I personally use PostgreSQL as the datastore for k3s, but you can choose whatever option you'd like; there are a few to pick from. Here are a couple of quick commands I used to set up my PostgreSQL user and database:

CREATE USER k3s WITH ENCRYPTED PASSWORD '$PASSWORD';
CREATE DATABASE k3s;
GRANT ALL PRIVILEGES ON DATABASE k3s TO k3s;

Next install k3s on your master node:

curl -sfL https://get.k3s.io | sh -s - --datastore-endpoint 'postgres://$USERNAME:$PASSWORD@ip.add.ress:5432/k3s?sslmode=disable' --write-kubeconfig-mode 644 --docker --disable traefik --disable servicelb

This installs k3s in master node mode, uses Docker instead of containerd, and disables Traefik and the service load balancer.

Grab your token, which will be needed to set up the worker nodes. You can find the token at /var/lib/rancher/k3s/server/node-token.
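
For example:

sudo cat /var/lib/rancher/k3s/server/node-token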

On your worker nodes, get k3s installed in agent mode using these commands:

export K3S_URL=https://master-node-ip-address-or-url:6443
export K3S_TOKEN=K1009809sad1cf2317376e1fc892a7f48983939442479i987sa89ds::server:e28d3875948350349283927498324
curl -fsL https://get.k3s.io | K3S_URL=$K3S_URL K3S_TOKEN=$K3S_TOKEN sh -s agent --docker --disable traefik --disable servicelb

This sets your master node URL and token as variables and then uses those variables to install k3s. You'll notice I specify -s agent, which tells the installer to set up k3s in agent mode. Again I disable Traefik and the service load balancer. Given that the GitHub runners don't need to receive incoming traffic, I found having Traefik and the service load balancer unnecessary.

If everything went well, you can run kubectl get nodes from your master node and it should show your 3 nodes:

jimmy@kubemaster-runners-octocat-ninja:~$ kubectl get nodes
NAME                               STATUS   ROLES    AGE     VERSION
kubemaster-runners-octocat-ninja   Ready    master   6d17h   v1.18.9+k3s1
kubenode1-runners-octocat-ninja    Ready    <none>   6d17h   v1.18.9+k3s1
kubenode2-runners-octocat-ninja    Ready    <none>   6d16h   v1.18.9+k3s1

I also like to run this command to ensure that no jobs are scheduled on my master node; it's not required though:

kubectl taint node $masterNode k3s-controlplane=true:NoSchedule

This part is also not required, but I'm a Kubernetes newbie so having a GUI is helpful. First, install Helm 3:

curl -O https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3
bash ./get-helm-3 

You can confirm your helm version by using helm version. Next we need to add the Rancher charts repository:

helm repo add rancher-stable https://releases.rancher.com/server-charts/stable

This adds the stable version of the charts, but you can use latest as well. Next, create a namespace for Rancher:

kubectl create namespace cattle-system

Next we'll install Rancher using this command:

helm install rancher rancher-stable/rancher \
    --namespace cattle-system \
    --set hostname=rancher.octocat.ninja \
    --set tls=external

You can use kubectl -n cattle-system rollout status deploy/rancher to keep an eye on the deployment; it took around two minutes or less to install for me. Once that was done, I assigned an external IP to the rancher service:

kubectl patch svc rancher -p '{"spec":{"externalIPs":["192.168.1.5"]}}' -n cattle-system

You'll obviously want to make sure whatever IP you assign is routed to the system. If you have a domain pointing to the system you can use that to access Rancher, or you can use the IP. Once you're in Rancher, I recommend creating a new project; I made one called 'GitHub Runners'. Next, create a new namespace called docker-in-docker. You can do this from the command line or from within Rancher.

kubectl create ns docker-in-docker

If you did it on the command line, you can use Rancher to move the new namespace into your Project. Here's what my project looks like (don't worry about the other namespace for now):

Rancher - GitHub Runners Project

Next we're going to create a PersistentVolumeClaim. This can be done on the command line or in Rancher. I opted to go the Rancher route since it was easier. From the Projects/Namespaces page, click on the title of the project:

Rancher - GitHub Runners - Click Project Title

From this page click on the 'Import YAML' button:

Rancher - GitHub Runners - Click Import YAML

Paste in the following:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: dind
  namespace: docker-in-docker
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 50Gi

Make sure you've selected the 'Namespace: Import all resources into a specific namespace' radio button, and that your 'docker-in-docker' namespace is selected from the dropdown menu.

Rancher - Github Runners - Import YAML

You can adjust the storage size to whatever you feel comfortable with; as given, it will allow 50Gi of space for your Docker in Docker pod. You can always exec into the container to clear out unused Docker images and such.
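
For example, once the dind deployment created below is running, something like this (a sketch using the deployment name from the YAML further down) will clear out unused images:

kubectl -n docker-in-docker exec -it deploy/dind -- docker system prune -af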

Hit the Import button! On the 'Volumes' tab you should now see your volume!

Rancher - GitHub Runners - Volumes

Next we'll create a deployment for Docker in Docker. Again, I used the 'Import YAML' button for this. Make sure you have the 'Namespace: Import all resources into a specific namespace' radio button checked, and that your 'docker-in-docker' namespace is selected from the dropdown menu.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: dind
  namespace: docker-in-docker
spec:
  replicas: 1
  selector:
    matchLabels:
      workload: deployment-docker-in-docker-dind
  template:
    metadata:
      labels:
        workload: deployment-docker-in-docker-dind
    spec:
      containers:
      - command:
        - dockerd
        - --host=unix:///var/run/docker.sock
        - --host=tcp://0.0.0.0:2376
        env:
        - name: DOCKER_TLS_CERTDIR
        image: docker:19.03.12-dind
        imagePullPolicy: IfNotPresent
        name: dind
        resources: {}
        securityContext:
          privileged: true
          readOnlyRootFilesystem: false
        stdin: true
        tty: true
        volumeMounts:
        - mountPath: /var/lib/docker
          name: dind-storage
      volumes:
      - name: dind-storage
        persistentVolumeClaim:
          claimName: dind

In a nutshell this sets up a pod with a container that runs the Docker in Docker image. It tells the dockerd daemon inside the container where to put the socket file and to listen on TCP port 2376 on 0.0.0.0. By specifying DOCKER_TLS_CERTDIR as an empty environment variable, we also tell it not to use TLS. Like the author of the blog article, I have not specified any resource limits. As this server pretty much only handles my GitHub Runners and one other small Kubernetes cluster, I didn't feel the need to constrain my pods. You're more than welcome to set up resource limits, but it's not something I cover here. At the bottom of the above YAML you'll notice I also reference the persistent volume claim I previously made, which allows this deployment to use that volume. Hit Import and you should see your deployment show up in the Rancher interface!

Rancher - GitHub Runners - DIND Deployments

Next I built a Docker image which contains the GitHub Runner application itself. You can use the original blog author's Docker image, or you can build one yourself and push it to your own private registry or Docker Hub. My Dockerfile is as follows:

FROM debian:buster-slim

ENV GITHUB_PAT ""
ENV GITHUB_OWNER ""
ENV GITHUB_REPOSITORY ""
ENV RUNNER_WORKDIR "_work"
ENV RUNNER_LABELS ""

RUN apt-get update \
    && apt-get install -y \
        curl \
        sudo \
        git \
        jq \
        iputils-ping \
    && apt-get clean \
    && rm -rf /var/lib/apt/lists/* \
    && useradd -m github \
    && usermod -aG sudo github \
    && echo "%sudo ALL=(ALL) NOPASSWD:ALL" >> /etc/sudoers \
    && curl https://download.docker.com/linux/static/stable/x86_64/docker-19.03.9.tgz --output docker-19.03.9.tgz \
    && tar xvfz docker-19.03.9.tgz \
    && cp docker/* /usr/bin/

USER github
WORKDIR /home/github

RUN GITHUB_RUNNER_VERSION=$(curl --silent "https://api.github.com/repos/actions/runner/releases/latest" | jq -r '.tag_name[1:]') \
    && curl -Ls https://github.com/actions/runner/releases/download/v${GITHUB_RUNNER_VERSION}/actions-runner-linux-x64-${GITHUB_RUNNER_VERSION}.tar.gz | tar xz \
    && sudo ./bin/installdependencies.sh

COPY --chown=github:github entrypoint.sh ./entrypoint.sh
RUN sudo chmod u+x ./entrypoint.sh

ENTRYPOINT ["/home/github/entrypoint.sh"]

A couple of things to note here. I also install the Docker CLI, since we'll be using it to build and publish our own Docker images via GitHub Actions. The build also automatically fetches the latest version of the GitHub runner release and uses it; I believe the runner daemon itself checks for updates every few days as well. I had to modify my entrypoint.sh slightly from the default since I am using GitHub Enterprise. Once my image was built, I pushed it to my private registry server.
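
I won't reproduce my exact entrypoint.sh here, but a rough sketch of the registration flow looks like the following: it trades the GITHUB_PAT for a short-lived registration token through the GitHub API, registers the runner, and removes the registration when the container stops. GITHUB_HOST is a placeholder variable I'm introducing for illustration (point it at your GitHub Enterprise hostname or leave it as github.com); the other variables come from the Dockerfile above.

#!/bin/bash
# Sketch of an entrypoint.sh - GITHUB_HOST is a hypothetical variable used for illustration.
set -e

GITHUB_HOST=${GITHUB_HOST:-"github.com"}
if [ "${GITHUB_HOST}" = "github.com" ]; then
    API_URL="https://api.github.com"
else
    # GitHub Enterprise Server exposes its REST API under /api/v3
    API_URL="https://${GITHUB_HOST}/api/v3"
fi

# Exchange the personal access token for a short-lived runner registration token
REG_TOKEN=$(curl -s -X POST \
    -H "Authorization: token ${GITHUB_PAT}" \
    "${API_URL}/repos/${GITHUB_OWNER}/${GITHUB_REPOSITORY}/actions/runners/registration-token" \
    | jq -r .token)

# Only pass --labels if any were provided
LABEL_ARGS=""
if [ -n "${RUNNER_LABELS}" ]; then
    LABEL_ARGS="--labels ${RUNNER_LABELS}"
fi

# Register this container as a runner for the repository
./config.sh --unattended \
    --url "https://${GITHUB_HOST}/${GITHUB_OWNER}/${GITHUB_REPOSITORY}" \
    --token "${REG_TOKEN}" \
    --work "${RUNNER_WORKDIR}" \
    ${LABEL_ARGS}

# Deregister the runner when the container is stopped
cleanup() {
    REMOVE_TOKEN=$(curl -s -X POST \
        -H "Authorization: token ${GITHUB_PAT}" \
        "${API_URL}/repos/${GITHUB_OWNER}/${GITHUB_REPOSITORY}/actions/runners/remove-token" \
        | jq -r .token)
    ./config.sh remove --token "${REMOVE_TOKEN}"
}
trap cleanup EXIT

# Hand off to the runner process
./run.sh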

Next we'll create a new namespace for our runners. This can be done on the command line via:

kubectl create ns github-actions

Again, I recommend putting this new namespace in your GitHub Runners project in Rancher. Organization is awesome! Once you've done that, we'll need to create a new deployment for the runner(s). I again used Rancher and the wonderful 'Import YAML' button to do this. This time, however, make sure you select the 'github-actions' option under the 'Namespace' dropdown menu. Make sure you set the right Docker image as well (image: repository/github-actions-runner:latest is just a placeholder below)!

apiVersion: apps/v1
kind: Deployment
metadata:
  name: github-runner
  namespace: github-actions
  labels:
    app: github-runner
spec:
  replicas: 1
  selector:
    matchLabels:
      app: github-runner
  template:
    metadata:
      labels:
        app: github-runner
    spec:
      containers:
      - name: github-runner
        image: repository/github-actions-runner:latest
        env:
        - name: DOCKER_HOST
          value: tcp://dind.docker-in-docker:2376
        - name: GITHUB_OWNER
          value: $GITHUB_USERNAME
        - name: GITHUB_REPOSITORY
          value: $GITHUB_REPOSITORY_NAME
        - name: GITHUB_PAT
          valueFrom:
            secretKeyRef:
              name: github-actions-token
              key: pat

Replace $GITHUB_USERNAME and $GITHUB_REPOSITORY_NAME with your information.

Create a Personal Access Token for yourself within GitHub. This option can be found at Settings > Developer Settings > Personal Access Tokens. I just checked off 'repo' (which also selects its sub-options), then clicked Generate Token.

GitHub - Personal Access Token

You'll get a string of characters which is your token. Copy this, and we'll use it to create a secret within Kubernetes. You can use the Rancher UI to do this, with our favorite 'Import YAML' button! Make sure the 'github-actions' namespace is selected!

apiVersion: v1
stringData:
  pat: $YOUR_GITHUB_PERSONAL_ACCESS_TOKEN
kind: Secret
metadata:
  name: github-actions-token
  namespace: github-actions
type: Opaque
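
If you'd rather skip the YAML, the same secret can be created straight from the command line with kubectl (assuming the same names as above):

kubectl -n github-actions create secret generic github-actions-token \
    --from-literal=pat=$YOUR_GITHUB_PERSONAL_ACCESS_TOKEN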

Once you're done your new deployment should show up in the 'github-actions' namespace area!

Rancher - GitHub Runners - Project

The runner should also automatically show up under your repository's Settings > Actions page!

GitHub > Settings > Actions

I've set up 4-5 runners for the time being, but I know I will have a lot more for my other projects! One thing I do wish is that runners weren't repository-specific, or that they could just be deployed whenever an Action called for them. It seems kind of silly to have to dedicate at least one runner per repository; you'd think a runner could handle many repositories. For the time being though, this is an excellent solution for self-hosters who use GitHub Actions!

Upgrading My 2009 MacPro 5,1 to macOS Catalina 10.15.7

Upgrading My 2009 MacPro 5,1 to macOS Catalina

2009 MacPro with macOS Catalina

The other day I finally had a chance to look back into updating my 2009 MacPro to macOS Catalina. When I had done some research previously it appeared that it wouldn't be possible. To my excitement it seems they have figured out how to get it working though!

I would highly recommend this guide if you're looking to get your 2009 MacPro running macOS Catalina.

Of note, I had an older BootROM firmware (138.0.0.0.0), so I did have to update my MacPro to 144.0.0.0.0 before I could proceed. Thankfully it was super easy; I just had to snag the macOS Mojave installer, which allowed me to update my system. You simply download and open it, at which point it should advise you that a firmware update is needed.

Once I had my firmware updated, I went back to the OpenCore on the Mac Pro guide. One thing that frustrated me a little bit was that the guide calls for two disks, which meant I would be starting fresh, which I really didn't want to do. It also meant I would be moving to a spinning disk, as I didn't have any spare SSDs. However, in Part I, Step 4 of the guide, instead of selecting the blank drive I selected my SSD, figuring it would give me an error if that wouldn't work. Thankfully, no errors or warnings popped up saying macOS Catalina couldn't be installed in the selected location. About 15-20 minutes later it finished and rebooted off my SSD, where macOS Mojave and all my files had been, now upgraded to macOS Catalina!

Once that was done I went back to Steps 2 and 3 and performed those actions on my SSD. This effectively installed OpenCore to my SSD and reset my SSD as the main boot device. I stupidly followed the 'Toggle the VMM flag' instructions in Step 5, which I shouldn't have done until after I updated to 10.15.7 (it looks like the base install of macOS Catalina started me off at 10.15.6), so I did have to go back and untoggle the VMM flag.

Under Part II of the guide, I did not do 'Making External Drives Internal', or 'Enabling the Graphical Boot Picker' steps. I figured they weren't that important (for now).

One other side note is that my CPU shows up as an Intel Core i3 in the About This Mac window. This may be due to the 'Hybridization' step in the guide, but I am not 100% positive. I believe at one point my CPU did show properly in macOS Catalina. It's not a big issue, just cosmetic.

I'm quite excited to have the latest version of macOS running on my 2009 MacPro; it's breathed new life into the system. I am planning a CPU upgrade (Intel Xeon X5690) and I'd like to get 64GB of RAM in it. Perhaps at that point it could even become my daily driver!

I'll be back with another post once I get some new hardware!

Restoring a Disk Image and Creating a Bootable Thumb Drive in macOS 10.15.x

I wanted to quickly share this with everyone. I found it works better than using Disk Utility to restore a .dmg disk image to a USB thumb drive.

sudo /usr/sbin/asr --noverify --erase --source source --target target

Or for example:

sudo /usr/sbin/asr --noverify --erase --source /Users/Shared/yosemite.dmg --target /Volumes/Untitled

You might also find this useful to ensure the drive is bootable:

sudo bless --mount /Volumes/TheVolume --setBoot
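
If you're not sure which volume to point --target at, diskutil will list everything attached so you can identify your thumb drive:

diskutil list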

Hosting Your Own Ghostbin (Spectre) in 2020

A Ghostbin (Spectre) installation doesn't really require a lot of resources. I am currently running it on a system with 4x 2.26GHz CPUs, 8GBs of RAM and a 120GB disk. I've done it on much less though.

Installing Ghostbin (Spectre)

  1. Install your operating system. I used Ubuntu 20.04 Server.

  2. Install Go:

    cd /usr/local
    wget https://dl.google.com/go/go1.14.linux-amd64.tar.gz
    tar -C /usr/local -xzf go1.14.linux-amd64.tar.gz

    Add the following to the bottom of your /etc/profile file:

    export PATH=$PATH:/usr/local/go/bin

    You can either run that export at your command prompt as well, or log out and log back in.

  3. Install Mercurial and Python Pygments:

    apt install mercurial python3-pygments
  4. Install ansi2html:

    apt install python3-pip
    pip3 install ansi2html
  5. Install Git, I like to compile it myself so I get the latest version:

    cd /usr/local/src
    apt install autoconf libssl-dev zlib1g-dev libcurl4-openssl-dev tcl-dev gettext
    wget https://github.com/git/git/archive/v2.22.1.tar.gz
    tar zxvf v2.22.1.tar.gz
    cd git-2.22.1/
    make configure
    ./configure --prefix=/usr
    make -j2
    make install
  6. I recommend creating a new user to run your GhostBin code under:

    adduser ghostbin
  7. You should also set a password on the new user account using passwd ghostbin.

  8. Login as your new user account and add the following to your ~/.bashrc file:

    export GOPATH=$HOME/go
  9. Save and exit the file and run source ~/.bashrc.

  10. Next obtain the source code for GhostBin (login as your new user first):

    mkdir -p ~/go/src
    cd $HOME/go/src
    mkdir github.com
    cd github.com
    git clone https://github.com/DHowett/spectre.git
    cd spectre/
  11. At this point your full path should be something like - /home/ghostbin/go/src/github.com/spectre.

  12. Run go get.

  13. Run go build.

  14. Run which pygmentize. It should return /usr/bin/pygmentize. If not, no problem, just copy the path.

  15. You'll also want to run which ansi2html which should return /usr/local/bin/ansi2html. Again, if it doesn't no big deal, just copy the path.

  16. Update the languages.yml file with the paths for pygmentize which should be on line 6. Also update the path for ansi2html which should be on line 23. Save and exit. Here's my languages.yml up to line 25 to give you an example:

    formatters:
      default:
        name: default
        func: commandFormatter
        args:
        - /usr/bin/pygmentize
        - "-f"
        - html
        - "-l"
        - "%LANG%"
        - "-O"
        - "nowrap=True,encoding=utf-8"
      text:
        name: text
        func: plainText
      markdown:
        name: markdown
        func: markdown
      ansi:
        name: ansi
        func: commandFormatter
        args:
        - /usr/local/bin/ansi2html
        - "--naked"
      iphonesyslog:
  17. Next we'll need to build a CSS file which will give color to the pastes:

    pygmentize -f html -S $STYLE > public/css/theme-pygments.css

    You can choose from several styles/color themes:

    - monokai
    - manni
    - rrt
    - perldoc
    - borland
    - colorful
    - default
    - murphy
    - vs
    - trac
    - tango
    - fruity
    - autumn
    - bw
    - emacs
    - vim
    - pastie
    - friendly
    - native

    I used monokai:

    pygmentize -f html -S monokai > public/css/theme-pygments.css
  18. If all went well all that is left to do is to start the service:

    ./ghostbin
  19. Here's a screenshot of my install:
    Ghostbin Installation

  20. I would also recommend running the binary with the --help flag:

    $ ./ghostbin --help
    Usage of ./ghostbin:
    -addr string
        bind address and port (default "0.0.0.0:8080")
    -alsologtostderr
        log to standard error as well as files
    -log_backtrace_at value
        when logging hits line file:N, emit a stack trace
    -log_dir string
        If non-empty, write log files in this directory
    -logtostderr
        log to standard error instead of files
    -rebuild
        rebuild all templates for each request
    -root string
        path to generated file storage (default "./")
    -stderrthreshold value
        logs at or above this threshold go to stderr
    -v value
        log level for V logs
    -vmodule value
        comma-separated list of pattern=N settings for file-filtered logging

    This allows you to see flags you can run with the binary. I run mine as such:

    ./ghostbin -logtostderr

    This will just log the errors to the screen.
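
  21. Optionally, if you'd like GhostBin to start at boot and keep running after you log out, you can wrap it in a systemd unit. This is only a sketch based on the paths used earlier in this guide, so adjust it for your setup. Save it as /etc/systemd/system/ghostbin.service:

    [Unit]
    Description=GhostBin (Spectre) paste service
    After=network.target

    [Service]
    Type=simple
    User=ghostbin
    WorkingDirectory=/home/ghostbin/go/src/github.com/spectre
    ExecStart=/home/ghostbin/go/src/github.com/spectre/ghostbin -logtostderr
    Restart=on-failure

    [Install]
    WantedBy=multi-user.target

    Then reload systemd and enable the service:

    systemctl daemon-reload
    systemctl enable --now ghostbin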

Setting Up GhostBin w/ Nginx

  1. Install Nginx:

    apt install nginx
  2. Create a Nginx configuration file for GhostBin:

    nano /etc/nginx/sites-available/ghostbin.conf
    # Upstream configuration
    upstream ghostbin_upstream {  
        server 0.0.0.0:8080;
        keepalive 64;
    }
    
    # Public
    server {  
        listen 80;
        server_name ghostbin.YOURDOMAIN.com; # domain of my site
    
        location / {
            proxy_http_version 1.1;
            proxy_set_header   X-Real-IP        $remote_addr;
            proxy_set_header   X-Forwarded-For  $proxy_add_x_forwarded_for;
            proxy_set_header   X-NginX-Proxy    true;
            proxy_set_header   Host             $http_host;
            proxy_set_header   Upgrade          $http_upgrade;
            proxy_redirect     off;
            proxy_pass         http://ghostbin_upstream;
        }
    }

    You'll obviously want to update the server_name bit. Save and exit the file.

  3. Next we need to make a symlink so Nginx knows to load the configuration:

    cd /etc/nginx/sites-enabled
    ln -s ../sites-available/ghostbin.conf .
  4. Restart the Nginx service:

    systemctl restart nginx

    Your GhostBin site should now be available at http://ghostbin.YOURDOMAIN.com!

Notes

  • Whenever I start up the binary I see:

    E0815 21:19:58.915843   19895 main.go:773] Expirator Error: open expiry.gob: no such file or directory

    This doesn't appear to be a problem, and I haven't had any issues using GhostBin thus far. There is some code in main.go referencing it; it looks related to paste expiration, but I don't know Go so I can't be sure.

    pasteExpirator = gotimeout.NewExpirator(filepath.Join(arguments.root, "expiry.gob"), &ExpiringPasteStore{pasteStore})
  • It looks like the GhostBin repository has been renamed to 'spectre'. This was apparently done to "de-brand" it for people who want to run it themselves and to separate it from GhostBin.com, where I believe the developer runs their own copy. See this commit.

  • You should definitely set up your install with Let's Encrypt for SSL.

  • It seems like the binary was renamed back to ghostbin from spectre. Why? I don't know. I also noticed there is a binary in /home/ghostbin/go/bin/ but it doesn't seem to work? 🤷🏼‍♂️

Setting Up a Pleroma Instance in Docker

Looking to set up your own Pleroma instance? This guide should walk you through everything you need to do to make it happen. Of note, it assumes you have some familiarity with working with Docker, PostgreSQL and Linux. I also utilize Traefik to handle proxying requests.

First, most of this guide is the same thing that can be found here with a few changes. The reason I don't just tell you to go to that guide is that I have an already existing PostgreSQL installation along with a pre-existing network setup in my Docker stack.

docker-compose.yml

To start with this is my docker-compose.yml file:

version: "2.4"

services:
  pleroma:
    build: .
    image: pleroma
    container_name: pleroma
    hostname: pleroma.mydomain.com
    environment:
      - TZ=${TZ}
      - UID=${PUID}
      - GID=${PGID}
    ports:
      - 4001:4000
    networks:
      static:
        ipv4_address: 172.18.0.34
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - ${DOCKERCONFDIR}/pleroma/uploads:/pleroma/uploads
    depends_on:
      - postgres
    restart: unless-stopped
    cap_add:
      - SYS_PTRACE
  postgres:
    image: postgres:9.6
    container_name: postgres
    hostname: postgres.mydomain.com
    environment:
      - PGID=${PGID}
      - PUID=${PUID}
      - TZ=${TZ}
      - POSTGRES_USER=${POSTGRES_USER}
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - ${DOCKERCONFDIR}/postgresql/pg_data:/var/lib/postgresql/data
      - ${DOCKERCONFDIR}/postgresql/root:/root
    ports:
      - 5432:5432
    networks:
      static:
          ipv4_address: 172.18.0.14
    restart: unless-stopped
    cap_add:
      - SYS_PTRACE
networks:
  static:
    driver: bridge
    ipam:
      config:
        - subnet: 172.18.0.0/16
          gateway: 172.18.0.1

While this isn't the full configuration file, these are the parts which allow Pleroma to function. As noted above I already have a PostgreSQL instance and a pre-existing network created for my Docker stack. You should also note that I use some variables here, mostly in the environment and volumes sections for each container. You can use them too, or swap them out for their "real" values. I'd recommend using the variables with an .env file, but it's up to you.

PostgreSQL Setup

The first thing I do is create a new PostgreSQL user for Pleroma:

psql -U superuser -h localhost -p 5432
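
Since PostgreSQL itself runs in the postgres container from the compose file above, you can also reach psql through docker exec rather than needing a local client; roughly:

docker exec -it postgres psql -U superuser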

You'll want to change out 'superuser' for a user which can create users within PostgreSQL. The following will create a database for Pleroma, create a PostgreSQL user, and grant that user access to the new database.

CREATE DATABASE pleroma;
CREATE USER pleroma WITH ENCRYPTED PASSWORD 'PASSWORD-HERE';
GRANT ALL PRIVILEGES ON DATABASE pleroma TO pleroma;

It appears the database setup process creates an EXTENSION, so you'll need to give your PostgreSQL user the superuser permission. You can do so by running the following:

ALTER USER pleroma WITH SUPERUSER;

File System Setup

Once PostgreSQL is set up, you'll want to set up your uploads folder. I've set mine up at /home/jimmy/.docker/config/pleroma/uploads. Next, I set up a folder where I will build the Pleroma Docker image. I've done this at /home/jimmy/.docker/builds/pleroma. Within that directory, create a new Dockerfile and place the following contents into it:

FROM elixir:1.9-alpine

ENV UID=911 GID=911 \
    MIX_ENV=prod

ARG PLEROMA_VER=develop

RUN apk -U upgrade \
    && apk add --no-cache \
       build-base \
       git

RUN addgroup -g ${GID} pleroma \
    && adduser -h /pleroma -s /bin/sh -D -G pleroma -u ${UID} pleroma

USER pleroma
WORKDIR pleroma

RUN git clone -b develop https://git.pleroma.social/pleroma/pleroma.git /pleroma \
    && git checkout ${PLEROMA_VER}

COPY config/secret.exs /pleroma/config/prod.secret.exs

RUN mix local.rebar --force \
    && mix local.hex --force \
    && mix deps.get \
    && mix compile

VOLUME /pleroma/uploads/

CMD ["mix", "phx.server"]

This Dockerfile is different from the one provided by the GitHub repository linked above. The difference is the first line: I am utilizing a newer version of Elixir, which is required. If you do not use it you will likely see the following error in your logs when trying to start up your Pleroma instance:

15:38:00.284 [info] Application pleroma exited: exited in: Pleroma.Application.start(:normal, [])
    ** (EXIT) an exception was raised:
        ** (RuntimeError) 
            !!!OTP VERSION WARNING!!!
            You are using gun adapter with OTP version 21.3.8.15, which doesn't support correct handling of unordered certificates chains. Please update your Erlang/OTP to at least 22.2.
            (pleroma) lib/pleroma/application.ex:57: Pleroma.Application.start/2
            (kernel) application_master.erl:277: :application_master.start_it_old/4
15:38:22.720 [info]  SIGTERM received - shutting down

Next create a config folder within your builds/pleroma directory. For example, my full path is /home/jimmy/.docker/builds/pleroma/config. Within there create a file called secret.exs. Open this file in your favorite text editor and paste in the following:

use Mix.Config

config :pleroma, Pleroma.Web.Endpoint,
   http: [ ip: {0, 0, 0, 0}, ],
   url: [host: "pleroma.domain.tld", scheme: "https", port: 443],
   secret_key_base: "<use 'openssl rand -base64 48' to generate a key>"

config :pleroma, :instance,
  name: "Pleroma",
  email: "admin@email.tld",
  limit: 5000,
  registrations_open: true

config :pleroma, :media_proxy,
  enabled: false,
  redirect_on_failure: true,
  base_url: "https://cache.domain.tld"

# Configure your database
config :pleroma, Pleroma.Repo,
  adapter: Ecto.Adapters.Postgres,
  username: "pleroma",
  password: "pleroma",
  database: "pleroma",
  hostname: "postgres",
  pool_size: 10

Ensure that you update the host in the url line, the secret_key_base, the name, email, and the database information. Save and exit the file.

Building The Pleroma Docker Image

Alright, so with your Dockerfile and Pleroma configuration files in place we need to build the image! While in the same directory as your Dockerfile, run the following command:

docker build -t pleroma .

This may take a few minutes to complete. Once it's done we need to set up the database within PostgreSQL. This is another reason I put together this guide: the command from the other GitHub repository will not work.

docker run --rm -it --network=main_static pleroma mix ecto.migrate

That will take 30-45 seconds to run. Once completed we need to generate our web push keys. Use the following command in order to do so:

docker run --rm -it --network=main_static pleroma mix web_push.gen.keypair

Copy the output from the above command, and place it at the bottom of your config/secret.exs file. Now we need to rebuild the Pleroma Docker image again with this new configuration, so do so using:

docker build -t pleroma .

You should be all set now! You just need to run docker-compose up -d and it should get everything started up. If all went well you should see something similar to:

Pleroma Instance

Updating Your Pleroma Instance

So you've got your instance up and running, but how about keeping it up to date? Fortunately this is relatively easy as well! Just go into the directory with your Pleroma Dockerfile and run the following commands:

docker stop pleroma
docker build --no-cache -t pleroma .
docker run --rm -it --network=main_static pleroma mix ecto.migrate

Now run docker-compose up -d and a new container will be created with your newly built image!

Installing Pixelfed on Ubuntu 19.10

Prerequisites

This guide assumes:

  • You've already got a server set up with Ubuntu 19.10
  • Nginx
  • MySQL 5.6+ (or MariaDB 10.2.7+)
  • PHP 7.2+ with the following extensions: bcmath, ctype, curl, exif, iconv, imagick, intl, json, mbstring, mysql, gd, openssl, tokenizer, xml and zip
  • Redis
  • ImageMagick
  • Supervisor
  • Git

I would also recommend installing JPEGOptim, OptiPNG and PNGQuant. These will strip EXIF data and optimize JPEG and PNG photos.

$ apt install jpegoptim
$ apt install optipng
$ apt install pngquant

Here's a list of the packages I have installed:

MariaDB

libmariadb3
mariadb-client-10.4
mariadb-client-core-10.4
mariadb-client
mariadb-common
mariadb-server-10.4
mariadb-server-core-10.4
mariadb-server
mysql-common

ImageMagick

imagemagick-6-common
imagemagick-6.q16
imagemagick
libmagickcore-6.q16-6-extra
libmagickcore-6.q16-6
libmagickwand-6.q16-6
php-imagick

PHP

php-common
php-composer-ca-bundle
php-composer-semver
php-composer-spdx-licenses
php-composer-xdebug-handler
php-igbinary
php-imagick
php-json-schema
php-psr-container
php-psr-log
php-redis
php-symfony-console
php-symfony-filesystem
php-symfony-finder
php-symfony-process
php-symfony-service-contracts
php7.3-bcmath
php7.3-cli
php7.3-common
php7.3-curl
php7.3-fpm
php7.3-gd
php7.3-intl
php7.3-json
php7.3-mbstring
php7.3-mysql
php7.3-opcache
php7.3-readline
php7.3-tidy
php7.3-xml
php7.3-xmlrpc
php7.3-zip

Misc

git/eoan-updates,eoan-security,now 1:2.20.1-2ubuntu1.19.10.1 amd64 [installed,automatic]
composer/eoan,now 1.9.0-2 all [installed]
supervisor/eoan,now 3.3.5-1 all [installed]
libhiredis0.14/eoan,now 0.14.0-3 amd64 [installed,automatic]
php-redis/eoan,now 5.2.1+4.3.0-1+ubuntu19.10.1+deb.sury.org+1 amd64 [installed]
redis-server/eoan,now 5:5.0.5-2build1 amd64 [installed]
redis-tools/eoan,now 5:5.0.5-2build1 amd64 [installed,automatic]
sendmail-base/eoan,now 8.15.2-13 all [installed,automatic]
sendmail-bin/eoan,now 8.15.2-13 amd64 [installed,automatic]
sendmail-cf/eoan,now 8.15.2-13 all [installed,automatic]
sendmail/eoan,now 8.15.2-13 all [installed]

Installation

First create a new user:

$ adduser pixelfed

Login as that user via SSH.

Now, clone the Pixelfed repository, move into it and use composer to install necessary packages:

$ cd ~
$ git clone https://github.com/pixelfed/pixelfed
$ cd pixelfed
$ composer install

Next we'll need to create a configuration file for the site:

$ cp .env.example .env
$ php artisan key:generate

Next open the .env file in your editor of choice and update it so it matches your configuration. Of note, I utilized Mailgun for my email service; however, I could not get it working using the 'mailgun' driver. I had to use the 'smtp' driver with SMTP credentials I created from the Mailgun dashboard. Here's what I have for my mail configuration:

MAIL_DRIVER=smtp
MAIL_HOST=smtp.mailgun.org
MAIL_PORT=587
MAIL_USERNAME="pixelfed@mydomain.com"
MAIL_PASSWORD="password-here"
MAIL_ADDRESS="pixelfed@mydomain.com"
MAIL_FROM_ADDRESS="pixelfed@mydomain.com"
MAIL_FROM_NAME="Pixelfed"
MAIL_ENCRYPTION=tls

Once you've updated your .env file, save it. Next we need to get the database set up. This is how I created my MySQL user and database.

CREATE DATABASE pixelfed CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;
CREATE USER 'pixelfed'@'localhost' IDENTIFIED BY 'PASSWORD';
GRANT ALL PRIVILEGES ON pixelfed.* TO 'pixelfed'@'localhost';
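
For reference, the matching database section of the .env file would then look roughly like this (these are the standard Laravel settings; the values are placeholders):

DB_CONNECTION=mysql
DB_HOST=127.0.0.1
DB_PORT=3306
DB_DATABASE=pixelfed
DB_USERNAME=pixelfed
DB_PASSWORD=PASSWORD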

Run the command php artisan migrate. This will create the necessary tables and structure in the MySQL database. Make sure to answer 'yes' to any prompt(s) that come up. It may take a minute or two to complete.

Next run the following commands (especially if you're running a production environment):

$ php artisan config:cache
$ php artisan route:cache
$ php artisan view:cache

Next up is installing Horizon. This will manage the application's queued jobs (things like resizing images, deleting images, etc.):

php artisan horizon:install

Next we'll want to create a supervisor configuration for Horizon. I do this as my root user. Create the file /etc/supervisor/conf.d/horizon.conf and place the following inside:

[program:horizon]
process_name=%(program_name)s
command=php /home/pixelfed/pixelfed/artisan horizon
autostart=true
autorestart=true
user=pixelfed
redirect_stderr=true
stdout_logfile=/home/pixelfed/pixelfed/horizon.log
stopwaitsecs=3600

I recommend running the following commands (they will enable and restart the service):

$ systemctl enable supervisor
$ systemctl restart supervisor

If you run ps auxfwww you should see something similar to this:

root     18629  0.0  0.2  27672 20696 ?        Ss   12:46   0:01 /usr/bin/python2 /usr/bin/supervisord -n -c /etc/supervisor/supervisord.conf
pixelfed 18651  0.1  0.7 136724 57568 ?        S    12:46   0:04  \_ php /home/pixelfed/pixelfed/artisan horizon
pixelfed 18658  0.1  0.7 136724 57844 ?        S    12:46   0:05      \_ /usr/bin/php7.3 artisan horizon:supervisor hostname-tixv:supervisor-1 redis --delay=0 --memory=128 --queue=high,default,feed --sleep=3 --timeout=60 --tries=3 --balance=auto --max-processes=20 --min-processes=1 --nice=0
pixelfed 18672  0.0  0.6 134676 56724 ?        S    12:46   0:01          \_ /usr/bin/php7.3 artisan horizon:work redis --delay=0 --memory=128 --queue=high --sleep=3 --timeout=60 --tries=3 --supervisor=hostname-tixv:supervisor-1
pixelfed 18677  0.0  0.7 138968 58944 ?        S    12:46   0:01          \_ /usr/bin/php7.3 artisan horizon:work redis --delay=0 --memory=128 --queue=default --sleep=3 --timeout=60 --tries=3 --supervisor=hostname-tixv:supervisor-1
pixelfed 18682  0.0  0.6 134676 56192 ?        S    12:46   0:01          \_ /usr/bin/php7.3 artisan horizon:work redis --delay=0 --memory=128 --queue=feed --sleep=3 --timeout=60 --tries=3 --supervisor=hostname-tixv:supervisor-1

Next (and you can do this as your root user or pixelfed user), add a cronjob to the pixelfed user:

* * * * * cd /home/pixelfed/pixelfed && php artisan schedule:run >> /dev/null 2>&1
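
If you're adding it as root, one way to edit the pixelfed user's crontab is:

crontab -u pixelfed -e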

You can read this for more information on what sorts of things are done during this scheduled task.

Post-Installation Tasks

Import Cities

Run the command php artisan import:cities to import cities into your database. This will allow users to select locations when they're posting content.

Configure Nginx

Create a file - /etc/nginx/sites-available/mydomain.com.conf with the following contents:

server {
    server_name mydomain.com;
    listen 80;
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl http2;
    server_name mydomain.com;
    root /home/pixelfed/pixelfed/public;

    ssl_certificate /etc/letsencrypt/live/mydomain.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/mydomain.com/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf;

    add_header X-Frame-Options "SAMEORIGIN";
    add_header X-XSS-Protection "1; mode=block";
    add_header X-Content-Type-Options "nosniff";

    client_max_body_size 20m;
    client_body_buffer_size 128k;

    index index.html index.htm index.php;

    charset utf-8;

    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }

    location = /favicon.ico { access_log off; log_not_found off; }
    location = /robots.txt  { access_log off; log_not_found off; }

    error_page 404 /index.php;

    location ~ \.php$ {
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass unix:/run/php/php7.3-pixelfed-fpm.sock;
        fastcgi_index index.php;
        include fastcgi.conf;
    }

    location ~ /\.(?!well-known).* {
        deny all;
    }
}

Of course you'll want to swap out mydomain.com in the file name and file contents with your domain.

Next, we want to create a PHP-FPM pool just for our pixelfed user. So create a file /etc/php/7.3/fpm/pool.d/pixelfed.photos.conf and place the following inside of it:

[pixelfed]
user = pixelfed
group = pixelfed
listen = /run/php/php7.3-pixelfed-fpm.sock
listen.owner = www-data
listen.group = www-data
pm = dynamic 
pm.max_children = 75 
pm.start_servers = 10 
pm.min_spare_servers = 5 
pm.max_spare_servers = 20 
pm.process_idle_timeout = 10s

Now, we're going to run these commands to enable Nginx and PHP-FPM and restart them:

$ systemctl enable nginx
$ systemctl restart nginx
$ systemctl enable php7.3-fpm
$ systemctl restart php7.3-fpm

You should now have a running system! Go ahead and visit your site and sign up for a new account. If you've enabled email verification, make sure you complete that. Once you're verified, SSH back in as the pixelfed user, go into /home/pixelfed/pixelfed and run php artisan user:admin username_here. This will turn your account into an admin account. Once that is done you can visit the admin area of your site (it's linked in the dropdown menu in the upper right corner) as well as the Horizon dashboard, which should show as Active.

Pixelfed Horizon Dashboard

I also recommend running php artisan storage:link.

The Unofficial Gitpod Installation via Docker Container Guide

As of December 1, 2021, single-machine deployments of Gitpod (such as the one below) are not officially supported!

This guide will show you how to fetch a Let's Encrypt SSL certificate, run Gitpod as a Docker container using Docker Compose and configure it so it works with Traefik v2.x.

Previously, in order to run Gitpod you needed to use either Google Cloud Platform, which I found to be prohibitively expensive, or your own vanilla Kubernetes setup. Going the vanilla Kubernetes route was fun and a great learning experience, but it required running another server in my house that used more electricity and generated more heat. Thankfully Gitpod can now be run as a single Docker container!

The information and directions I found were pretty good and got me started; however, there were some changes in how I deployed it. I figured it would be worth sharing my experience of running Gitpod via a single Docker container with you all.

For what it's worth, I deployed this on a Dell R620 with 2x Intel Xeon E5-2640s, 220GB of RAM, a 500GB SSD boot drive and a 5.6TB RAID5 array for data. Obviously you don't need all that if you're just running Gitpod, but I am also running roughly 70 other Docker containers with other services.

First clone the following Git repository as such:

git clone https://github.com/wilj/gitpod-docker-k3s.git

This was a great starting point for running Gitpod via Docker; however, there were a few files I had to update. The first one was the create-certs.sh file. I use NS1 to manage the DNS for my Gitpod domain. My create-certs.sh file looks like this:

#!/bin/bash
set -euox

EMAIL=$1
WORKDIR=$(pwd)/tmpcerts

mkdir -p $WORKDIR

sudo docker run -it --rm --name certbot \
    -v $WORKDIR/etc:/etc/letsencrypt \
    -v $WORKDIR/var:/var/lib/letsencrypt \
    -v $(pwd)/secrets/nsone.ini:/etc/nsone.ini:ro \
        certbot/dns-nsone certonly \
            -v \
            --agree-tos --no-eff-email \
            --email $EMAIL \
            --dns-nsone \
            --dns-nsone-credentials /etc/nsone.ini \
            --dns-nsone-propagation-seconds 30 \
            -d mydomain.com \
            -d \*.mydomain.com \
            -d \*.ws.mydomain.com

sudo find $WORKDIR/etc/live -name "*.pem" -exec sudo cp -v {} $(pwd)/certs \;
sudo chown -Rv $USER:$USER $(pwd)/certs
chmod -Rv 700 $(pwd)/certs

sudo rm -rfv $WORKDIR

openssl dhparam -out $(pwd)/certs/dhparams.pem 2048

You'll see I adjusted the top bit to get rid of the DOMAIN variable and swapped out the instances where it was used ($DOMAIN) for my actual domain. I also had to escape the * used in the docker run command, because the script wouldn't execute properly with the asterisks in place.

Next you'll likely need to create a secrets file to store your API key for the DNS manager you're using, in my case NS1. I created this in the secrets/ directory as nsone.ini. It looks something like this (obviously the key is made up here):

# NS1 API credentials used by Certbot
dns_nsone_api_key = dhsjkas8d7sd7f7s099n
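
Since this file holds an API key, it's worth tightening its permissions, for example:

chmod 600 secrets/nsone.ini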

Now we can generate our certificates by running this script. The script will create (and remove once done) the necessary DNS records and fetch an SSL certificate from Let's Encrypt.

./create-certs.sh <email>

Once this is done, you'll see the SSL certificate and files in your ./certs directory. There should be five (5) files:

cert.pem
chain.pem
dhparams.pem
fullchain.pem
privkey.pem

Now you can either run the setup.sh file as such:

./setup.sh <domain> <dns server>

or set up a new service in a docker-compose.yml file. The following is what I have in my Docker Compose file; I recommend you review it carefully so it works in your environment rather than just copying and pasting.

  gitpod:
    image: eu.gcr.io/gitpod-core-dev/build/gitpod-k3s:latest
    container_name: gitpod
    labels:
      - traefik.http.routers.gitpod.rule=Host(`domain.com`) || HostRegexp(`domain.com`,`{subdomain:[A-Za-z0-9]+}.domain.com`,`{subdomain:[A-Za-z0-9-_]+}.ws.domain.com`)
      - traefik.http.routers.gitpod.entrypoints=websecure
      - traefik.http.routers.gitpod.service=gitpod
      - traefik.http.routers.gitpod.tls=true
      - traefik.http.services.gitpod.loadbalancer.server.port=443
      - traefik.http.services.gitpod.loadbalancer.server.scheme=https
    environment:
      - DOMAIN=domain.com
      - DNSSERVER=8.8.8.8
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - /run/containerd/containerd.sock:/run/containerd/containerd.sock
      - ${DOCKER_CONF_DIR}/gitpod/values:/values
      - ${DOCKER_CONF_DIR}/gitpod/certs:/certs
      - gitpod-docker:/var/gitpod/docker
      - gitpod-docker-registry:/var/gitpod/docker-registry
      - gitpod-minio:/var/gitpod/minio
      - gitpod-mysql:/var/gitpod/mysql
      - gitpod-workspaces:/var/gitpod/workspaces
    networks:
      - production
    depends_on:
      - traefik
    restart: unless-stopped
    cap_add:
      - SYS_PTRACE
volumes:
  gitpod-docker:
    driver: local
  gitpod-docker-registry:
    driver: local
  gitpod-minio:
    driver: local
  gitpod-mysql:
    driver: local
  gitpod-workspaces:
    driver: local

networks:
  production:
    driver: bridge
    ipam:
      config:
        - subnet: 172.18.0.0/16
          gateway: 172.18.0.1

You'll see I have ${DOCKER_CONF_DIR} there, which is an environment variable (stored in my .env file) pointing to /mnt/data/docker/config/gitpod/ on my server. You can set up something similar or hardcode the path in your docker-compose file. Whatever you do, make sure you copy those five (5) SSL files I previously mentioned into a new directory named certs within that directory. So for example I have my SSL certificate files located at /mnt/data/docker/config/gitpod/certs.
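
For example, something along these lines copies the freshly generated certificates into place (assuming the repository's certs/ output directory and my paths above; adjust for yours):

mkdir -p /mnt/data/docker/config/gitpod/certs
cp certs/*.pem /mnt/data/docker/config/gitpod/certs/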

You'll also need to create another directory within the gitpod directory called values. So for me this is /mnt/data/docker/config/gitpod/values. Within there we're going to create three (3) more YAML files.

auth-providers.yaml
minio-secrets.yaml
mysql.yaml

In the first file, auth-providers.yaml, I have set up GitHub OAuth application details. In order to log in to your Gitpod instance you'll need to set this file up. I have a GitHub Enterprise instance that I will be using for authentication. My auth-providers.yaml file looks like this:

authProviders:
- id: "GitHub"
  host: "githubenterprise.com"
  type: "GitHub"
  oauth:
    clientId: "7d73h3b933829d9"
    clientSecret: "asu8a9sf9h89a9892n2n209201934b8334uhnraf987"
    callBackUrl: "https://gitpod-domain.com/auth/github/callback"
    settingsUrl: "https://githubenterprise.com/settings/connections/applications/7d73h3b933829d9"
  description: ""
  icon: ""

The above is an example and will need to be adjusted accordingly. You can also set it up with GitHub.com, GitLab.com or your own self-hosted GitLab instance.

The next file minio-secrets.yaml needs to contain a username and password that will be used for the MinIO instance that will run within the k3s Kubernetes cluster in the Gitpod Docker container. I used the following command to generate some random strings:

openssl rand -hex 32

I ran that twice, once to create a string for the username and again for the password. Your minio-secrets.yaml should look like this:

minio:
  accessKey: 9d0d6aa1c9d9981fadc103a9e3a5bb56929df51de22439ab1410249c879429b1
  secretKey: 7f0f8ccd7219a1ef87cd30d33751469a491c54df062c8ca28517602576725276

Obviously, replace those strings with the values generated by your own openssl runs.

Now we need to create the mysql.yaml with the following contents:

db:
  host: mysql
  port: 3306
  password: test

Once that is all set we can start up the Gitpod Docker container! I run:

docker-compose -p main up -d gitpod

You can adjust this command to your environment; for example, you may not need the -p main bit. Once you run the above command it'll take a while for everything to get set up and running. What I do (since I'm impatient :stuck_out_tongue_closed_eyes:) is run this command, which will show you the status of the k3s Kubernetes cluster being set up inside the Gitpod Docker container:

docker exec gitpod kubectl get all --all-namespaces

You should see output similar to this:

NAMESPACE     NAME                                          READY   STATUS      RESTARTS   AGE
kube-system   pod/local-path-provisioner-7c458769fb-4zxlq   1/1     Running     0          13d
kube-system   pod/coredns-854c77959c-pqq94                  1/1     Running     0          13d
kube-system   pod/metrics-server-86cbb8457f-84kjc           1/1     Running     0          13d
default       pod/svclb-proxy-6gxms                         2/2     Running     0          13d
default       pod/blobserve-6bdd97d5dc-dzfnd                1/1     Running     0          13d
default       pod/dashboard-859c9bf868-mhvv7                1/1     Running     0          13d
default       pod/ws-manager-84486dc88c-p9dqt               1/1     Running     0          13d
default       pod/ws-scheduler-ff4d8d9dd-2wf28              1/1     Running     0          13d
default       pod/registry-facade-pvnpn                     1/1     Running     0          13d
default       pod/content-service-656fd85977-dkrvp          1/1     Running     0          13d
default       pod/theia-server-568fb48db5-fhknk             1/1     Running     0          13d
default       pod/registry-65ff9d5744-pxx96                 1/1     Running     0          13d
default       pod/minio-84fcc5d488-zcdj8                    1/1     Running     0          13d
default       pod/ws-proxy-5d5cd8fc64-tp97v                 1/1     Running     0          13d
default       pod/image-builder-7d97c4b4fb-wdb9l            2/2     Running     0          13d
default       pod/proxy-85b684df9b-fvl77                    1/1     Running     0          13d
default       pod/ws-daemon-xdnmg                           1/1     Running     0          13d
default       pod/node-daemon-rn5xs                         1/1     Running     0          13d
default       pod/messagebus-f98948794-gqcqp                1/1     Running     0          13d
default       pod/mysql-7cbb9c9586-l8slq                    1/1     Running     0          13d
default       pod/gitpod-helm-installer                     0/1     Completed   0          13d
default       pod/ws-manager-bridge-69856554ff-wxqw9        1/1     Running     0          13d
default       pod/server-84cf48b766-pt9gp                   1/1     Running     0          13d

NAMESPACE     NAME                      TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                                 AGE
default       service/kubernetes        ClusterIP      10.43.0.1       <none>        443/TCP                                 13d
kube-system   service/kube-dns          ClusterIP      10.43.0.10      <none>        53/UDP,53/TCP,9153/TCP                  13d
kube-system   service/metrics-server    ClusterIP      10.43.97.99     <none>        443/TCP                                 13d
default       service/dashboard         ClusterIP      10.43.154.252   <none>        3001/TCP                                13d
default       service/ws-manager        ClusterIP      10.43.49.193    <none>        8080/TCP                                13d
default       service/theia-server      ClusterIP      10.43.200.184   <none>        80/TCP                                  13d
default       service/server            ClusterIP      10.43.138.124   <none>        3000/TCP,9500/TCP                       13d
default       service/ws-proxy          ClusterIP      10.43.245.248   <none>        8080/TCP                                13d
default       service/mysql             ClusterIP      10.43.2.56      <none>        3306/TCP                                13d
default       service/minio             ClusterIP      10.43.243.101   <none>        9000/TCP                                13d
default       service/registry          ClusterIP      10.43.10.110    <none>        443/TCP                                 13d
default       service/registry-facade   ClusterIP      10.43.194.106   <none>        3000/TCP                                13d
default       service/messagebus        ClusterIP      10.43.135.179   <none>        5672/TCP,25672/TCP,4369/TCP,15672/TCP   13d
default       service/blobserve         ClusterIP      10.43.36.3      <none>        4000/TCP                                13d
default       service/content-service   ClusterIP      10.43.112.64    <none>        8080/TCP                                13d
default       service/image-builder     ClusterIP      10.43.241.227   <none>        8080/TCP                                13d
default       service/db                ClusterIP      10.43.34.57     <none>        3306/TCP                                13d
default       service/proxy             LoadBalancer   10.43.184.145   172.18.0.72   80:31895/TCP,443:30753/TCP              13d

NAMESPACE   NAME                             DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
default     daemonset.apps/svclb-proxy       1         1         1       1            1           <none>          13d
default     daemonset.apps/registry-facade   1         1         1       1            1           <none>          13d
default     daemonset.apps/ws-daemon         1         1         1       1            1           <none>          13d
default     daemonset.apps/node-daemon       1         1         1       1            1           <none>          13d

NAMESPACE     NAME                                     READY   UP-TO-DATE   AVAILABLE   AGE
kube-system   deployment.apps/local-path-provisioner   1/1     1            1           13d
kube-system   deployment.apps/coredns                  1/1     1            1           13d
kube-system   deployment.apps/metrics-server           1/1     1            1           13d
default       deployment.apps/blobserve                1/1     1            1           13d
default       deployment.apps/dashboard                1/1     1            1           13d
default       deployment.apps/ws-manager               1/1     1            1           13d
default       deployment.apps/ws-scheduler             1/1     1            1           13d
default       deployment.apps/content-service          1/1     1            1           13d
default       deployment.apps/theia-server             1/1     1            1           13d
default       deployment.apps/minio                    1/1     1            1           13d
default       deployment.apps/ws-proxy                 1/1     1            1           13d
default       deployment.apps/image-builder            1/1     1            1           13d
default       deployment.apps/proxy                    1/1     1            1           13d
default       deployment.apps/registry                 1/1     1            1           13d
default       deployment.apps/messagebus               1/1     1            1           13d
default       deployment.apps/mysql                    1/1     1            1           13d
default       deployment.apps/ws-manager-bridge        1/1     1            1           13d
default       deployment.apps/server                   1/1     1            1           13d

NAMESPACE     NAME                                                DESIRED   CURRENT   READY   AGE
kube-system   replicaset.apps/local-path-provisioner-7c458769fb   1         1         1       13d
kube-system   replicaset.apps/coredns-854c77959c                  1         1         1       13d
kube-system   replicaset.apps/metrics-server-86cbb8457f           1         1         1       13d
default       replicaset.apps/blobserve-6bdd97d5dc                1         1         1       13d
default       replicaset.apps/dashboard-859c9bf868                1         1         1       13d
default       replicaset.apps/ws-manager-84486dc88c               1         1         1       13d
default       replicaset.apps/ws-scheduler-ff4d8d9dd              1         1         1       13d
default       replicaset.apps/content-service-656fd85977          1         1         1       13d
default       replicaset.apps/theia-server-568fb48db5             1         1         1       13d
default       replicaset.apps/minio-84fcc5d488                    1         1         1       13d
default       replicaset.apps/ws-proxy-5d5cd8fc64                 1         1         1       13d
default       replicaset.apps/image-builder-7d97c4b4fb            1         1         1       13d
default       replicaset.apps/proxy-85b684df9b                    1         1         1       13d
default       replicaset.apps/registry-65ff9d5744                 1         1         1       13d
default       replicaset.apps/messagebus-f98948794                1         1         1       13d
default       replicaset.apps/mysql-7cbb9c9586                    1         1         1       13d
default       replicaset.apps/ws-manager-bridge-69856554ff        1         1         1       13d
default       replicaset.apps/server-84cf48b766                   1         1         1       13d

A lot of the items will likely read 'Creating' or 'Initializing'. I didn't think ahead enough to grab the output while my instance was actually being set up, so the output above is what mine looks like when everything is done.

If everything went smoothly your output from that command should look like mine from above. You should also be able to visit your Gitpod instance in your web browser.
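
If you want to keep an eye on things while the pods come up, a command along these lines should work (run it on the k3s host; depending on your install you may need to call it as k3s kubectl, and watch is only there if your distro ships it):

watch kubectl get pods --all-namespaces

Once everything reports Running or Completed, you're good to move on.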

Gitpod

Assuming you've set up your authentication provider properly, you should be able to log in and start setting up workspaces!

Gitpod
Gitpod

While everything is working, there are a couple of things I want to figure out in order to have a "better" instance of Gitpod running.

  • Move the Docker volumes onto my 5.6TB RAID5. Right now they're sitting on my SSD, which I prefer to use for my boot drive and mostly static files.
  • Figure out how to set a better password for MySQL.
  • Figure out how to run a newer version of Gitpod. Right now it's using the latest tag which is version 0.7.0.
  • I seem to have issues redeploying the Docker container while leaving the volumes as-is. This causes me to have to start completely fresh which is no good.

Creating Your Own Registration/Login Page for CMaNGOS

Requirements

  • PHP 7.1+
  • Web server (ex. Apache or Nginx)
  • CMaNGOS instance

Installation

You can install the library via Composer:

composer require laizerox/php-wowemu-auth

Usage

Registering Accounts

First you'll want to use Composer's autoloader. Place this at the top of your script. This also pulls in the library we need.

require_once __DIR__ . '/vendor/autoload.php';
use Laizerox\Wowemu\SRP\UserClient;

Next create the verifier and salt values using the username and password which your user submitted on your registration form.

$client = new UserClient($username);
$salt = $client->generateSalt();
$verifier = $client->generateVerifier($password);

Once that information is generated, insert those values into the v and s fields of the realmd.account table.
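
If you're storing those with mysqli, a minimal sketch of that insert might look like the following (this assumes a $db mysqli connection and the realmd.account table, as used in the examples further down; your real insert will likely need the other account columns too):

/* Sketch only: store the generated salt and verifier for a new account. */
$stmt = $db->prepare('INSERT INTO account (username, v, s) VALUES (?, ?, ?)');
$stmt->bind_param('sss', $username, $verifier, $salt);
$stmt->execute();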

Logging In

First, use Composer's autoloader. Place this at the top of your script. As noted above, this also pulls in the library we need.

require_once __DIR__ . '/vendor/autoload.php';
use Laizerox\Wowemu\SRP\UserClient;

Next you'll need to generate your "verifier". Think of this as the hashed version of the password your user put into the password field of your login form.

$client = new UserClient($username, $saltFromDatabase);
$verifier = strtoupper($client->generateVerifier($password));

Next, compare that value with the value stored in your CMaNGOS realmd.account table. See the examples below for a fuller picture.
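
If you want to be a little more careful, PHP's hash_equals() gives you a timing-safe comparison. A small sketch, borrowing the $verifierFromDatabase value from the login example below:

/* Sketch: timing-safe comparison of the stored and freshly computed verifiers. */
if (hash_equals($verifierFromDatabase, strtoupper($client->generateVerifier($password)))) {
    /* Verifier matches -- treat the login as successful. */
}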

Examples

Register

This example goes over how a user can register via a web form.

<?php

/* register.php */

require_once __DIR__ . '/vendor/autoload.php';
use Laizerox\Wowemu\SRP\UserClient;

/* Connect to your CMaNGOS database. */
$db = new mysqli($dbHost, $dbUser, $dbPassword, $dbName);

/* If the form has been submitted. */
if (isset($_POST['register'])) {
    $username = $_POST['username'];
    $email = $_POST['email'];
    $password = $_POST['password'];

    /* Grab the users IP address. */
    $ip = $_SERVER['REMOTE_ADDR'];

    /* Set the join date. */
    $joinDate = date('Y-m-d H:i:s');

    /* Set GM Level. */
    $gmLevel = '0';

    /* Set expansion pack - Wrath of the Lich King. */
    $expansion = '2';

    /* Create your v and s values. */
    $client = new UserClient($username);
    $salt = $client->generateSalt();
    $verifier = $client->generateVerifier($password);

    /* Insert the data into the CMaNGOS database. */
    mysqli_query($db, "INSERT INTO account (username, v, s, gmlevel, email, joindate, last_ip, expansion) VALUES ('$username', '$verifier', '$salt',  '$gmLevel', '$email', '$joinDate', '$ip', '$expansion')");

    /* Do some stuff to let the user know it was a successful or unsuccessful attempt. */
}    

?>

You'll want to do some error checking and validation, but that'll be left up to you.
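
Since the example above drops the $_POST values straight into the query string, one improvement worth considering (my own variation, not something the library requires) is a prepared statement for that insert. A rough sketch:

/* Sketch: the same INSERT using placeholders instead of string interpolation. */
$stmt = $db->prepare('INSERT INTO account (username, v, s, gmlevel, email, joindate, last_ip, expansion) VALUES (?, ?, ?, ?, ?, ?, ?, ?)');
$stmt->bind_param('ssssssss', $username, $verifier, $salt, $gmLevel, $email, $joinDate, $ip, $expansion);
$stmt->execute();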

The following is a very basic HTML form that can be used for registering an account.

<form action="/register" method="post">
    <input type="text" name="username" placeholder="Username">
    <input type="email" name="email" placeholder="Email Address">
    <input type="password" name="password" placeholder="Password">
    <?php $register = sha1(time()); ?>
    <input type="hidden" name="register" value="<?php echo $register; ?>">
    <button type="submit">Register</button>
</form>

Login

<?php

/* login.php */

require_once __DIR__ . '/vendor/autoload.php';
use Laizerox\Wowemu\SRP\UserClient;

/* Connect to your CMaNGOS database. */
$db = new mysqli($dbHost, $dbUser, $dbPassword, $dbName);

/* Function to get values from MySQL. */
function getMySQLResult($query) {
    global $db;
    return $db->query($query)->fetch_object();
}

/* If the form has been submitted. */
if (isset($_POST['login'])) {
    $username = $_POST['username'];
    $password = $_POST['password'];

    /* Get the salt and verifier from realmd.account for the user. */
    $query = "SELECT s,v FROM account WHERE username='$username'";
    $result = getMySQLResult($query);
    $saltFromDatabase = $result->s;
    $verifierFromDatabase = strtoupper($result->v);

    /* Setup your client and verifier values. */
    $client = new UserClient($username, $saltFromDatabase);
    $verifier = strtoupper($client->generateVerifier($password));

    /* Compare $verifierFromDatabase and $verifier. */
    if ($verifierFromDatabase === $verifier) {
        /* Do your login stuff here, like setting cookies/sessions... */
    }
    else {
        /* Do whatever you wanna do when the login has failed, send a failure message, redirect them to another page, etc... */
    }
}

?>

Again, you'll want to add in your own error checking and validation, but this should get you started. Below is a basic HTML form that can be used for logging in.

<form action="/login" method="post">
    <input type="text" name="username" placeholder="Username">
    <input type="password" name="password" placeholder="Password">
    <?php $login = sha1(time()); ?>
    <input type="hidden" name="login" value="<?php echo $login; ?>">
    <button type="submit">Sign In</button>
</form>
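
The same caveat about user input applies to the login lookup. If you want to parameterize it, a sketch along these lines (assuming the mysqlnd driver so get_result() is available) could replace the raw query in the example above:

/* Sketch: fetch s and v for the submitted username with a prepared statement. */
$stmt = $db->prepare('SELECT s, v FROM account WHERE username = ?');
$stmt->bind_param('s', $username);
$stmt->execute();
$result = $stmt->get_result()->fetch_object();
$saltFromDatabase = $result->s;
$verifierFromDatabase = strtoupper($result->v);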

If you find any defects when using the library, please open a new issue on the Laizerox/php-wowemu-auth repository. If you need further assistance we can try assisting you in the #offtopic channel of the CMaNGOS Discord server.

Upgrading a 2009 Mac Pro to macOS 10.13 High Sierra

2010 Mac Pro - High Sierra

Believe it or not, the 2009 Mac Pro, which outperforms some of the latest Mac models, has been blacklisted from having macOS 10.13 High Sierra installed on it. Why? Who knows... But with a little nerd skill you can get your 2009 Mac Pro updated to the latest version of macOS (which at the time of writing is 10.13 High Sierra).

I previously had gotten 10.12 installed, which required a little hackery but was well worth it to get my system running on the latest version of macOS. High Sierra requires a firmware update so that your 2009 model is seen as a 2010 model. Thankfully this is relatively easy to do if you have some spare time on your hands and a little patience. Here's what you're going to need:

  • A spare hard drive to install macOS Mavericks on
  • A bootable USB installer for macOS Mavericks
  • The firmware patching tool that flashes the 2009 Mac Pro to the 2010 firmware
  • The Mac Pro EFI firmware updater from Apple
  • The macOS 10.13 High Sierra installer from the App Store

I initially had some trouble running the firmware patching tool on my Sierra installation, so I did some further digging and found that other folks had success running it from Mavericks. The error I was getting was 'Error 5570'. I followed a guide to make a bootable USB of macOS Mavericks. I had previously purchased a copy of Mavericks from the App Store, so I just downloaded it using that method.

So once you've gotten your bootable USB set up, get Mavericks installed onto your spare hard drive. Then you'll want to grab the firmware patching tool and the EFI updater from Apple. I am not positive why this was necessary, but I had to open/mount the EFI updater from Apple and then run the firmware patching tool. It's possible this post explains why, but I am not sure if that's it. Anyways, get the EFI updater mounted, then run the firmware patching tool. The only button on the firmware patcher that should be clickable is 'Upgrade to 2010 Firmware'. Click that and the tool will do its thing. If all went well you should get a little informational popup telling you to shut your Mac Pro down. Follow through on that. If that works out, you'll hear a long tone while starting up your system and the firmware will be updated!

2010 Mac Pro - High Sierra 2

You can open the System Profiler application, and on the page it opens to you'll see the firmware information. The Boot ROM Version should read 'MP51.007F.B03'. Great! Now shut down the system and put your original hard drive back in. Snag the free macOS 10.13 High Sierra installer from the App Store and run it. It may tell you an additional firmware update is needed. Follow through on that. Once booted back into macOS, you can visit System Profiler and your Boot ROM Version should read 'MP51.0084.B00'. The High Sierra installer application should also already be open. Go ahead and install!

2010 Mac Pro - High Sierra 3

Once all is said and done and you're booted back to your desktop, you should have the latest firmware for the system and you'll be running at least macOS 10.13.2 High Sierra. I have yet to test out updates to the OS via the App Store, but if they've been anything like previous unsupported installs they'll work perfectly fine! Good luck!

My End Game Keyboard

In the fall of last year I was with one of my co-workers who brought up mechanical keyboards. I'd been somewhat familiar with them, but only at the Razer, Corsair, CoolerMaster level. He mentioned he had one of Input Club's K-Type boards on order, so I did a little looking into it and it definitely sounded cool. I almost wanted one myself; however, I learned it was a group buy and I wouldn't be able to get one. Shucks!

I continued researching and learning everything I could about mechanical keyboards and the community. I learned all about different types of switches, brands, PCBs, soldering, keysets, keycap profiles, creating your own custom layouts and firmwares, and so much more. During all this time and in browsing over Reddit and Discord I came across one keyset which I immediately knew I wanted. It was SA Camping.

SA Camping 1

Unfortunately for me, this was a relatively rare set and generally high priced, so I mostly just forgot about it and moved on. However, a week or so ago, a local (actually one of the guys who runs some of the keyboard meetups) posted on r/mechmarket the SA Camping set he was looking to sell. I thought about it for a few minutes, but decided: it's a set I've always dreamed of having, and it's being sold right here in Houston, so why not?!

SA Camping 2

Unfortunately, in all my excitement I didn't really check out the keyset closely enough. I've been sticking with 60% boards, and more specifically the DZ60 PCBs with a 2U left shift. The SA Camping is a sculpted set and doesn't come with an R4 2U shift key. So I ended up getting a new DZ60 PCB and the 2.25U plate so I could put together a new build. Thankfully this still allows me to have arrow keys; however, in place of my usual 1U right shift, I have a 1.75U shift key (which I've actually just programmed to be a '/ ?' key).

My DZ60 with SA Camping

This build is something I consider my end-game. It includes the SA Camping keyset which I've always wanted, the DZ60 PCB (with directional arrow keys), and Kailh Box Jades. Using it has been awesome so far, I definitely enjoy the sculpted SA set along with the Box Jades. I'll likely post another picture when I put on the other novelty keycaps.

The Lich King Build

Alright so it's not an entirely new build, but I did update several components on my PC this past weekend. While I definitely loved the Fractal case I had, I wanted to get something even more beautiful and decided on the NZXT H700i case. I also updated my CPU cooler to an NZXT Kraken X62 along with 3x AER RGB 140mm fans for further cooling (and awesome looks)!

The Lich King

Pulling everything out of the old case was relatively simple; however, I may have overestimated the sizing of everything, as I had some trouble putting it all back together. I believe the NZXT Kraken X52 with 120mm fans would have sufficed, and the 3x 140mm AER RGB fans likely would have been fine at 120mm instead. In fact, I am thinking about buying a 3-pack of the 120mm fans for the front of the case and using the other two 140mm fans elsewhere. The Kraken X62 is also pretty snug and touches one of the RAM modules; hopefully that does not cause any issues. The radiator itself is also quite snug and touches one of the heat sink covers on the motherboard; again, I hope this does not cause any issues. All in all, though, the H700i is an amazing case, looks wonderful, and is the new home to the Lich King himself. This build runs much cooler than before; in fact, at load it still runs cooler than the previous build did at idle. I'm definitely satisfied with this "small" update to my gaming PC!

Building a Custom Keyboard

One evening while I was working with my co-worker he brought up mechanical keyboards and the fact he was getting Input Club's K-Type soon. This prompted me to do some further research and ~6 months later, I've put together 10 or so custom built keyboards. I've learned a ton about the different pieces and parts that make up keyboards, what's involved in building them, and how to program them to use your own unique layouts. I've also learned that I love 60% keyboards. They have all the keys I need and none of the keys I don't.

The following is one of my recent builds. The parts used for this build are:

  • PCB: DZ60
  • Plate: 2U left shift
  • OEM Stabilizers
  • DSA Granite keycaps
  • Kailh Burnt Orange switches
  • 60% Transparent plastic case
  • 1.8mm LEDs
  • MiniUSB cable
  • Finish Line Extreme Fluoro Grease

Picture 1

While it's recommended to connect your PCB to your computer and test the keys, I did not do that for this build. Should you be so inclined, I recommend this website. You can use a paper clip to test each switch mount point. Thankfully, every PCB I've gotten has been free of issues, which is likely why I've been skipping this step for the last few builds.

The next step I take is lubing the stabilizers. This keeps things a bit quieter when you're hitting keys like the space bar, the shift keys, enter, and backspace. For my build I only needed stabilizers on the space bar, enter, and left shift. Put the lube where the metal touches the plastic, and only a little; don't go crazy wild. In all honesty, I've likely used too much, as seen in the image. However, I've yet to see any issues from this, and it's definitely kept things moving nicely.

I've also seen this happen far too many times: don't forget to put your stabilizers on your PCB!!! People forget these, solder in all their switches, and only then realize it. This will require you to desolder everything. Not fun.

Picture 2

On this specific build, I wanted to use one LED for my artisan keycap. LEDs are quite easy to add. Of note, the longer leg is the positive side so make sure to put that one into the '+' hole. Here's the LED soldered in:

Picture 3

You'll see the keycap and LED in a little bit! It's really cool! Of note, normally I would add LEDs (if the build was going to have them) after the switches. However, since this build is using box switches, the LEDs need to go in first, underneath the switch.

Alright, once I've gotten the stabilizers snapped into place, I place the plate over the PCB and put switches into the 4 corners. This helps me not only keep the plate in place but also lets me start mapping the switches and keys for the bottom row. Trust me, it's no fun desoldering keys because you've put them in the wrong places! You don't need to push the keycaps all the way down on the switches, just make sure they look right and are in the correct placement.

Picture 4

Next, once I've made sure I've got my bottom row switches in the right places, I will solder those in along with the 4 corners. Again, this just ensures the plate is kept in place and isn't moving all over the place while I add in the other switches. It just adds a little extra stability and will make life easier when finishing up the build.

Now with the corners and bottom row soldered in, I finish adding the rest of the switches to the remaining holes. I'll usually slide some tweezers under the plate, with one arm on each side of the switch; this holds the plate up and helps the switch pop in fully. I can take a short video of this if anyone is interested. You can put all your remaining switches in now, or do a row, solder it, then do the next row until you're finished; it's completely up to you.

Picture 5

Make sure your switches are straight and aligned properly. You don't want crooked keys; it's annoying and doesn't look good.

Picture 6

Picture 7

Whichever way you've decided to do it, the basics are just popping your switches into the plate and PCB and soldering them to the board. This step is likely the longest and most difficult. I will admit, though, having never soldered before, it's very easy to pick up. If you're concerned, you can find some old electronics or a broken PCB to practice on first.

Next, once my soldering is done I put my finished PCB into my case, and screw it in. Once that's secure I add my keycaps.

Picture 8

Picture 9

Picture 10

If you want a custom layout you can definitely do so. In my case, since the default firmware is for a Windows keyboard, I had to do a little swapping of the keys so it would work properly in a macOS environment. If anyone wants my Windows DZ60 or macOS DZ60 layouts, I'd be happy to update this post with them! I can also do a post on how to set up the firmware and flash your board if there's any interest.

At this point, you should be all done and have your very own, hand-built custom keyboard!

As noted previously, I used an artisan keycap for my Esc key. It's an alien, or a Xenomorph if you will. It's the first brand new artisan keycap I've ever bought and well worth it. Unfortunately, my iPhone 7+ cannot capture LEDs properly, likely due to the shutter speed and the light. But here's an image of the keycap up close.

Picture 11

So there you have it, a general overview of building your own custom keyboard. Find below some resources and product links. KBDFans, NovelKeys, and PimpMyKeyboard are my go-to places for keyboards and parts; I highly recommend them!

Part Links
Resources