#dovecot

bkoehn@diaspora.koehn.com

It took way longer than it should have, but I eventually built a Dovecot plugin that adds support for scrypt (a password-hashing algorithm). My poor cloud servers take too long to compute Argon2 hashes (which are harder to attack than other algorithms, but also costlier to produce), hence the plugin.
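For context, once the plugin is loaded the scheme works like any other Dovecot password scheme. A minimal sketch, assuming the plugin registers itself under the name SCRYPT:

```sh
# Generate a password hash with the new scheme
doveadm pw -s SCRYPT -p 'correct horse battery staple'

# Test an existing hash against a password
doveadm pw -t '{SCRYPT}<hash>' -p 'correct horse battery staple'
```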

Some kind soul had already written the code, but it didn't work on modern versions of Dovecot, and I wanted it built into a Debian package I could add to a Dovecot Docker image. So off to work.

Gitea has a built-in registry for Debian packages, and I used Gitea Actions to automate the build, packaging, and upload. Then I tweaked my Dovecot image to include my new repository and install the plugin from there.
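A sketch of the pipeline, with the host, owner, distribution, and secret names invented for illustration; the upload endpoint is Gitea's Debian registry API:

```yaml
# .gitea/workflows/deb.yaml — sketch
name: build-deb
on: [push]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build the plugin and assemble the .deb
        run: |
          make
          dpkg-deb --build --root-owner-group pkgroot dovecot-scrypt.deb
      - name: Push to the Gitea Debian registry
        run: |
          curl --fail --user "bkoehn:${{ secrets.PKG_TOKEN }}" \
            --upload-file dovecot-scrypt.deb \
            https://git.example.com/api/packages/bkoehn/debian/pool/bookworm/main/upload
```

Consuming it from the Dockerfile is then a matter of trusting the registry's signing key, adding the repo, and installing:

```dockerfile
RUN curl -fsSL https://git.example.com/api/packages/bkoehn/debian/repository.key \
      -o /etc/apt/keyrings/gitea.asc \
 && echo "deb [signed-by=/etc/apt/keyrings/gitea.asc] https://git.example.com/api/packages/bkoehn/debian bookworm main" \
      > /etc/apt/sources.list.d/gitea.list \
 && apt-get update && apt-get install -y dovecot-scrypt
```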

I had never built a Debian package before; it turns out they're quite simple most of the time: just the files and directories you want to install, plus some metadata files indicating dependencies, architecture, version, etc.
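Concretely, a minimal binary package is little more than a directory tree (all names here are illustrative):

```
dovecot-scrypt/
├── DEBIAN/
│   └── control
└── usr/lib/dovecot/modules/auth/
    └── libscrypt_plugin.so
```

where DEBIAN/control carries the metadata:

```
Package: dovecot-scrypt
Version: 1.0.0
Architecture: amd64
Maintainer: Brad Koehn <brad@example.com>
Depends: dovecot-core, libsodium23
Description: scrypt password scheme plugin for Dovecot
```

and `dpkg-deb --build --root-owner-group dovecot-scrypt` spits out the .deb.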

The hardest bit was understanding the API change and fixing the code. Along the way I learned more about Linux libraries and the tools for inspecting them.
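The inspection tools in question are the standard ELF ones; roughly how they get used (the plugin filename is illustrative):

```sh
# Which shared libraries the plugin links against, and whether the
# dynamic linker can resolve them all
ldd libscrypt_plugin.so

# The DT_NEEDED entries baked into the .so
readelf -d libscrypt_plugin.so | grep NEEDED

# Undefined dynamic symbols — these must be supplied at load time by
# Dovecot or libsodium, so a renamed symbol here means an API change
nm -D --undefined-only libscrypt_plugin.so
```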

All in all it was a fun side project to tackle. I learned the #Dovecot #API, #LibSodium, #Gitea Actions, #Linux libraries, and #Debian packaging. Not a bad way to spend a slow time in my work schedule.

bkoehn@diaspora.koehn.com

It took way too long, but my #ChatOps quest continues. Today I finished adding #Matrix support to my #Dovecot Sieve scripts, so that things like #DKIM and MTA-STS TLS reports go to a Matrix channel rather than sitting in a mail folder. Basically I now have an email 👉️ Matrix bridge.
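The Sieve half is short. A sketch, assuming Pigeonhole's vnd.dovecot.pipe extension; the script name and subject matches are illustrative:

```
require ["vnd.dovecot.pipe"];

# Hand report mail to a script that posts it to Matrix. Because pipe
# (without :copy) cancels the implicit keep, the message never lands
# in a mail folder.
if anyof (header :contains "subject" "Report Domain:",
          header :contains "subject" "TLS Report") {
    pipe "matrix-post.sh";
}
```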

It took a fair amount of fussing about, mostly because (a) it takes matrix-commander (a Matrix CLI) a long time to post a message to Synapse (read: more than ten seconds), (b) Dovecot's documentation for altering the settings for external scripts is byzantine, and (c) rather than cramming matrix-commander into my already bloated Dovecot #Docker image, I wanted to use the Docker image they provide, which meant working out a way to invoke matrix-commander from another container in the same pod.
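For the record, the settings in question live in Pigeonhole's extprograms plugin. A sketch (paths illustrative); the timeout is the interesting one, since if memory serves piped scripts are killed after 10 seconds by default, which a slow matrix-commander run would blow right past:

```
# /etc/dovecot/conf.d/90-sieve.conf — sketch
plugin {
  sieve_plugins = sieve_extprograms
  sieve_extensions = +vnd.dovecot.pipe
  sieve_pipe_bin_dir = /usr/local/lib/dovecot/sieve-pipe
  sieve_pipe_exec_timeout = 60s
}
```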

But now it’s done, and I have another vector for #admin alerts that I can coalesce into a single place for easy review.

bkoehn@diaspora.koehn.com

Alright, after a bit more puttering about I've got my #k3s #Kubernetes cluster networking working. Details follow.

From an inbound perspective, all the nodes in the cluster are completely unreachable from the internet, firewalled off using #Hetzner's firewalls. This provides some reassurance that they're tougher to hack, and makes it harder for me to mess up the configuration. All the nodes are on a private network that allows them to communicate with one another, and that's their only form of communication. All outbound traffic from the nodes is allowed. The servers are labeled in Hetzner's console so that the firewall rules are applied to them automatically.
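In hcloud CLI terms, the firewall side looks roughly like this; the names and label are mine, and the flags are from memory, so check `hcloud firewall --help` before trusting them:

```sh
# A firewall with no inbound rules at all: Hetzner denies all inbound
# by default once a firewall is attached, and leaves outbound open
hcloud firewall create --name deny-inbound

# Attach it to every server carrying the cluster label
hcloud firewall apply-to-resource deny-inbound \
  --type label_selector --label-selector cluster=k3s
```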

In front of the cluster is a Hetzner load balancer that is configured to forward public internet traffic to the nodes on the private network (meaning the load balancer has public IPv4 and IPv6 addresses, and a private IPv4 address that it uses to communicate with the worker nodes). The load balancer does liveness checks on each node and keeps unresponsive nodes from receiving requests. It uses the PROXY protocol to preserve source #IP information. The same Hetzner server labels are used to add worker nodes to the load balancer automatically.
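Again in hcloud terms, hedged the same way (flags quoted from memory, placeholders throughout):

```sh
# Public-facing load balancer; targets are picked up by label and
# reached over the private network
hcloud load-balancer create --name ingress --type lb11 --location fsn1
hcloud load-balancer add-target ingress \
  --label-selector cluster=k3s --use-private-ip

# Forward 443, with PROXY protocol so nginx still sees the client IP
hcloud load-balancer add-service ingress \
  --protocol tcp --listen-port 443 --destination-port 443 --proxy-protocol
```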

The traffic is forwarded to an #nginx DaemonSet that k3s keeps running on every node in the cluster (for high availability); the pods of that DaemonSet keep themselves in sync using a ConfigMap, so tweaks to the nginx configuration are applied automatically. Nginx listens on ports on the node's private IP, handles #TLS termination for #HTTP traffic, and works with cert-manager to maintain TLS certificates for websites, using #LetsEncrypt for signing. TLS termination for #IMAP and #SMTP is handled by #Dovecot and #Postfix, respectively. Nginx forwards (mostly) cleartext to the appropriate service to handle the request, using Kubernetes Ingress resources to bind ports, hosts, paths, etc. to the correct workloads.
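Two snippets make that concrete, assuming the ingress-nginx controller (names and hosts illustrative). The ConfigMap key `use-proxy-protocol` is what makes nginx trust the load balancer's PROXY header:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
  use-proxy-protocol: "true"    # real client IPs from the Hetzner LB
---
# A typical Ingress; cert-manager watches the annotation and keeps a
# Let's Encrypt certificate current in the named Secret
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: www
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt
spec:
  ingressClassName: nginx
  tls:
    - hosts: ["www.example.com"]
      secretName: www-tls
  rules:
    - host: www.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: www
                port:
                  number: 8080
```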

The cluster uses #Canal as its #CNI to handle pod-to-pod networking. Canal is a hybrid of Calico and Flannel that is both easy to set up (basically a single YAML manifest) and powerful to use, allowing me to set network policies that only permit pods to communicate with the other pods they need, effectively acting as an internal firewall in case a pod is compromised. All pod communication is managed using standard Kubernetes Services, which behind the scenes simply program #iptables chains to move traffic to the correct pod.
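A sample policy of that shape (namespace and labels illustrative): only pods in the mail namespace may reach Postgres, and only on its port; everything else is dropped:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: postgres-only-from-mail
  namespace: db
spec:
  podSelector:
    matchLabels:
      app: postgres
  policyTypes: [Ingress]
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: mail
      ports:
        - protocol: TCP
          port: 5432
```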

The configuration of all this was a fair amount of effort, owing to Kubernetes' inherent flexibility in the kinds of environments it supports. But by integrating it with the capabilities that Hetzner provides, I can fairly easily create an environment for running workloads that's redundant and highly secure. I had to turn off several k3s "features" to get it to work: disabling #Traefik, #Flannel, and the strange built-in service load balancer (Klipper), and forcing k3s to use only the private network rather than a public one. Still, it's been easier to work with than a full-blown Kubernetes installation, and it uses considerably fewer server resources.
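Concretely, the server install ends up looking something like this; the private IP is a placeholder, and the flags are k3s's own switches for Traefik, ServiceLB, Flannel, and the built-in network-policy controller (Canal takes over the last two):

```sh
curl -sfL https://get.k3s.io | sh -s - server \
  --disable traefik \
  --disable servicelb \
  --flannel-backend=none \
  --disable-network-policy \
  --node-ip 10.0.0.2 \
  --advertise-address 10.0.0.2
```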

Next up: storage! Postgres, object storage, and filesystems.