#k8s
I switched to a default deny policy on both my home LAN and DMZ. I don’t know how big a PITA it will be, but I’ll find out soon.
Nothing is allowed inbound to either network; all connections to the DMZ pass through HAProxy before being distributed to the ingress controllers on the various nodes of the #k8s cluster.
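A minimal HAProxy front end for that kind of setup might look like the following TCP-passthrough sketch (node IPs and backend names are illustrative; TLS is left to be terminated by the ingress controllers, not HAProxy):

```
frontend https_in
    bind *:443
    mode tcp
    option tcplog
    default_backend k8s_ingress

backend k8s_ingress
    mode tcp
    balance roundrobin
    # ingress controllers on the cluster nodes (example IPs)
    server k3s-01 10.0.1.234:443 check
    server k3s-02 10.0.1.235:443 check
    server k3s-03 10.0.1.236:443 check
```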
Got the multiple-master-node ("master" being k8s terminology) problem solved. Now that etcd is running on all nodes, I can lose any one of them and the cluster keeps running. Really I just had to change a few parameters in the startup script to enable etcd and to start the other nodes in server mode rather than agent mode. (Before, I had one node as master, and if it failed the cluster was in bad shape.)
$ kubectl get nodes -o wide
NAME          STATUS   ROLES                       AGE    VERSION        INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                         KERNEL-VERSION           CONTAINER-RUNTIME
k3s-home-01   Ready    control-plane,etcd,master   195d   v1.29.4+k3s1   10.0.1.234    <none>        Armbian 24.2.1 bookworm          5.10.160-legacy-rk35xx   containerd://1.7.15-k3s1
k3s-home-02   Ready    control-plane,etcd,master   44d    v1.29.4+k3s1   10.0.1.235    <none>        Debian GNU/Linux 12 (bookworm)   6.1.0-18-amd64           containerd://1.7.15-k3s1
k3s-home-03   Ready    control-plane,etcd,master   28m    v1.29.4+k3s1   10.0.1.236    <none>        Debian GNU/Linux 12 (bookworm)   6.1.0-21-amd64           containerd://1.7.15-k3s1
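For anyone curious, the commands for this are roughly the following, per the k3s HA docs (hostnames and the token are placeholders):

```
# first node: start in server mode and initialize the embedded etcd cluster
curl -sfL https://get.k3s.io | K3S_TOKEN=<secret> sh -s - server --cluster-init

# additional nodes: join in server mode (not agent mode)
curl -sfL https://get.k3s.io | K3S_TOKEN=<secret> sh -s - server \
    --server https://<first-server>:6443
```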
I use a single #Postgres instance in my #k8s clusters, with separate users, passwords, and databases in that instance for each application. It’s simpler to manage and reduces complexity, and my databases don’t benefit from being spread across nodes (they’re small and place few demands on the system). So the same instance hosts the databases for synapse and diaspora, for example.
Today I discovered that objects in the public schema of each database (which is where they're created by default) are accessible to all users of the instance unless access is specifically revoked. So the system wasn't as secure as I thought it was.
You can change this behavior with REVOKE ALL ON DATABASE <name> FROM PUBLIC; then only the owner and users explicitly granted access will have access. You can also create objects in non-public schemas, but that can be challenging.
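A sketch of the lockdown, assuming a database named appdb owned by role app_user (both names are hypothetical stand-ins for the per-application databases and users mentioned above):

```sql
-- Revoke the default privileges PUBLIC gets on every database (CONNECT, CREATE, TEMP)
REVOKE ALL ON DATABASE appdb FROM PUBLIC;

-- Re-grant connect to the one role that should use this database
GRANT CONNECT ON DATABASE appdb TO app_user;

-- Optionally also stop everyone from creating objects in the public schema
REVOKE CREATE ON SCHEMA public FROM PUBLIC;
```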
One more thing to automate.
More fun with #just recipes. This one pulls together a bunch of tasks I need to do when I create a bucket, account, user, and policy for s3, storing the credentials in 1Password. I’ll probably have it output a #k8s secret as well.
# create a new bucket, account, user, and policy
new-bucket-account bucket:
    #!/usr/bin/env bash
    set -euo pipefail
    mc mb "$TARGET/{{ bucket }}"
    USER="$(pwgen 20 1)"
    PASSWORD="$(pwgen 40 1)"
    mc admin user add "$TARGET" "$USER" "$PASSWORD"
    ACCOUNT="{{ bucket }} s3 account"
    op item create --vault k8s --title "$ACCOUNT" --tags k8s,minio - username="$USER" password="$PASSWORD"
    mc admin policy create "$TARGET" "{{ bucket }}" <(sed 's/BUCKET/{{ bucket }}/' < policy-template.json)
    mc admin policy attach "$TARGET" "{{ bucket }}" --user "$USER"
    echo "added \"$ACCOUNT\" to 1password"
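For context, policy-template.json is a standard IAM-style MinIO policy with a literal BUCKET placeholder that the sed call substitutes; a sketch of what it might contain (this template is my assumption, scoping full access to the one bucket):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:*"],
      "Resource": [
        "arn:aws:s3:::BUCKET",
        "arn:aws:s3:::BUCKET/*"
      ]
    }
  ]
}
```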
My first certificate renewal in my still-fresh Kubernetes infrastructure at least went off without incident. Call me Captain SOPS!
Couldn't sleep anymore from 2:00 on. "Senile bed-flight," who doesn't know it.
Got up, made coffee, and had a look at what cool talks #CLT23 still has to offer. Ah, something about #kubernetes and #security. Very nice.
That was a mistake! Now I'll probably never be able to sleep again...
#k8s #CLT #ccc #Chaos #PoweredByRSS
♲ Chaos Computer Club - Media (Unofficial) - 2023-03-11 23:00:00 GMT
Hacking Kubernetes Cluster and Secure it with Open Source (clt23)
https://mirror.selfnet.de/CCC/events/clt/2023/h264-hd/clt23-99-deu-Hacking_Kubernetes_Cluster_and_Secure_it_with_Open_Source_hd.mp4
Devoted some time to continuing to tear down my #Kubernetes #k8s infrastructure at #Hetzner and move it to my #k3s infrastructure at #ssdnodes. It's pretty easy to move everything; the actual work involves moving files and databases, plus a bit of downtime. As I wind down the old infrastructure I can save some money by shutting down nodes as the workload decreases. I've shut down two nodes so far. Might free up another tonight if I can move #Synapse and Diaspora.
Last night I installed the new #Canal #CNI (#Calico + #Flannel) on the new #k3s #Kubernetes cluster in the same way I've always done it on the old #k8s cluster, neglecting the clear instructions to apply any changes from the original configuration to the new one. Those changes included little things like telling Flannel which interface to use, what IP range to allocate, and other trivialities. Wow did I blow that cluster to bits. Following the directions and deleting a few very confused pods fixed the issue.
Anyway, it's working now, and I have a better process in place to manage CNI upgrades.
Decided to spin up a local k3s cluster running on my (ARM64) laptop. Another interesting bit about the Docker environment is how easy it is to migrate configurations across platforms.
I'll add that spinning up a cluster in k3s is just a single command per node: one to bootstrap the first server node and one for each node that joins it. It's trivial to automate and completes in seconds.
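The commands in question, per the k3s quick-start (the server address and token are placeholders; the node token is printed on the server at /var/lib/rancher/k3s/server/node-token):

```
# on the first (server) node
curl -sfL https://get.k3s.io | sh -

# on each joining agent node
curl -sfL https://get.k3s.io | K3S_URL=https://<server>:6443 K3S_TOKEN=<node-token> sh -
```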
Now I'm messing around with #ceph for managing high-availability #storage (filesystem and #s3) and #stolon for high-availability #postgres.
Hey everyone, I’m (sorta) #newhere. I’m interested in #foss, #golang, #ipfs, #k8s, #ambient #mathrock, and #matrix.
It’s been a while since I’ve directly used diaspora, although I have indirectly via #hubzilla.
Just set up my own pod using a #helm chart I’ve been working on. Excited to be back!
Qwant, Microsoft and Vivatech
Tristan Nitot shares a few points about the renewed partnership between #qwant and #microsoft. In short, we'd already read the same argument on the Fediverse yesterday: it's for indexing; personal data isn't involved...
Between the technicians explaining that there was hardly any choice (well yes, if you want to do #kubernetes, there's no choice... and why must everyone do #k8s today? Because it's the best, man, blah blah and all that) and the critics (it's still Microsoft; was there really no way to work with #ovh or #gandi, with other container orchestrators?), well, I've pretty much made my choice...
I'd add that:
The end is in the means as the tree is in the seed (Gandhi)
or that there are no small, innocent renunciations:
When the Nazis came for the communists, I said nothing; I was not a communist.
When they locked up the social democrats, I said nothing; I was not a social democrat.
When they came for the trade unionists, I said nothing; I was not a trade unionist.
When they came for me, there was no one left to protest.
source: "First they came…" is a quotation from Pastor Martin Niemöller
By the way, is #Vivatech a beauty contest, or has the government come to play at being modern? Isn't it crazy that a service becomes respectable the moment it strengthens its partnership with #Microsoft??