Got the multiple-master-node (k8s terminology) problem solved. Now that etcd is running on all nodes, I can lose any one of them and the cluster keeps running. In the end it only took changing a few parameters in the startup script: enabling embedded etcd and starting the other nodes in server mode rather than agent mode. (Before, I had a single master node, and if it failed the cluster was in bad shape.)
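For anyone curious, with the stock k3s install script the equivalent change looks roughly like this; the token is a placeholder, and the exact flags may be wired into your startup script differently:

# First server: start with the embedded etcd datastore enabled
$ curl -sfL https://get.k3s.io | sh -s - server --cluster-init

# Remaining nodes: join in server mode (not agent mode) against the first one;
# the token comes from /var/lib/rancher/k3s/server/node-token on the first server
$ curl -sfL https://get.k3s.io | K3S_TOKEN=<token> sh -s - server \
    --server https://10.0.1.234:6443

After that, kubectl shows all three nodes carrying the control-plane, etcd, and master roles: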
$ kubectl get nodes -o wide
NAME          STATUS   ROLES                       AGE    VERSION        INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                         KERNEL-VERSION            CONTAINER-RUNTIME
k3s-home-01   Ready    control-plane,etcd,master   195d   v1.29.4+k3s1   10.0.1.234    <none>        Armbian 24.2.1 bookworm          5.10.160-legacy-rk35xx    containerd://1.7.15-k3s1
k3s-home-02   Ready    control-plane,etcd,master   44d    v1.29.4+k3s1   10.0.1.235    <none>        Debian GNU/Linux 12 (bookworm)   6.1.0-18-amd64            containerd://1.7.15-k3s1
k3s-home-03   Ready    control-plane,etcd,master   28m    v1.29.4+k3s1   10.0.1.236    <none>        Debian GNU/Linux 12 (bookworm)   6.1.0-21-amd64            containerd://1.7.15-k3s1