#postgres

bkoehn@diaspora.koehn.com

I use a single #Postgres instance in my #k8s clusters, with separate users, passwords, and databases in that instance for each application. It's simpler to manage, and my databases don't benefit from being spread across nodes (they're small and place few demands on the system). So the same instance hosts the databases for synapse and diaspora, for example.

Today I discovered that, by default, the CONNECT privilege on every database is granted to PUBLIC (i.e., to every role), and every user can create and use objects in each database's public schema (which is where objects are created by default) unless those privileges are specifically revoked. So the system wasn't as secure as I thought it was.

You can change this behavior with REVOKE ALL ON DATABASE <name> FROM PUBLIC;. After that, only the owner and roles explicitly granted CONNECT can reach the database. You can also create objects in non-public schemas, but that can be challenging.
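As a sketch, the per-application lockdown looks roughly like this (the role name synapse_user is a hypothetical example, not my actual config):

```sql
-- Stop every role from connecting to the app's database by default.
REVOKE ALL ON DATABASE synapse FROM PUBLIC;

-- Re-grant connect rights only to the application's own user.
GRANT CONNECT ON DATABASE synapse TO synapse_user;

-- Optionally, while connected to that database, also lock down
-- the public schema so other roles can't create objects in it.
REVOKE ALL ON SCHEMA public FROM PUBLIC;
GRANT ALL ON SCHEMA public TO synapse_user;
```

Repeat per application, and new databases stay isolated from each other's users.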

One more thing to automate.

bkoehn@diaspora.koehn.com

I started messing about with #postgres full-text search (FTS) in #Diaspora.

First, create a search column:

alter table posts add column fts_text_en tsvector generated always as (to_tsvector('english', text)) stored;

Then, create a GIN index on the new column:

create index posts_fts_text_en_idx on posts using gin(fts_text_en);

Next, query the column:

select id from posts where fts_text_en @@ websearch_to_tsquery('english', 'Linux');

Fast, simple. Prolly should integrate it into the search bar.
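If it does end up behind the search bar, ranking results by relevance is a small extension of the same query (the limit and search term here are just examples):

```sql
-- Rank matches by relevance instead of returning them unordered.
select id, ts_rank(fts_text_en, q) as rank
from posts, websearch_to_tsquery('english', 'Linux') as q
where fts_text_en @@ q
order by rank desc
limit 20;
```

ts_rank uses the same tsvector column, so the GIN index still handles the match and only the matching rows get scored.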

bkoehn@diaspora.koehn.com

I upgraded the #postgres instance that the pod uses to WAL-G 2.0.0, which looks to be a fairly substantial upgrade. For me it initially meant upgrading the encryption key used for backups from 20 bytes to 25 bytes (which pretty much necessitated starting a new stream of backups); everything else seems to be about the same.

WAL-G provides point-in-time recovery (PITR) for Postgres by storing backups on S3-compatible storage (including the FOSS Minio S3 server). It's continuous: it leverages Postgres' built-in write-ahead log (WAL) as the source data for the backups, then compresses and (optionally) encrypts the data before sending it to your S3 storage.

If you're running a #Diaspora pod, you owe it to yourself and to your users to maintain reliable offsite backups. Once WAL-G is configured you can pretty much forget about it, except for occasional testing to make sure the backups are recoverable.
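For reference, a minimal WAL-G hookup looks something like this (the bucket, endpoint, and credential values are placeholders, not my actual config):

```
# Environment for the postgres process / wal-g:
WALG_S3_PREFIX=s3://my-bucket/pg-backups
AWS_ENDPOINT=https://minio.example.com
AWS_ACCESS_KEY_ID=...
AWS_SECRET_ACCESS_KEY=...

# postgresql.conf — ship each completed WAL segment to S3:
archive_mode = on
archive_command = 'wal-g wal-push %p'

# For recovery, fetch WAL back from S3:
restore_command = 'wal-g wal-fetch %f %p'
```

Base backups are taken with wal-g backup-push against the data directory, typically on a schedule; the archived WAL between base backups is what makes point-in-time recovery possible.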

bkoehn@diaspora.koehn.com

Decided to spin up a local k3s cluster running on my (ARM64) laptop. One interesting bit about the Docker environment is how easily configurations migrate across platforms.

I'll add that spinning up a cluster in k3s is just running a single command per node: one for the server (control-plane) node and one for each of the agent nodes. It's trivial to automate and completes in seconds.
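For the record, the two commands are roughly these (the server hostname and token are placeholders):

```
# On the server (control-plane) node:
curl -sfL https://get.k3s.io | sh -

# On each agent node, pointing at the server:
curl -sfL https://get.k3s.io | K3S_URL=https://<server>:6443 K3S_TOKEN=<token> sh -
```

The join token is generated on the server at /var/lib/rancher/k3s/server/node-token, so automating the agent side is just a matter of copying that value over.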

Now I'm messing around with #ceph for managing high-availability #storage (filesystem and #s3) and #stolon for high-availability #postgres.

#docker #kubernetes #k3s #k8s #arm64 #buildx #ha