skilltheamps

joined 1 year ago
[–] skilltheamps@feddit.org 1 points 1 month ago

btrbk ... && curl https://uptime.my.domain/api/push/... is exactly what I do, in a systemd service with a nightly timer. Uptime Kuma sends a Matrix message (via a bot account on matrix.org) if it doesn't get a success notification within 25h. I have two servers in different locations that do mutual backups and mutual Uptime Kuma monitoring. Should both servers go down at the same time, there's also a basic, free healthcheck from my dynamic-IPv6 provider https://ipv64.net/, so I also get an email if either of the two Uptime Kumas becomes unreachable.
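Sketched as systemd units, the setup could look roughly like this (unit names, the push URL token, and paths are made up, not my exact files):

```ini
# /etc/systemd/system/backup-push.service (hypothetical name)
[Unit]
Description=Nightly btrbk backup with Uptime Kuma push

[Service]
Type=oneshot
# The push only fires if btrbk exits successfully; curl -f fails on HTTP errors
ExecStart=/bin/sh -c 'btrbk run && curl -fsS https://uptime.my.domain/api/push/TOKEN'

# /etc/systemd/system/backup-push.timer
[Timer]
OnCalendar=daily
Persistent=true

[Install]
WantedBy=timers.target
```

The 25h grace period lives on the Uptime Kuma side (the monitor's heartbeat interval), so a single missed night already triggers the Matrix alert.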

[–] skilltheamps@feddit.org 7 points 1 month ago

You need to ask yourself what properties you want in your storage, then you can judge which solution fits. For me it is:

  • effortless rollback (e.g. when a service updates, runs a database migration, and the migration fails)
  • effortless backups that preserve database integrity, without slow/cumbersome/downtime-inducing crutches like SQL dumps
  • a scheme that works the same way for every service I host, no tailored solutions for individual services/containers
  • low maintenance

The amount of data I'm handling fits on larger hard drives (so I don't need pools), but I don't want to waste storage space. And my homeserver is not my learn-and-break-stuff environment anymore; it just needs to work.

I went with btrfs RAID 1; every service is in its own subvolume. The containers are precisely referenced by their digest hashes, which get snapshotted together with all persistent data. So every snapshot holds exactly the amount of data required for a seamless rollback. Snapper maintains a timeline of snapshots for every service. Updating is semi-automated: snapshot -> update digest hashes from container tags -> pull new images -> restart service. Nightly offsite backups happen with btrbk, which incrementally mirrors the snapshots to another offsite server running btrfs.
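The update sequence can be sketched as a shell script. This is a hedged dry-run sketch: the service name, snapper config, image reference, and unit name are all assumptions, not my exact setup, and the `run` stub just prints each command instead of executing it.

```shell
# Sketch of the snapshot -> pin -> pull -> restart flow (dry-run).
set -eu
SERVICE=nextcloud                # hypothetical service / snapper config name
run() { echo "+ $*"; }           # dry-run stub: print instead of execute

# 1. snapshot the service's subvolume before touching anything
run snapper -c "$SERVICE" create --description pre-update
# 2. resolve the container tag to an immutable digest
run skopeo inspect --format '{{.Digest}}' docker://docker.io/library/nextcloud:latest
# 3. pull the newly pinned images
run podman-compose pull
# 4. restart the service unit
run systemctl restart "container-$SERVICE"
```

Dropping the `run` stub turns it into the real sequence; because the digest is recorded alongside the persistent data, rolling back the subvolume also rolls back exactly which image the service runs.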

[–] skilltheamps@feddit.org 2 points 1 month ago (1 children)

Rootless podman cannot bind ports <1024; by default only root can (on pretty much any distro, I guess). Have you done something like sysctl net.ipv4.ip_unprivileged_port_start=80 to allow non-root processes to bind port numbers >=80?
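If you want that to survive reboots, a sysctl drop-in does it (the filename here is just an example):

```ini
# /etc/sysctl.d/50-rootless-ports.conf (hypothetical filename)
# Kernel default is 1024; lowering it lets unprivileged users bind 80, 443, etc.
net.ipv4.ip_unprivileged_port_start = 80
```

Apply it without rebooting via sysctl --system. Note this loosens the restriction for every unprivileged process, not just podman.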

[–] skilltheamps@feddit.org 4 points 1 month ago

That's what I thought, but last time I looked I only saw a "release" tag, no "v2" tag. Did I miss something?

[–] skilltheamps@feddit.org 3 points 1 month ago

That server is also a homeserver I manage for family (in another city). The two homeservers then mutually back up each other.

[–] skilltheamps@feddit.org 2 points 1 month ago

They show images from the same day in past years. So if your library has no images at least a year old, I'm not sure anything shows up.

[–] skilltheamps@feddit.org 8 points 1 month ago (2 children)

The same way as all other services: all relevant data (the compose.yml and all volume mounts) lives in a btrfs subvolume. Every night a snapshot is made and mirrored to a remote server by btrbk.
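As a rough btrbk config sketch, assuming made-up volume paths, subvolume name, retention values, and target host (check the btrbk docs for your own layout):

```ini
# /etc/btrbk/btrbk.conf (hypothetical paths and host)
snapshot_preserve_min   2d
snapshot_preserve       14d
target_preserve         20d 10w

volume /srv
  snapshot_dir _btrbk_snapshots
  subvolume immich
    target ssh://backup.example.org/srv/backups
```

One `subvolume` stanza per service keeps the scheme identical for everything hosted, and the incremental send/receive means only the nightly delta crosses the wire.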