this post was submitted on 13 Dec 2025
Selfhosted

Hello! I've been self-hosting some basic stuff recently, including data storage, so I don't have to rely on external services like Google Drive.

It's working fine, but I've been wondering what the best backup solutions would be in case something unexpected and unfortunate happens (accidentally wiping everything, drives dying, electrical issues, the house burning down, that sort of thing).

I was wondering if more experienced self-hosters had recommendations about that?

Maybe storing a physical drive in an especially sturdy box? Perhaps using distant cold storage solutions? Or even something I have never heard of?

[–] witten@lemmy.world 1 points 5 days ago (1 children)

The only disadvantage I find is that there is no cross-system deduplication.

You could achieve this by having all machines write to a single Borg repository, where everything would get deduplicated. But downsides include:

  1. You lose everything if something goes wrong with that one repo.

  2. You'd have to schedule backups across all systems so they don't run at the same time, because the single repo can only have a single writer at once.
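For example, the single-writer constraint is usually handled by staggering each machine's schedule. A rough sketch with cron (hostnames, the repo URL, and the times are made up for illustration, not a recommendation):

```shell
# Hypothetical crontab entries, one per machine, both pointing at the
# same shared Borg repo over SSH.

# On machine "alpha": back up /home at 01:00
0 1 * * * borg create ssh://backup-host/./shared-repo::alpha-{now} /home

# On machine "beta": back up /home at 03:00, leaving a gap so the two
# jobs don't try to hold the repo lock at the same time
0 3 * * * borg create ssh://backup-host/./shared-repo::beta-{now} /home
```

Time-based staggering is only a heuristic, though: if one job overruns its window, the next one will fail on the repo lock unless you also pass something like `--lock-wait` with a generous timeout.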

[–] HelloRoot@lemy.lol 1 points 5 days ago* (last edited 5 days ago) (1 children)

I tried that once and it takes way longer to run a backup. I forget why exactly; something to do with running the comparisons against everything in the repo.

[–] witten@lemmy.world 2 points 5 days ago

It makes a certain amount of sense. More deduplication means more CPU (and I/O) spent on that work.