this post was submitted on 29 Jan 2026
192 points (91.7% liked)

Selfhosted


Anyone else just sick of trying to follow guides that cover 95% of the process, or that slightly miss a step, and then spending hours troubleshooting the setup just to get it to work?

I think I just have too much going on in my "lab", to the point that when something breaks (and my wife and/or kids complain) it's more of a hassle to try and remember how to fix or troubleshoot stuff. I only document lightly cuz I feel like I can remember well enough. But then it's a struggle to find the time to fix things, or stuff is tested and 80% completed but never fully used because life is busy and I don't have loads of free time to pour into this stuff anymore. I hate giving all that data to big tech, but I also hate trying to manage 15 different containers or VMs, or other services. Some stuff is fine/easy or requires little effort, but others just don't seem worth it.

I miss GUIs for this stuff, where I could fumble through settings to fix things; it's easier for me to look through all that than to read a bunch of commands.

Idk, do you get lab burnout? Maybe cuz I do IT for work too, it just feels like it's never ending...

(page 2) 50 comments
[–] HamsterRage@lemmy.ca 2 points 6 days ago

As an example, I was setting up SnapCast on a Debian LXC. It is supposed to stream whatever goes into a named pipe in the /tmp directory. However, recent versions of Debian do NOT allow other processes to write to named pipes in /tmp.

It took just a little searching to find this out, after quite a bit of fussing about changing permissions and sudoing to try to funnel random noise into this named pipe. After that, a bit of time to find the config files and change the pipe location to someplace that would work.
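
For anyone hitting the same wall: the fix boils down to pointing the stream source somewhere outside /tmp in /etc/snapserver.conf. Roughly like this (the path is just an example, use whatever suits your setup):

    # /etc/snapserver.conf -- [stream] section; the default is pipe:///tmp/snapfifo,
    # which other processes can no longer write to on current Debian
    [stream]
    source = pipe:///var/lib/snapserver/snapfifo?name=default&mode=create

Then restart snapserver and point whatever is producing audio at the new pipe path.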

Setting up the RPi clients with a PirateAudio DAC and SnapCast client also took some fiddling. Once I had it figured out on the first one, I could use the history stack to follow the same steps on the second and third clients. None of this stuff was documented anywhere, even though I would think that a top use of an RPi Zero with that DAC would be for SnapCast.
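
For the record, the client side ended up being just two config files once the quirks were known. Something like this (the server IP is a placeholder, and double-check Pimoroni's docs for the exact overlay lines for the Pirate Audio HAT):

    # /boot/config.txt -- enable the DAC, disable onboard audio (per Pimoroni's Pirate Audio docs)
    dtparam=audio=off
    dtoverlay=hifiberry-dac
    gpio=25=op,dh

    # /etc/default/snapclient -- options picked up by the packaged snapclient service
    SNAPCLIENT_OPTS="--host 192.168.1.10 --soundcard default"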

The point is that it seems like every single service has these little undocumented quirks that you just have to figure out for yourself. I have 35 years of experience as an "IT Guy", although mostly as a programmer. But I remember working on HP-UX 9.0 systems, so I've been doing this for a while.

I really don't know how people without a similar level of experience can even begin to cope.

[–] RickyRigatoni@retrolemmy.com 1 points 6 days ago

Trying to get PeerTube installed just to be able to organize my video library was a pain.

[–] brucethemoose@lemmy.world 1 points 6 days ago* (last edited 6 days ago) (6 children)

I find the overhead of docker crazy, especially for simpler apps. Like, do I really need 150GB of hard drive space, an extensive poorly documented config, and a whole nested computer running just because some project refuses to fix their dependency hell?

Yet it’s so common. It does feel like usability has gone on the back burner, at least in some sectors of software. And it’s such a relief when I read that some project consolidated dependencies down to C++ or Rust, and it will just run and give me feedback without shipping a whole subcomputer.

[–] dieTasse@feddit.org 1 points 6 days ago

What is your setup? I have TrueNAS, and there I use the apps that are easy to install (and the catalog is not small) and maintain. Basically, from time to time I just come and update (one button click). I have networking separate, and I had issues with Tailscale for some time, but there I had only 4 services in total, all docker containers and all except Tailscale straightforward and easy to update. Now I even moved those: one as a custom app to TrueNAS and the rest to Proxmox LXC, and that solved my Tailscale issue as well. And I am having a good time. But my rule of thumb: before I install anything, I ask myself if I REALLY need it, because otherwise I would end up with a jillion services that are cool, but not really that useful or practical.

I think what I would recommend to you: find a platform like TrueNAS, where lots of things are prepared for you, and don't bother too much with the custom stuff if you don't enjoy it. Also, I can recommend having a test rig or VM so that you can always try first whether something is easy to install and stable to use. There were occasions when I was trying stuff and it was just bothersome; I had to hack things, and in the end I was glad I didn't "pollute" my main server with it.

[–] EncryptKeeper@lemmy.world 59 points 1 week ago* (last edited 1 week ago) (7 children)

If a project doesn’t make it dead simple to manage via docker compose and environment variables, just don’t use it.

I run close to 100 services, all using docker compose, and it’s an incredibly simple, repeatable, self-documenting process. Spinning up something new is effortless and takes minutes to have it set up, accessible from the internet, and connected to my SSO.
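
To give an idea of what I mean by simple and repeatable, pretty much everything follows a skeleton like this (service name, image, ports, and variables are placeholders):

    # docker-compose.yml
    services:
      app:
        image: example/app:latest
        env_file: .env
        ports:
          - "8080:8080"
        volumes:
          - ./data:/data
        restart: unless-stopped

    # .env -- all the per-service settings live here
    TZ=Etc/UTC
    APP_DB_PASSWORD=changeme

One docker compose up -d later and it’s running, and backing it up is just copying the directory.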

Sometimes you see a program and it starts with “Clone this repo”, and it has a docker compose file, six env files, some extra config files, and consists of a front end container, back end container, database container, message queueing container, etc… just close that web page and don’t bother with that project lol.

That being said, I think there’s a bigger issue at play here. If you “work in IT” and are burnt out from “15 containers and a lack of a gui” I’m afraid to say you’re in the wrong field of work and you’re trying to jam a square peg in a round hole

[–] mrnobody@reddthat.com 22 points 1 week ago (1 children)

I agree with that 3rd paragraph lol. That's probably some of my issue at times. As far as IT goes, does it not get overwhelming when you've had a 9 hour workday, just to hear someone at home complain that this other thing you run doesn't work and now you have to troubleshoot that too?

Without going into too much detail, I'm a solo operation guy for about 200 end users. We're a Win11 and Office shop like most, and I've upgraded pretty much every system since my time starting. I've utilized some self-host options too, to help in the day to day which is nice as it offloads some work.

It's just that, especially after a long day, playing IT at home can be a bit much. I don't normally mind, but I think I just know the Windows stuff well enough through and through, so taking on new Docker or self-host tools is apples and oranges sometimes. Maybe I'm getting spoiled with all the turnkey stuff at work, too.

[–] hesh@quokk.au 26 points 1 week ago* (last edited 1 week ago) (2 children)

I wouldn't say I'm sick of it, but it can be a lot of work. It can be frustrating at times, but also rewarding. Sometimes I have to stop working on it for a while when I get stuck.

In any case, I like it a lot better than being Google's bitch.

[–] krashmo@lemmy.world 23 points 1 week ago (4 children)

Use portainer for managing docker containers. I prefer a GUI as well and portainer makes the whole process much more comfortable for me.
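
For anyone who wants to try it, the documented way to spin up Portainer CE is a single volume and container (check their docs for the current tag and ports):

    # create a volume for Portainer's data, then run the container
    docker volume create portainer_data
    docker run -d -p 9443:9443 --name portainer --restart=always \
      -v /var/run/docker.sock:/var/run/docker.sock \
      -v portainer_data:/data \
      portainer/portainer-ce:latest

Then browse to https://your-host:9443 and manage everything from the web UI.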

[–] WhyJiffie@sh.itjust.works 2 points 6 days ago

just know that sometimes their buggy frontend loads the analytics code even if you have opted out. there's an ages-old issue about this on their github repo, closed because they don't care.

It's matomo analytics, so not as bad as some big tech, but still.

[–] irmadlad@lemmy.world 12 points 1 week ago

+1 for Portainer. There are other such options, maybe even better, but I can drive the Portainer bus.

[–] Pika@sh.itjust.works 20 points 1 week ago* (last edited 1 week ago) (10 children)

I'm sick of everything moving to a docker image myself. I understand on a standard setup the isolation is nice, but I use Proxmox and would love to be able to actually use its isolation capabilities. The environment is already suited for the program. Just give me a standard installer for the love of tech.

[–] WhyJiffie@sh.itjust.works 1 points 6 days ago (1 children)

unless you have a zillion gigabytes of RAM, you really don't want to spin up a VM for each thing you host. the separate OSes have a huge memory overhead, with all the running services, cache memory, etc. the memory usage of most services can vary a lot, so if you could just assign 200 MB of RAM to each VM that would be moderate, but you can't, because when one needs more RAM than that, it will crash, possibly leaving operations half-finished and leading to corruption. and assigning 2 GB of RAM to every VM is a waste.

I use proxmox too, but I only have a few VMs, mostly based on how critical a service is.

[–] Pika@sh.itjust.works 1 points 6 days ago (2 children)

For VMs, I fully agree with you, but the best part about Proxmox is the ability to use containers, or CTs, which share system resources. So unlike a VM, if you specify that a container has two gigs of RAM, that just means it has up to two gigs it can use, whereas the VM is going to use that amount (and will crash if it can't get it).

These CTs do the equivalent of what Docker does, which is share the system space with other services while providing isolation, giving you a system that's easy to administer and back up while keeping everything separated by service.

For example, with a Proxmox CT I can do snapshots of the container itself before I do any type of work, whereas if I was using Docker on a primary machine, I would need to back up the Docker container completely. Additionally, having them as CTs means that I can run straight on the container itself instead of having to edit a Docker file which by design is meant to be ephemeral. If I had to choose between troubleshooting bare bones and troubleshooting a Docker container, I'm going to choose bare bones every step of the way. (You can even run an Alpine CT if you would rather keep the average Docker container setup.)
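
Concretely, it's just this from the Proxmox host shell (the CT ID and snapshot name are examples):

    # snapshot CT 101 before poking at it, roll back if things go sideways
    pct snapshot 101 pre-upgrade
    pct rollback 101 pre-upgrade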

Also, for the over-committing thing, be aware that the issue you've stated will happen with a Docker setup as well. Docker doesn't care about the amount of RAM the system is allotted, and when you over-allocate the system, RAM-wise, it will start killing containers, potentially leaving them in the same state.

Anyway, long story short, Docker containers do basically the same thing that a Proxmox CT does. It's just ephemeral instead of persistent, and designed to be plug-and-go, which I've found isn't super handy in a Proxmox-style setup, because a lot of the time I want to share resources, such as a dedicated database or caching system, which is generally a pain in the butt to implement on Docker setups.

[–] WhyJiffie@sh.itjust.works 1 points 6 days ago (1 children)

oh, LXC containers! I see. I never used them because I find the LXC setup more complicated; I once tried to use a TurnKey Samba container but couldn't even figure out where to add the container image in LXC, or how to start it any other way.

but also, I like that this way my random containerized services use a different kernel, not the main proxmox kernel, for isolation.

Additionally, having them as CTs means that I can run straight on the container itself instead of having to edit a Docker file which by design is meant to be ephemeral.

I don't understand this point. on docker, it's rare that you need to touch the Dockerfile (which contains the container image build instructions). did you mean the docker compose file? or a script file that contains a docker run command?

also, you can run commands or open a shell in any container with docker, except if the container image does not contain any shell binary (but even then, copying a busybox or something to a volume of the container would help), but that's rare too.
you do it like this: docker exec -it containername command. bit lengthy, but bash aliases help
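
e.g. something like this in ~/.bashrc makes it painless (names are just examples):

    # shorthand for exec-ing into containers
    alias dex='docker exec -it'
    # "dsh <container>" drops you into a shell in that container
    dsh() { docker exec -it "$1" sh; }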

Also, for the over-committing thing, be aware that the issue you've stated will happen with a Docker setup as well. Docker doesn't care about the amount of RAM the system is allotted, and when you over-allocate the system, RAM-wise, it will start killing containers, potentially leaving them in the same state.

in docker I don't allocate memory, and it's not common to do so. it shares the system memory with all containers. docker has a rudimentary resource limit thingy, but what's better is you can assign containers to a cgroup, and define resource limits or reservations that way. I manage cgroups with systemd ".slice" units, and it's easier than it sounds
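
a rough sketch of what I mean, assuming docker is set to the systemd cgroup driver (names and limits are placeholders):

    # /etc/systemd/system/limited.slice -- everything under this slice shares these caps
    [Slice]
    MemoryMax=2G
    CPUQuota=200%

    # start a container under that slice
    docker run -d --cgroup-parent=limited.slice --name myapp example/app:latest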

[–] Pika@sh.itjust.works 1 points 6 days ago

They are very nice. They share kernelspace, so I can understand wanting isolation, but the ability to just throw a base Debian container on, assign it a resource pool and resource allocation, and install a service directly to it, while having it isolated from everything, without having to use Docker's ephemeral-by-design system (which does have its perks, but I hate troubleshooting containers on it) or a full VM, is nice.

And yes, by Docker file I would mean either the Dockerfile or the compose file (usually compose). By straight on the container I mean on the container itself: my CTs don't run Docker, period, aside from the one that has the primary Docker stack. So I don't have that layer to worry about on most CTs.

As for the memory thing, I was just mentioning that Docker does the same thing that containers do if you don't have enough RAM for what's been provisioned. The way I had taken the original post is that specifying 2 gigs of RAM to the point the system exhausts its RAM would cause corruption and crashes, which is true, but Docker falls prey to the same issue if the system exhausts its RAM. That's all I meant by it. Also, cgroups sound cool; I gotta say I haven't messed with them a whole lot. I wish Proxmox had a better resource share system to designate a specific group as having X amount of max resources, and then have the CTs or VMs use those pools.

[–] EncryptKeeper@lemmy.world 1 points 6 days ago* (last edited 6 days ago) (1 children)

I’m really confused here, you don’t like how everything is containerized, and your preferred method is to run Proxmox and containerize everything, but in an ecosystem with less portability and tooling?

[–] Pika@sh.itjust.works 1 points 6 days ago* (last edited 6 days ago) (1 children)

I don't like how everything is docker containerized.

I already run proxmox, which containerizes things by design with their CT's and VM's

Running a docker image on top of that is just wasting system resources (while also complicating the troubleshooting process). It doesn't make sense to run a CT or VM for a container, just to put docker on it and run another container via that. It also completely bypasses everything that proxmox provides you for snapshotting and backup, because proxmox's system covers the entire container, and if all services are running in the same container, all services are going to be snapshotted together.

My current system allows me to have per service snapshots(and backups), all within the proxmox webUI, all containerized, and all restricted to their own resources. Docker is just not needed at this point.

A docker system just adds extra overhead that isn't needed. So yes, just give me a standard installer.

[–] EncryptKeeper@lemmy.world 1 points 6 days ago* (last edited 6 days ago) (1 children)

Nothing is “docker containerized”. Docker is just a daemon and set of tools for managing OCI compliant containers.

Running a docker image ontop of that is just wasting system resources.

No? If you spun up one VM in Proxmox and installed docker and used it to run 10 containers, that would use fewer system resources than running 10 LXC containers directly on Proxmox.

Like… you don’t like that the industry has adopted this efficient, portable, interchangeable, flexible, lightweight, mature technology, because you prefer the heavier, less flexible, less portable, non-OCI-compliant alternative?

[–] Pika@sh.itjust.works 0 points 6 days ago* (last edited 6 days ago) (1 children)

Are you saying that running docker in a container setup (which at this point would be 2 layers deep) uses fewer resources than 10 single-layer-deep containers?

I can agree with the statement that a single VM running docker with 10 containers uses less than 10 CTs, each with docker installed and running their own containers (but that's not what I do, or what I am asking for).

I currently do use one CT that has docker installed with all my docker images, which I wouldn't do if I had the ability not to (but some apps require docker), but this removes most of the benefits you get from using proxmox in the first place.

One of the biggest advantages of using the hypervisor as a whole is the ability to isolate and run services as their own containers, without the need of actually entering the machine. (Like, for example, if I'm screwing with a server, I can just snapshot the current setup and then roll back if it isn't good.) Throwing everything into a VM with docker bypasses that while adding overhead to the system. I would need to back up the compose file (or however you are composing it) and the container, and then do my changes. My current system is one click to make my changes and, if it's bad, one click to revert.

As for the resource explanation: installing docker into a VM on proxmox and then running every container in that does waste resources. You have the resources that docker requires to function (currently 4 gigs of RAM per their website, though when testing I've seen as low as 1 gig work fine), plus CPU and whatever storage it takes up (about half a gig or so), all inside a VM (which also uses more processing and RAM than CTs do, as they no longer share resources). Compared to 10 CTs that are fine-tuned to their specific app, you will have better performance running the CTs than a VM running everything, while keeping your ability to snapshot and removing the extra layer and the ephemeral design that docker has (this can be a good and a bad thing, but when troubleshooting I lean towards good).

edit: clarification and general visibility so it wasn't all bunched together.

[–] EncryptKeeper@lemmy.world 2 points 6 days ago* (last edited 6 days ago) (1 children)

Are you saying that running docker in a container setup (which at this point would be 2 layers deep) uses fewer resources than 10 single-layer-deep containers?

If those 10 single-layer-deep containers are Proxmox’s LXC containers then yes, absolutely. OCI containers are isolated processes that run single services, usually just a single binary. There’s no OS, no init system. They’re very lightweight with very little overhead. They’re “containerized services”. LXC containers, on the other hand, are heavy “system containers” that have a full OS and user space, init system, file systems, etc. They are one step removed from being full-size VMs, short of the fact that they share the host’s kernel and don’t need to virtualize. In short, your single LXC running docker and a bunch of containers inside of it is far more resource-efficient than running a bunch of separate LXC containers.

One of the biggest advantages of using the hypervisor as a whole is the ability to isolate and run services as their own containers, without the need of actually entering the machine

I mean that’s exactly what docker containers do but more efficiently.

I can just snapshot the current setup and then roll back if it isn't good

I mean that’s sort of the entire idea behind docker containers as well. It can even be automated for zero downtime updates and deployments, as well as rollbacks.
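
The manual version is just two commands, and tools like Watchtower can automate the same thing on a schedule:

    # pull newer images and recreate only the containers that changed
    docker compose pull
    docker compose up -d
    # rolling back is just pinning the previous image tag in the compose file and running "up -d" again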

Compared to 10 CTs that are fine-tuned to their specific app, you will have better performance running the CTs than a VM running everything

That is incorrect. Let’s break away from containers and VMs for a second and look deeper into what is happening under the hood here.

Option A (Docker + containers): one OS, one init system, one full set of Linux libraries.

Option B (10 LXC containers): ten operating systems, ten separate init systems, ten separate sets of full Linux libraries.

Option A is far more lightweight, and becomes a more attractive option the more services you add.

And not only that, but as you found out, you don’t need to run a full VM for your docker host. You could just use an LXC. Though in that case I’d still prefer the one VM, so that your containers aren’t sharing your Proxmox Host’s kernel.

Like, LXCs do have a use case, but it sounds like you’re using them as an alternative to regular service containers, and that’s not really what they’re for.

[–] Pika@sh.itjust.works 1 points 6 days ago (1 children)

Your statements are surprising to me, because when I initially set this system up I tested against that because I had figured similar.

My original layout was a full docker environment under a single VM which was only running Debian 12 with docker.

I remember seeing a good 10 GB difference in RAM usage between offloading the machines from the docker instance onto their own CTs and keeping them all as one unit. I guess this could be chalked up to the docker container implementation being bad, or something being wrong with the VM. It was my primary reason for keeping them isolated; it was a win/win because services had better performance and it was easier to manage.

[–] EncryptKeeper@lemmy.world 1 points 6 days ago (1 children)

There are a number of reasons why your docker setup was using too much RAM, including just poorly built containers. You could also swap out docker for podman, which is daemonless and rootless, and registers container workloads with systemd. So if you’re married to the LXCs you can use that for running OCI containers. Also a new version of Proxmox enabled the ability to run OCI containers using LXCs so you can run them directly without docker or podman.
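
For reference, with recent podman you can even skip the generate step and drop a Quadlet file in place; something like this (image, ports, and paths are placeholders):

    # /etc/containers/systemd/myapp.container -- podman >= 4.4 turns this into myapp.service
    [Container]
    Image=docker.io/example/app:latest
    PublishPort=8080:8080
    Volume=/srv/myapp:/data

    [Install]
    WantedBy=multi-user.target

    # then: systemctl daemon-reload && systemctl start myapp.service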

[–] Pika@sh.itjust.works 1 points 6 days ago

Yea I plan to try out the new Proxmox version at some point to try that out, thank you again.

[–] atzanteol@sh.itjust.works 20 points 1 week ago (2 children)

Sounds like you haven't taken the time to properly design your environment.

Lots of home gamers just throw stuff together and just "hack things till they work".

You need to step back and organize your shit. Develop a pattern, automate things, use source control, etc. Don't just blindly follow the weirdly-opinionated setup instructions. Make it fit your standard.

[–] mrnobody@reddthat.com 2 points 6 days ago (1 children)

This. I definitely need to take the time to organize. A few months ago, I set up a new 4U Rosewill case w/ 24 hot-swap bays. Expanded my storage quite a bit, but need to finish moving some services too. I went from a big outdated SMC server to reusing an old gaming mobo, since it's an i7 but 95W vs 125W x2 lol.

It took a week just to move all my Plex data cuz that Supermicro was only 1GbE.

[–] non_burglar@lemmy.world 2 points 6 days ago (1 children)

only 1gbE

What needs more than 1gbe? Are you streaming 8k?

Sounds like you are your own worst enemy. Take a step back and think about how many of these projects are worth completing and which are just for fun and draw a line.

And automate. There are tools to help with this.

[–] WhyJiffie@sh.itjust.works 2 points 6 days ago (1 children)

What needs more than 1gbe? Are you streaming 8k?

I think they meant that it was a bottleneck while moving to the new hardware.

[–] mrnobody@reddthat.com 1 points 6 days ago

Yeah, transferring 80TB took what felt like an eternity (over 1GbE that's roughly a week of continuous copying even at line rate). My Plex box has a 2.5GbE NIC and my switch is 10GbE, but the SFP+ NIC in the storage server wasn't playing well...

[–] chrash0@lemmy.world 20 points 1 week ago (4 children)

honestly, i 100% do not miss GUIs that hopefully do what you want them to do or have options grayed out or don’t include all the available options etc etc

i do get burnout, and i suffer many of the same symptoms. but i have a solution that works for me: NixOS

ok it does sound like i gave you more homework, but hear me out:

  • with NixOS and flakes you have a commit history for your lab services, all centralized in one place (a minimal sketch is below this list)
  • this can include as much documentation as you want: inline comments, commit messages, living documents in your repository, whatever
  • even services that only provide a Docker based solution can be encapsulated and run by Nix, including using an alternate runtime like podman or containerd
  • (this one will hammer me with downvotes but i genuinely do think that:) you can use an LLM agent like GitHub Copilot to get you started, learn the Nix language and ecosystem, and create Nix modules for things that need to be wrapped. i’ve been a software engineer for 15 years; i’ve got nothing to prove when it comes to making a working system. what i want is a working system.
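
a minimal sketch of what that looks like (the hostname and example services are placeholders, not a recommendation):

    # flake.nix
    {
      inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixos-25.05";

      outputs = { self, nixpkgs }: {
        nixosConfigurations.homelab = nixpkgs.lib.nixosSystem {
          system = "x86_64-linux";
          modules = [
            ./hardware-configuration.nix
            {
              # each service is one declarative, commented, version-controlled block
              services.jellyfin.enable = true;
              # even docker-only projects can be wrapped as OCI containers
              virtualisation.oci-containers.containers.someapp = {
                image = "example/app:latest";
                ports = [ "8080:8080" ];
              };
            }
          ];
        };
      };
    }
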
[–] Fedegenerate@lemmynsfw.com 1 points 6 days ago

I'm gonna make the jump to nixOS eventually. I'm just about comfortable with YAML and only in the context of docker-compose. The leap from that to nix seems too great. I'll start this year though.

[–] corsicanguppy@lemmy.ca 14 points 1 week ago* (last edited 6 days ago) (1 children)

You're not alone.

The industry itself has become pointlessly layered like some origami hell. As a former OS security guy I can say it's not in a good state with all the supply-chain risks.

At the same time, many 'help' articles are karma-farming 'splogs' of such low quality, and/or just slop, that they're not really useful. When something's missing, it feels to our imposter syndrome like it's a skills issue.

Simplify your life. Ditch and avoid anything with containers or bizarre architectures that feels too intricate. Decide what you need and run those on really reliable options. Auto patching is your friend (but choose a distro and package format where it's atomic and rolls back easily).
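
One concrete example of the "atomic and rolls back easily" idea, on an image-based distro like Fedora IoT/CoreOS (other ecosystems have their own equivalents):

    # stages the new image; it only takes effect on the next boot
    rpm-ostree upgrade
    # boot back into the previous deployment if the new one misbehaves
    rpm-ostree rollback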

You don't need to come home only to work. This is supposed to be FUN for some of us. Don't chase the Joneses, but just do what you want.

Once you've simplified, get in the habit of going outside. You'll feel a lot better about it.

[–] TropicalDingdong@lemmy.world 11 points 1 week ago* (last edited 1 week ago)

Proxmox?

And yes. It's like a full-time job to homelab. Or a part-time job. It's just hard, and sometimes things just don't work.

I guess one answer is to pick your battles. You can't win them all. But things are objectively better than they were in the past.

[–] Decronym@lemmy.decronym.xyz 11 points 1 week ago* (last edited 12 hours ago)

Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I've seen in this thread:

Git - Popular version control system, primarily for code
IP - Internet Protocol
IoT - Internet of Things for device controllers
LAMP - Linux-Apache-MySQL-PHP stack for webhosting
LXC - Linux Containers
Plex - Brand of media server package
RPi - Raspberry Pi brand of SBC
SBC - Single-Board Computer
SMB - Server Message Block protocol for file and printer sharing; Windows-native
SSO - Single Sign-On
VPS - Virtual Private Server (opposed to shared hosting)
10 acronyms in this thread; the most compressed thread commented on today has 4 acronyms.

[Thread #40 for this comm, first seen 29th Jan 2026, 05:20] [FAQ] [Full list] [Contact] [Source code]
