It's a mess. I'm even moving to a different field in IT because of this.
I definitely feel the lab burnout, but I feel like Docker is kind of the solution for me... I know how Docker works, it's pretty much set and forget, and ideally it's totally reproducible. Docker Compose files are pretty much self-documenting.
Random GUI apps end up being waaaay harder to maintain because I have to remember "how do I get to the settings? How did I have this configured? What port was this even on? How do I back up these settings?" rather than just keeping a couple of text config files in a git repo. It's also much easier to revert to a working version if I try to update a Docker container and fail, or get tired of trying to fix it.
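A compose file really does double as documentation, too: the image version, the port, and the data paths all sit in one file under version control, so rolling back is basically a git revert plus a re-up. A minimal sketch, where the service name, image, tag, ports, and paths are all placeholders:

```yaml
# compose.yaml -- hypothetical example; image, tag, ports, and paths are placeholders
services:
  myapp:
    image: example/myapp:1.2.3     # pin a tag so "git revert" actually rolls back the version
    ports:
      - "8080:80"                  # answers "what port was this even on?"
    volumes:
      - ./config:/config           # answers "how do I back up these settings?"
      - ./data:/data
    restart: unless-stopped
```

`docker compose up -d` after a revert puts it back in the last known-good state, and backing it up is just copying the directory.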
I'm currently running three hosts with a collection of around 40 containers.
One is the house host, one is the devops host, and one is the AI host.
I maintain images on the devops host and deploy them regularly. When a host or a container goes down, I'm notified through MQTT on my phone. All hosts, services, ports, certs, etc. are monitored.
No problems here. Git gud, I suppose?
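The MQTT part sounds fancier than it is: a cron'd health check that publishes to the broker when something fails, and a phone app subscribed to the topic. Stripped down it's roughly this; the broker name, topic, and URLs are placeholders, and mosquitto_pub comes from the mosquitto-clients package:

```sh
#!/bin/sh
# healthcheck.sh -- rough sketch; broker, topic, and service URLs are placeholders
BROKER="mqtt.lan"
TOPIC="homelab/alerts"

for url in https://jellyfin.lan/health https://git.lan https://grafana.lan/api/health; do
    # -f makes curl fail on HTTP error codes, so any bad response counts as "down"
    if ! curl -fsS --max-time 10 "$url" >/dev/null; then
        mosquitto_pub -h "$BROKER" -t "$TOPIC" -m "DOWN: $url at $(date -Is)"
    fi
done
```

Cert and port checks are the same idea with a different probe in place of curl.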
And honestly, 40 isn't even impressive. I run more than that on one host. How much easier containers make life is unreal.
Once you understand them, I suppose it's easier. I've got a mix of Win10, Linux VMs, RPis, and Docker.
Having grown up on Windows, it's second nature now, and I do it for work too. I started on Linux only around 2010 or so but kept flipping between the two. These days I'm trying to cut the power bill, so I went with RPis, and with other costs to cut as well, Docker is still relatively new to me from the last few years. Understand that I also only touch it now and then on projects, so it's hard to dedicate enough time to learn it well enough to be comfortable. It also didn't help that I started on Docker Desktop, which apparently everyone hates, and that may have been part of my problem adopting it.
I probably also started with Linux seriously around that time frame. I was a Windows admin back then too. Transitioning to Linux and containers was the best thing ever: you get out of dependency hell and stop having cruft all over your filesystem. I'm extremely biased, though; I work for Red Hat now. Containers and Linux are my day job.
Dang, how'd you make that transition? Are you a dev or SWE?
I just liked Linux better, so I learned it. That's kind of my whole career: I want to do something, so I get certified in it and start looking for a way into it. I'm in consulting. I come in and help people set up OpenShift while teaching them how to use it, then move on to the next customer.
As an example, I was setting up SnapCast on a Debian LXC. It is supposed to stream whatever goes into a named pipe in the /tmp directory. However, recent versions of Debian do NOT allow other processes to write to named pipes in /tmp.
It took just a little searching to find this out, after quite a bit of fussing about changing permissions and sudoing to try to funnel random noise into this named pipe. After that, it took a bit of time to find the config file and point the pipe somewhere that would work.
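For anyone who hits the same thing: the culprit is most likely the fs.protected_fifos sysctl that newer Debian enables by default, and the workaround is to move the FIFO out of /tmp and point the stream source at it. This is from memory, so check the Snapcast docs for the exact source syntax:

```ini
# /etc/snapserver.conf -- sketch from memory; verify the pipe source syntax against the Snapcast docs
[stream]
# FIFO lives outside /tmp, so the protected-fifos restriction no longer blocks other writers
source = pipe:///var/lib/snapserver/snapfifo?name=default&mode=create
```

The directory just needs to be writable by both snapserver and whatever feeds the pipe.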
Setting up the RPi clients with a PirateAudio DAC and the SnapCast client also took some fiddling. Once I had it figured out on the first one, I could use the shell history to follow the same steps on the second and third clients. None of this stuff was documented anywhere, even though I would think that a top use of an RPi Zero with that DAC would be SnapCast.
The point is that it seems like every single service has these little undocumented quirks that you just have to figure out for yourself. I have 35 years of experience as an "IT Guy", although mostly as a programmer. But I remember working on HP-UX 9.0 systems, so I've been doing this for a while.
I really don't know how people without a similar level of experience can even begin to cope.
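In case it saves someone else the trial and error, the client side boiled down to roughly this. The overlay lines are from memory, so double-check Pimoroni's Pirate Audio docs, and the server hostname is obviously a placeholder:

```sh
# On each RPi client -- rough sketch from shell history, not gospel
sudo apt install snapclient

# /boot/firmware/config.txt (or /boot/config.txt on older images), from memory:
#   dtoverlay=hifiberry-dac    # the Pirate Audio DAC shows up as a HiFiBerry-style card
#   gpio=25=op,dh              # drives the amp enable pin, if I remember right

# Point the client at the server, then let systemd keep it running
echo 'SNAPCLIENT_OPTS="--host snapserver.lan"' | sudo tee /etc/default/snapclient
sudo systemctl enable --now snapclient
sudo reboot   # so the config.txt change takes effect
```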
I don't consider an app deployable until I can run a single script and watch it come up. For instance, I don't run docker/podman containers raw; it's always with a compose file and/or other orchestration. Not consciously, but I probably kill and restart it several times just to be sure it's reproducible.
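The "single script" is usually nothing fancy, something along these lines (the directory and project names are just placeholders):

```sh
#!/bin/sh
# redeploy.sh -- hypothetical sketch; directory and project names are placeholders
set -eu
cd /srv/myapp

git pull --ff-only                      # the compose file and configs live in git
docker compose pull                     # fetch the pinned images
docker compose up -d --remove-orphans   # recreate anything whose image or config changed
docker compose ps                       # quick check that everything came back
```

If that can't take the stack from zero to running on a clean host, I don't call it deployed.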
I manage all my services with systemd. Simple services like kanidm that are just a single native executable run bare metal under a dedicated user. More complex setups like Immich, or anything that needs a Python venv, run from a Docker Compose file that gets managed by systemd. Each service has its own user and its own directory.
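The units themselves are boring on purpose. Roughly this shape, with the name, user, and paths as examples rather than anything canonical, and assuming the service user is allowed to talk to the Docker daemon (or you swap in rootless podman):

```ini
# /etc/systemd/system/immich.service -- rough sketch; name, user, and paths are examples
[Unit]
Description=Immich (docker compose stack)
Requires=docker.service
After=docker.service network-online.target

[Service]
Type=oneshot
RemainAfterExit=yes
User=immich
WorkingDirectory=/srv/immich
ExecStart=/usr/bin/docker compose up -d --remove-orphans
ExecStop=/usr/bin/docker compose down
TimeoutStartSec=600

[Install]
WantedBy=multi-user.target
```

That way systemctl gives one place to see what's running, and journalctl -u immich catches the compose output on start and stop.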
What is your setup? I have TrueNAS, and there I use the apps that are easy to install (and the catalog is not small) and maintain. Basically, from time to time I just come in and update (one button click). I have networking separate, and I had issues with Tailscale for some time, but there I had only 4 services in total, all Docker containers, and all except Tailscale straightforward and easy to update. Now I've even moved those: one as a custom app to TrueNAS and the rest to Proxmox LXCs, which solved my Tailscale issue as well. And I am having a good time. But my rule of thumb: before I install anything, I ask myself if I REALLY need it, because otherwise I would end up with a jillion services that are cool but not really that useful or practical.
What I would recommend to you: find a platform like TrueNAS, where a lot of things are prepared for you, and don't bother too much with the custom stuff if you don't enjoy it. I can also recommend having a test rig or VM so you can always try things first and see if they're easy to install and stable to use. There were occasions when I was trying stuff and it was just bothersome; I had to hack things together, and in the end I was glad I hadn't "polluted" my main server with it.