Selfhosted

51510 readers
172 users here now

A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don't control.

Rules:

  1. Be civil: we're here to support and learn from one another. Insults won't be tolerated. Flame wars are frowned upon.

  2. No spam posting.

  3. Posts have to be centered around self-hosting. There are other communities for discussing hardware or home computing. If it's not obvious why your post topic revolves around selfhosting, please include details to make it clear.

  4. Don't duplicate the full text of your blog or GitHub page here. Just post the link for folks to click.

  5. Submission headline should match the article title (don’t cherry-pick information from the title to fit your agenda).

  6. No trolling.

Resources:

Any issues with the community? Report them using the report flag.

Questions? DM the mods!

founded 2 years ago
1

Hello everyone! Mods here 😊

Tell us, what services do you selfhost? Extra points for selfhosted hardware infrastructure.

Feel free to take it as a chance to present yourself to the community!

🦎

2

Hey, so I recently had the idea of proposing some new ideas for the IT infrastructure of my local scouts organisation, mainly its own Nextcloud instance and website (and if that works well, maybe a Matrix server and wiki, but the website and Nextcloud are much higher priority right now). I am wondering what the best way to do the hosting would be. Using a VPS would be pretty nice because there would be no upfront cost, but we would have to pay a monthly fee, and that's pretty hard to pitch for a new and untested idea, especially because we don't have much regular funding/income. The other option would be to self-host on hardware that stays in the building, but then we would have a pretty steep upfront cost, and I am not 100 percent sure we even have a proper network in the building.

The main thing I am trying to ask here is whether any of you have ever done something similar before, and if so, how you did it. I am also thankful for any advice in general. I have done this already for my family, but doing it for an entire organisation is an entirely different thing. Thank you very much in advance!

3

This is less of an issue with movies and much more of a problem with TV shows. It seems many of the shows I watch aren't encoded with subs included.

I've got the Open Subtitles plugin installed in Jellyfin.

If I set my library to only download perfect matches, it gets almost none.

If I set it to grab any, they're more often than not mistimed, and then I have to take the manual shotgun approach. Doing this for each episode creates massive admin overhead.

Is there a better way?

4

Checking for this, I see that the only one was the old unshort.link, but the repo was discontinued.

5

I've recently been dabbling in Rust, and I have been mostly doing that on my laptop. However, I also have a desktop, and once in a while I would like to resume my work from the laptop without manual file transfers.

I know git by design does this, but I would like to use my current Docker setup on Ubuntu Server to host a very simple git server.

What would be the simplest git server for this situation? Keep in mind I am not planning to expose any of this to the internet.
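For what it's worth, the simplest "git server" needs no server software at all: over LAN SSH, a bare repository on the Ubuntu box is enough, and git handles all the syncing. A minimal sketch, with hostname, user and paths as placeholder examples:

```bash
# On the Ubuntu server: create a bare repository (no working tree)
git init --bare ~/repos/myproject.git

# On the laptop: point the existing project at it and push
git remote add home ssh://user@server/home/user/repos/myproject.git
git push -u home main

# On the desktop: clone once, then pull/push as usual
git clone ssh://user@server/home/user/repos/myproject.git
```

If you later want a web UI in your Docker setup, something like Gitea works, but for two machines syncing a few projects, plain SSH is hard to beat.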

6

Mind that I am very much a noob at self-hosting, reverse proxies and the like.

When I saw that Caddy automatically handled the HTTPS thingies, I thought "this is my moment, then, to get into self-hosting". Caddy seemed so simple.

Turns out... I am suddenly discovering that the connection between the Caddy machine and the Home Assistant machine (both on the local network) is unencrypted. So if another appliance on my local network went rogue... bam, all my info gets leaked... right?

This might sound weird because it might actually be super-duper complicated but... how come in 2025 we still don't auto-encrypt local comms?
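For the record, Caddy can encrypt that last hop too, as long as the backend itself speaks HTTPS. A hedged sketch (the domain, IP and port are examples, not your actual setup):

```bash
# Example Caddyfile: proxy to Home Assistant over HTTPS on the LAN.
cat > /etc/caddy/Caddyfile <<'EOF'
ha.example.com {
    reverse_proxy https://192.168.1.10:8123 {
        transport http {
            # Encrypts the LAN hop. Removing this and trusting HA's
            # certificate properly also defends against an impersonator.
            tls_insecure_skip_verify
        }
    }
}
EOF
systemctl reload caddy
```

The catch, and partly the answer to the "why not auto-encrypt" question, is that Home Assistant serves plain HTTP by default: the backend needs its own certificate before there is anything for Caddy to encrypt to.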

Please be kind. Lots of love. Hopefully I'll dig my way to self-hosting wisdom.

7

I recently upgraded my setup from an RPi running DietPi to a Beelink 14 (N150) running Proxmox. So far it’s been fun screwing around with it, creating VMs and LXCs, and getting to learn the ways of Proxmox.

My latest obstacle, however, was migrating my Plex setup from the RPi to the Beelink. I have created an unprivileged LXC and set up Plex manually. I know there is a Community Helper Script for it, but where is the fun in that?

Anyway, I am trying to enable HW acceleration and can't seem to pass the GPU through to the LXC without breaking things (thankfully I have a backup that I always restore once things break).

I looked up tutorials online that might help, but I can't seem to find anything applicable; mostly people suggest just using the Community Helper Script and getting it over with. There isn't much I can learn doing it the easy way.

Can anyone suggest how to go about this, or at least point me in the right direction?
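For reference, a hedged sketch of the usual manual approach (container ID 101 is an example): bind the host's /dev/dri into the LXC and allow the DRM character devices (major number 226) through cgroups, by appending to the container's config on the Proxmox host:

```bash
# Append to /etc/pve/lxc/101.conf, then restart the container.
cat >> /etc/pve/lxc/101.conf <<'EOF'
lxc.cgroup2.devices.allow: c 226:0 rwm
lxc.cgroup2.devices.allow: c 226:128 rwm
lxc.mount.entry: /dev/dri dev/dri none bind,optional,create=dir
EOF
```

On an unprivileged container the render node will typically show up owned by nobody, so the remaining step is making /dev/dri/renderD128 readable by the plex user (group mapping or permission tweaks), which is usually where things break.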

Thank you.

8

I wanted to share a service I'm hosting, but didn't feel comfortable just leaving it publicly accessible, even behind a reverse proxy. At the same time, I did not want to grant access to my whole LAN with a VPN, or redirect all of a client's internet traffic through my network. So the idea is to run a WireGuard instance on my OpenWRT router in a completely isolated zone (input, output and forward set to reject on the firewall) and then forward a single port to the service host. The client is Android, so using WG Tunnel and split-tunneling just the relevant app should not impair the client's network access. Initial tests seem to be OK; is there anything I may have overlooked? Please feel free to comment.
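For reference, the firewall piece of that design might look like this in UCI (zone name, interface, IP and port are examples): a fully rejecting zone for the WireGuard interface, plus one traffic rule that permits forwarding to the single service:

```bash
# Isolated zone for the WireGuard interface: reject everything by default.
uci set firewall.wgiso=zone
uci set firewall.wgiso.name='wgiso'
uci add_list firewall.wgiso.network='wg0'
uci set firewall.wgiso.input='REJECT'
uci set firewall.wgiso.output='REJECT'
uci set firewall.wgiso.forward='REJECT'

# Single exception: allow VPN clients to reach one port on one LAN host.
uci add firewall rule
uci set firewall.@rule[-1].name='wg-to-service'
uci set firewall.@rule[-1].src='wgiso'
uci set firewall.@rule[-1].dest='lan'
uci set firewall.@rule[-1].dest_ip='192.168.1.50'
uci set firewall.@rule[-1].dest_port='8080'
uci set firewall.@rule[-1].proto='tcp'
uci set firewall.@rule[-1].target='ACCEPT'

uci commit firewall && /etc/init.d/firewall restart
```

One thing to double-check is the zone's input policy: the WireGuard handshake itself arrives on the WAN zone, but DNS and other router services should stay unreachable from the isolated zone, which REJECT on input covers.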

9

I am trying to set up a home server. Here is how I'm planning to do it:

- /dev/nvme0n1: SSD, Proxmox, VMs & CTs
- /dev/sda: HDD, media library
- /dev/sdb: HDD, backup

I've installed Proxmox on the NVMe SSD and created a few VMs and CTs to play with.

I have also partitioned /dev/sda and created a ZFS partition on /dev/sda1, made a pool /pool and a dataset /pool/data.

I plan to put media files on /pool/data, bind-mount it into a container, and run Jellyfin to serve them.

I can schedule backup jobs for the VMs and CTs themselves on Proxmox, but I'm not sure how to backup the media files on /pool/data to /dev/sdb.

  1. How would one go about setting up such backups? Do I need to set up something like a cron job with rsync, or is there an easier ready-made solution (see the sketch after this list)? Ideally it'd be something like Proxmox's VM backup jobs that let me prune and keep some copies daily/weekly/monthly/yearly.

  2. What filesystem should I use for the backup drive/partition? Is there an advantage to using ZFS to back up ZFS?

  3. Can ZFS snapshots be used on /pool/data for additional protection? If so, how do I set up, for example, automatic daily snapshots? Do snapshots take up little space if the files rarely change?
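On questions 1 and 3, a minimal sketch of the cron-based approach (dataset names, dates and mount points are examples; Proxmox has no built-in backup job type for arbitrary directories):

```bash
# Option A: plain rsync to the backup drive (any filesystem on /dev/sdb),
# scheduled daily at 03:00; only new or changed files are copied.
echo '0 3 * * * root rsync -a --delete /pool/data/ /mnt/backup/data/' > /etc/cron.d/media-backup

# Option B: if /dev/sdb is also ZFS, snapshot + incremental send only
# transfers blocks changed since the previous snapshot and keeps history.
zfs snapshot pool/data@2025-01-02
zfs send -i pool/data@2025-01-01 pool/data@2025-01-02 | zfs recv backup/data
```

Snapshots are copy-on-write, so they cost almost nothing while the files rarely change; tools like sanoid/syncoid automate the snapshot/prune/send cycle, including daily/weekly/monthly retention, if hand-rolled cron lines get tedious.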

Thanks.

10

I'm looking for a self-hosted calendar that supports multiple users, runs in Docker, and is easy to integrate with Home Assistant and a phone app. Does anything like this exist, or should I lower my expectations?

11
12

Hey all, I'm relatively new to the selfhosting game. The most I've done to date is own and maintain a Plex server for the last few years, but that mainly handles all of the networking for me, so I'd say it doesn't really count.

Recently, due in part to the ongoing controversy over Audible's royalty and streaming model, I've decided to try my hand at setting up an Audiobookshelf server of my own. For reference, I'm running on a machine with Ubuntu 20.04. I've managed to get Audiobookshelf and nginx running through Docker and accessible via localhost:port, but now I feel like I'm missing some key understanding.

I assume I need to have a domain name through a DNS service like Cloudflare in order to make use of it, but I'm not sure what to do after that, and the documentation I have read doesn't outright answer my questions.

Once I have my DNS set up, how do I associate it with my server or point it through the nginx reverse proxy?

I know I'll have to set up a .conf file for nginx at some point, and I found the example .conf in the Audiobookshelf documentation, but I just feel like I'm missing the step between getting a domain name and establishing the reverse proxy.
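To sketch that missing step (domain, port and paths here are examples, following the shape of the Audiobookshelf docs rather than your exact config): the DNS side is just an A record pointing abs.example.com at your public IP plus router port-forwards for 80/443, and nginx's server_name is what ties the domain to the right proxy block:

```bash
# Example nginx site: route abs.example.com to the Audiobookshelf container.
cat > /etc/nginx/sites-available/audiobookshelf.conf <<'EOF'
server {
    listen 80;
    server_name abs.example.com;

    location / {
        proxy_pass http://127.0.0.1:13378;        # host port mapped to the container
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;   # Audiobookshelf uses websockets
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
    }
}
EOF
ln -s /etc/nginx/sites-available/audiobookshelf.conf /etc/nginx/sites-enabled/
nginx -t && systemctl reload nginx
```

Once the name resolves and proxies over plain HTTP, certbot (or similar) can add the TLS half.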

Any help would be greatly appreciated, thanks!

13

Links are almost always base64-encoded now, and the online URL decoders always produce garbage. I was wondering if there is a project out there that would allow me to self-host this type of tool?

I'd probably network this container through gluetun because, yanno, privacy.

Edit to add: It doesn't have to be specifically base64-focused. Any link decoder that I can use in a privacy-respecting way would be welcome.

Edit 2: See if your solution will decode this link (the one in the image): https://link.sfchronicle.com/external/41488169.38548/aHR0cHM6Ly93d3cuaG90ZG9nYmlsbHMuY29tL2hhbWJ1cmdlci1tb2xkcy9idXJnZXItZG9nLW1vbGQ_c2lkPTY4MTNkMTljYzM0ZWJjZTE4NDA1ZGVjYSZzcz1QJnN0X3JpZD1udWxsJnV0bV9zb3VyY2U9bmV3c2xldHRlciZ1dG1fbWVkaXVtPWVtYWlsJnV0bV90ZXJtPWJyaWVmaW5nJnV0bV9jYW1wYWlnbj1zZmNfYml0ZWN1cmlvdXM/6813d19cc34ebce18405decaB7ef84e41 (it should decode to this page: https://www.hotdogbills.com/hamburger-molds)
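For what it's worth, this particular kind of link may not need a hosted tool at all: the long path segment is URL-safe base64, which any shell can decode locally. A quick sketch using the first chunk of the segment from the example link:

```bash
# URL-safe base64 swaps '+/' for '-_'; translate back, then decode.
# (If base64 complains about input length, pad with '=' to a multiple of 4.)
seg='aHR0cHM6Ly93d3cuaG90ZG9nYmlsbHMuY29t'
printf '%s' "$seg" | tr '_-' '/+' | base64 -d; echo
# prints: https://www.hotdogbills.com
```

Online decoders tend to choke because the tracking URL mixes the base64 payload with extra path components; cutting out just the encoded segment first makes it decode cleanly.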

14

This is the post on reddit: https://www.reddit.com/r/selfhosted/comments/1myldh3/i_built_youtubarr_the_sonarr_for_youtube/

Looks cool. I have been wanting something like this for a while.

15

So with the most recent Spotify nonsense I've finally had enough and I'm going back to MP3s. Unfortunately, I haven't had to do this since Bush left office, and I do not have the free time to manually sort and document every single file I have. I've been using MusicBrainz Picard, but I don't know if the learning curve is steeper than I have traction for or if it's just really picky.

Anyone got suggestions on how to better manage all my jams? I'm trying to make it as user-friendly as I can for the family, and so far I'm not winning lol

16

I realize that Proxmox suggests not to run rclone inside an LXC because it might cause problems backing up/snapshotting that container, but that’s not a concern of mine at the moment.

The issue I am running into is the following:

  1. Used Proxmox helper script to create Plex LXC, it worked flawlessly.

  2. Installed Docker inside that LXC and pulled Zurg. Yes, I know it's not recommended, but I am not spinning up a whole Docker VM for just this service.

  3. I then use Zurg to mount my box to /mnt/zurg and I start seeing its contents shortly after.

Now the problem is that I have only 8 GB assigned to the Plex LXC, which should be more than plenty. What's happening is that the container reports as full because of the rclone mount (~1 TB in size), which is preventing write operations in the LXC.

This wasn't an issue when I hosted those same two services on my ol' trusty RPi 3B, as it didn't account for the size of the mount when doing df, but for some reason the Plex LXC does.

Has anyone run across this before? What’s a good solution or workaround?

Thank you

17

Been running my own podcast on Castopod for the last year and it has been quite the learning experience. First, I realized that part of running the show was making it available through mainstream platforms, but I started with a basic RSS feed and Fediverse integration (for Mastodon users and such).

OP3 analytics easily lets anyone get a basic understanding of their audience. Added basic Podcasting 2.0 support, which also enabled IPFS support, though I still haven't dug too deep into that (beyond knowing it works). Added transcriptions with local-only Whisper, and chapter support with ChapterTool, because people expect these in Podcasting 2.0 clients.

Set up a chat on matrix.org and got a friend to help with a Draupnir moderation bot (which we were also testing for a community open source project chat). Decided to migrate my domain to a new registrar that supports Let's Encrypt certificates natively (I had been maintaining them via a cron command unofficially, otherwise unsupported by the old registrar). The transition was smooth, no problems.

Created a dedicated podcast email account for people to contact the show and migrated my email SMTP/IMAP to a dedicated service I can trust (and can use as a relay once I eventually begin selfhosting the email server as well). Added a Flarum forum, since somewhere is needed for longer-form conversations. Plugged in Uptime Kuma for monitoring and added all of my services to FreshRSS in order to keep tabs on all of my work. These days I'm wishing I'd simply used a wiki, or even a collaborative editor like HedgeDoc. Found LimeSurvey a bit much for my needs, but Nextcloud Forms has worked just fine for people to send in anonymous feedback.

Things are fairly quiet in terms of the show, but working out just fine. No doubt I'm forgetting tons of steps in regard to all I've learned, but it has been a fruitful year. I've been using a flat VPN network approach to connect to any servers and homelab applications being tested. Looking forward to more progress this next year. You can check out the show here if you are curious.

18

This list is an absolute gem for finding the trending, state-of-the-art open source programs. I have found so many cool open source projects that I feel addicted to browsing for more.

19

First off, I think I have the right basic plan but am open to exploring some others, especially if I have some basic assumptions wrong.

GOAL: Allow myself and family to access my various hosted apps without needing to teach VPN or Tailscale.

What I have: Proxmox with a Home Assistant VM and some additional LXC containers. I do not intend to use Docker; not interested unless I have to (actually, isn't HAOS using Docker? but set that one aside). IT-Tools is one nice-to-have I want to reach from work, and it works as a good test case. Other LXCs include one for Caddy and one for Pi-hole. Also, I registered a personal domain through Cloudflare.

PLAN: Get Caddy working internally, with Pi-hole pointing my domain at the internal IP of Caddy, and Cloudflare Tunnels and DNS to reach services from outside, going Cloudflare -> Caddy -> service.

I don't know why, but nothing I've tried seems to work, and I can't find a good Caddy how-to; maybe the relatively recent update of Caddy to v2 changed things? I don't know. But I really just need some help walking through some of this Caddy setup.

I am going to make another attempt today and will reply with progress. But right now the Caddyfile is basically back to its original out-of-the-box state. I did add the Cloudflare module and at one point confirmed it was loaded. I assume that module is needed to get Caddy to talk to Cloudflare DNS to confirm I own the domain for creating the certs, but I couldn't figure out what to do with the token; one guide had me adding something to the Caddyfile, but that just created an error.
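On the token question, for reference: with the caddy-dns/cloudflare module compiled in, the token goes inside a tls block in the Caddyfile, usually via an environment variable rather than pasted inline. A hedged sketch (domain, IP and service are examples):

```bash
# Example Caddyfile: prove domain ownership via Cloudflare DNS records
# instead of an inbound HTTP connection, so certs work for LAN-only hosts.
cat > /etc/caddy/Caddyfile <<'EOF'
it-tools.example.com {
    tls {
        dns cloudflare {env.CF_API_TOKEN}
    }
    reverse_proxy 192.168.1.20:80
}
EOF
# Expose the token to the Caddy service, e.g. via a systemd override:
#   systemctl edit caddy   ->   [Service]
#                               Environment=CF_API_TOKEN=your-token-here
systemctl reload caddy
```

The token needs the Zone / DNS / Edit permission for your zone; with certs issued this way, Pi-hole only has to resolve the hostname to Caddy's internal IP and nothing needs to be reachable from outside.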

So any comments on the plan or insightful nuggets would be appreciated.

20

So I'm not really a selfhoster per se, but I run two NAS units at home and I am working my way toward it. I don't really need my stuff open to the internet, so I just use it on my LAN.

However I do have a lot of data, and I'm constantly backing things up. My question is - I have the following setup:

  1. Computer hard drives
  2. These backup to my NAS
  3. I have a separate HDD in an enclosure that I plug into the NAS directly and copy my data onto every few months to put in my safe.
  4. Some cloud storage for very important smaller stuff (pictures)

My main question is: what is the best way to copy new data from my NAS (Synology) to my "cold storage" drive without recopying everything every time? Is there a way to detect the files that exist on both and copy only the new ones? I've always had this question when doing backups, and it always seems overly complex.
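For what it's worth, this is the problem rsync was built for: it compares source and destination and transfers only new or changed files. A minimal sketch (paths are examples; Synology typically mounts USB drives under /volumeUSB1/usbshare):

```bash
# Run over SSH on the NAS with the cold-storage drive plugged in.
# -a preserves attributes; files already identical on the drive are skipped.
rsync -a --progress /volume1/data/ /volumeUSB1/usbshare/data/
# Add --delete if the cold copy should also drop files removed from the NAS.
```

Synology's own Hyper Backup and USB Copy packages wrap the same idea with scheduling and versioning, if a GUI is preferable to a shell.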

You guys are very knowledgeable so I'm sure someone has dealt with this!

21
22

I'm slowly working my way through deploying Pangolin on a VPS to securely expose some services publicly. I came to wonder a bit about how to approach this VPS security-wise. My homelab runs as a Nomad/Consul/Vault cluster, and it would have been nice to have the VPS as a client node as well, allowing me to spin up and manage the Pangolin components with Nomad jobs. However, this means the VPS would need connectivity to the cluster, essentially a WireGuard connection back to my LAN, and this got me thinking.

Should I just forego the entire cluster client idea here and instead see the Pangolin VPS as a completely isolated thing, or is there some secure way to tighten down the connection to my local network with Wireguard? I could for instance restrict the AllowedIPs for the VPS to only be able to reach some specific host for the clustering.
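For what it's worth, that AllowedIPs restriction does real work: WireGuard on the VPS will neither route to nor accept packets for anything outside that list. A hedged sketch of the VPS-side config (keys, addresses and the endpoint are placeholders):

```bash
cat > /etc/wireguard/wg0.conf <<'EOF'
[Interface]
PrivateKey = <vps-private-key>
Address = 10.10.0.2/32

[Peer]
PublicKey = <homelab-public-key>
Endpoint = home.example.com:51820
# Only the single cluster host, not the whole 192.168.1.0/24:
AllowedIPs = 192.168.1.30/32
PersistentKeepalive = 25
EOF
wg-quick up wg0
```

Since AllowedIPs is enforced by the VPS's own kernel, it is worth pairing with firewall rules on the homelab end of the tunnel too, so a compromised VPS still can't reach anything beyond that one host and port.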

Anyone done anything similar and care to share?

23

Trading the Pi(geon) for power

24
25

Hello all, first time with Nextcloud, and I must be missing something here. I have Nextcloud running in a Docker container, hosting my calendar. I set it up to send email reminders for events. The test email sends fine. I got a cron job set up on the host, and the notifications worked for a few days, then stopped. The test email still sends just fine. So I looked in the admin panel and saw that the emails stopped when the background jobs stopped, which for some reason was a few days ago. No idea why, because the cron job was still firing and not giving any errors, but whatever.

So I fixed some stuff and got a host cron job running the background tasks. Now that part is working again every five minutes as expected, but the email notifications for calendar events did not start sending again. The test email still sends just fine.
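For comparison, the host-side cron line typically used with the official Nextcloud Docker image looks like this (the container name is an example). If the admin panel shows background jobs running on schedule but reminders still don't go out, the calendar reminder job (part of the dav app) and each calendar's notification settings are the next things worth checking:

```bash
# Append to /etc/crontab: run Nextcloud's background jobs every 5 minutes
# as the web-server user inside the container.
echo '*/5 * * * * root docker exec -u www-data nextcloud php -f /var/www/html/cron.php' >> /etc/crontab
```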

What am I missing here?

Edit: Well, I have no idea what the problem was; I didn't solve it. I deleted the Docker container and started from scratch. Fortunately it wasn't that much of a pain because I hadn't been using it for very long. With the first installation I found a few things not to do in the second, and so far the reinstall is working beautifully. Lots of things to tweak now.
