this post was submitted on 25 Apr 2026
24 points (96.2% liked)

Selfhosted

After trying out Cosmos Cloud (and it not working for the clients), I'm back at square one again. I was going to install Docker Desktop, but I see it warns that it runs on a VM. Will this be a problem when trying to remote connect to certain services, like Mealie or Jellyfin?

top 27 comments
[–] GreenKnight23@lemmy.world 0 points 50 minutes ago

docker runs native on Linux. you run docker desktop on windows and Mac because they don't have Linux runtimes that can run docker.

docker desktop would be useless for Linux.

learn the command line scrub. better now than never.

[–] folekaule@lemmy.world 6 points 2 hours ago

I don't see anyone addressing the question from the post: whether it is a problem that Docker Desktop on Linux runs in a separate VM.

The page says:

Docker Desktop on Linux runs a Virtual Machine (VM) which creates and uses a custom docker context, desktop-linux, on startup. This means images and containers deployed on the Linux Docker Engine (before installation) are not available in Docker Desktop for Linux.

To expand on what that means: If you install Docker as usual (the CLI) on Linux, it runs as a process (running as root). The process will isolate the container processes from the rest of the system using Linux kernel features, but you're really just running processes on your host kernel that have limited access to the file system, network, etc.

When you run it in a separate VM, which is how Docker Desktop also runs on Windows and macOS, you are running it in a separate Linux instance (VM) that cannot communicate with the outside by default. So, if you're running Docker on the host computer and inside a VM, those are separate Docker installs and can't talk to each other. That is what the warning is about.

You can absolutely expose the VM to the outside, the same as if you ran it on Windows. Docker will let you expose those ports and it handles the messy bits of the networking for you. You just have to tell Docker when you run the container (on the command line or in a docker compose file) which ports to expose. By default, nothing is exposed. To do that you can use the -p option. For example:

docker run --rm -it -p 8080:80 httpd

Will run an instance of Apache HTTPd and expose it on port 8080. The container itself listens to port 80, but on the outside it's 8080. If you then hit http://localhost:8080/ you should see "It works!".

A note on Docker networking: from within the container, localhost refers to the container itself, not the host. So if you try to run e.g. curl http://localhost:8080/ inside the container, your connection would be refused.

Docker Desktop is often frowned upon because you have to pay to use it in a commercial setting (there was some backlash because it used to be free), it's quite expensive, and they require a minimum license count for enterprise licenses (I know because we bought one at work). So, I suggest exploring free alternatives like Podman Desktop. However, note that they do not always have feature parity with Docker Desktop.

I like Docker Desktop because it gives me a nice dashboard to see all my containers, resource usage, etc. I would not have requested it for work, though, if it weren't for my IDE (Visual Studio) requiring it at the time (they have added Podman support since).

Final note: I recommend just diving into using Docker from the command line and learn that. Docker complicates networking a little bit because it adds more layers, but understanding Docker is very useful if you're into self hosting or software development.

[–] GottaHaveFaith@fedia.io 1 points 2 hours ago

Docker Desktop is just a GUI for managing Docker; is there a specific reason you need it? Anyway, you can try Portainer.

[–] reluctant_squidd@lemmy.ca 24 points 6 hours ago (1 children)
[–] anonfopyapper@lemmy.world 0 points 6 hours ago (1 children)

No, it's not.

While Podman is fully OCI image compliant, its network stack is different. And Podman runs as a user, not as root.

Not to mention that Podman is a CLI, while OP asked for a GUI.

[–] JustJack23@slrpnk.net 23 points 6 hours ago (2 children)
[–] irmadlad@lemmy.world 4 points 3 hours ago (2 children)

That's interesting. I didn't know Podman had a Windows environment desktop app.

[–] devfuuu@lemmy.world 4 points 3 hours ago

Also works well on macOS. Been using it instead of the lima/colima stack for a few weeks now.

[–] JustJack23@slrpnk.net 3 points 3 hours ago

Tbf Idk how well it works on windows, but on Linux and Mac I have had no problems with it.

[–] Hezaethos@piefed.zip 7 points 5 hours ago

Ok, this is interesting 🙂 they also have a learning center thing it says, so maybe I can take the classes/lessons/tutorials they mention.

I just really hope I then figure out the remote connection stuff. That's the one I'm most paranoid about and wanting to figure out

[–] foggy@lemmy.world 11 points 6 hours ago (1 children)

Docker containers are isolated by default... nothing on the outside can reach them unless you say so. You open the door with a port mapping.

In your compose file:

ports:
  - '8096:8096'

Read this as HOST:CONTAINER. It says: "when something hits my server on port 8096, forward it to port 8096 inside the Jellyfin container."

So once it's running, you go to http://your-server-ip:8096/ in a browser and you're talking to Jellyfin. The container is still isolated; you've just opened one specific door to access it.
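To put that ports line in context, a minimal compose file for Jellyfin might look something like this. This is a sketch: the volume paths are placeholders, and you should check Jellyfin's own documentation for its recommended setup (jellyfin/jellyfin is the image name on Docker Hub).

```yaml
# Hypothetical minimal compose file; volume paths are placeholders
services:
  jellyfin:
    image: jellyfin/jellyfin
    ports:
      - "8096:8096"   # HOST:CONTAINER
    volumes:
      - ./config:/config   # Jellyfin's own settings
      - ./media:/media     # your movies/shows, mounted read-only if you prefer
    restart: unless-stopped
```

Then `docker compose up -d` in that directory brings it up in the background.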

[–] Hezaethos@piefed.zip 3 points 6 hours ago* (last edited 5 hours ago) (3 children)

Wouldn't this be insecure? Is that what the reverse proxy thing is for - to keep it safe?

Also, is it possible to make it so the name is simpler? I bought a domain name just in case.

Is there a place I can learn about ports and networking more? Something like Khan Academy but for networking?

[–] foggy@lemmy.world 22 points 5 hours ago* (last edited 5 hours ago)

To your first questions, we'll need to untangle a few thoughts wrapped into them.

Right now, http://your-server-ip:8096/ is plain HTTP. On your home network that's usually fine. Over the internet, you'd want HTTPS so passwords and stream data aren't sent in the clear.

Just opening a port on Docker only exposes it to your local network. It's not on the public internet unless you also forward the port on your router. So by default, only devices on your network can reach it.

From there, is it secure? That's on Jellyfin. Using strong passwords and trusting that Jellyfin itself is secure is about as good as you can do here.

The reverse proxy is where you handle the HTTPS, and where you go from domain.com:8096 to jellyfin.domain.com.

If using Caddy, for example, that looks like this somewhere in your Caddyfile:

jellyfin.yourdomain.com {
    reverse_proxy localhost:8096
}

Your reverse proxy sits in front of your containers and routes traffic by hostname. You tell it: "when someone visits jellyfin.yourdomain.com, send them to the Jellyfin container on port 8096."

The other pieces you'll need:

  • DNS pointing jellyfin.yourdomain.com at your server's IP (public IP if accessing from outside, local IP if just at home).

  • A TLS certificate so HTTPS works. Let's Encrypt is free, and Caddy gets certificates automatically with zero config (Nginx and Traefik can too, just more setup).

  • Router port forwarding on 80 and 443 to make it accessible outside the house.
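For the DNS piece, that's just an A record at your registrar or DNS host. In zone-file notation it would look something like this (the hostname and IP here are placeholders; 203.0.113.x is a reserved documentation range, so substitute your real public IP):

```
jellyfin.yourdomain.com.  300  IN  A  203.0.113.10
```

Most registrars give you a web form for this instead of a raw zone file, but the fields (name, TTL, type, value) are the same.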

The last part there is the actual risky part. When you put a service on the open internet, bots from everywhere will find it instantly and begin running scripts to try to find a way in. With only this setup, again, the weak point is Jellyfin. When an exploit drops, you need to update ASAP to stay on top of your security.

There are tons of ways to make this more secure. The easiest way would be Tailscale (or plain WireGuard). You basically install Tailscale on every device that will connect to your server instead of opening your router's ports. It keeps your device off the open internet but allows all devices on your Tailscale VPN to connect to it with the domain you set up.

You can achieve something similar using Cloudflare Tunnels. Your server runs a daemon that reaches out to Cloudflare and is served to the internet that way; friends access via a normal URL, no extra download required.

Lastly, the best but most cost-prohibitive option is to do it through a VPS. A virtual private server does what Cloudflare does, basically, but you control it. If the money is not prohibitive, I strongly recommend this. When Cloudflare goes down again (and it will go down again), you won't be beholden to their infrastructure being online to access your server.

Happy to clarify anything here. I wrote this response in 3 parts and rereading it it feels a little disjointed lol

[–] foggy@lemmy.world 6 points 4 hours ago (1 children)

Ah sneaky. You added a question.

The answer to "is there somewhere you can learn about this?" is yes and no. You will ultimately learn by doing for this stuff.

CompTIA Network+ study guides will have all this knowledge and more.

If you're all in, Hack The Box is a freemium platform (think codecademy but less hand-holdy) that isn't designed to teach you this, but will absolutely teach you this in the process. It is a platform for offensive and defensive cybersecurity. These things are covered as afterthoughts in bigger pictures, but it will (at least for folks who learn by doing) force you to familiarize yourself with it implicitly.

Otherwise, as far as IPs and ports and containers go, I can tell you all you need to know, because it ain't much. It feels confusing/overwhelming at first, but every individual slice of this stuff is pretty simple. It's just an absurd amount of knowledge. Just take baby steps and learn what you need to know to get done what you seek.

[–] foggy@lemmy.world 7 points 4 hours ago (1 children)

I didn't have too much coffee, you had too much coffee.

IP address: a machine's address on a network. Like a street address.

Port: a numbered door on that machine. The IP gets you to the building; the port gets you to the right room. Different programs listen on different ports.

DNS: the phonebook. Maps friendly names like example.com to IPs so you don't have to memorize numbers.

Router: the doorman between your home and the internet. Stuff inside can reach out; nothing gets in unless you tell it to.

Container: a sandboxed mini-computer running on your machine. Isolated by default. You map a host port to a container port to let traffic in.

Reverse proxy: a switchboard. One program that takes all incoming traffic and routes it to the right service based on the hostname.

[–] foggy@lemmy.world 6 points 4 hours ago (2 children)

Welcome to foggy's IP, ports, and containers lesson, take a shot of espresso, we're going in!

special IP addresses:

127.0.0.1 - "This same machine." Talking to yourself. Also written as localhost.

192.168.x.x - private home network range. What your router hands out to your devices. Not routable on the internet.

10.x.x.x - another private range. Bigger, used by businesses and some routers. Same idea as 192.168.

172.16.x.x to 172.31.x.x - the third private range. Docker likes this one for its internal container networks.

0.0.0.0 - "all interfaces" or "any address." When a service binds to this, it means "listen on every network this machine is connected to." Also sometimes means "no specific address" depending on context.

255.255.255.255 - broadcast. "Everyone on this network." Rarely something you'll type, but you'll see it.

169.254.x.x - link-local. What your machine assigns itself when it asked the router for a DHCP address but didn't get one. If you see this, something's wrong with your network.
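If you want to poke at these ranges yourself, Python's standard ipaddress module already knows which blocks are private, loopback, and link-local:

```python
import ipaddress

# Classify a few of the addresses discussed above
for addr in ["127.0.0.1", "192.168.1.10", "10.0.0.5",
             "172.17.0.2", "169.254.1.1", "8.8.8.8"]:
    ip = ipaddress.ip_address(addr)
    print(f"{addr:>13}  private={ip.is_private}  "
          f"loopback={ip.is_loopback}  link_local={ip.is_link_local}")
# 8.8.8.8 is the only one that is publicly routable (is_private == False)
```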


Port talk:

Ports 0-1023: well-known ports. Reserved for standard services. On Linux you need root to bind to these. The ones you'll actually see:

  • 22: SSH (remote terminal access)
  • 53: DNS
  • 80: HTTP (unencrypted web)
  • 443: HTTPS (encrypted web)
  • 25, 465, 587: email sending (SMTP and variants)
  • 143, 993: email reading (IMAP)

Ports 1024-49151: registered ports. Assigned to specific apps by convention. A sampling:

  • 3306: MySQL/MariaDB
  • 5432: PostgreSQL
  • 6379: Redis
  • 8080: common "alternate HTTP" port, used when 80 is taken
  • 8096: Jellyfin
  • 32400: Plex
  • 27017: MongoDB

Nothing enforces these: they're just conventions. You could run Jellyfin on port 7777 if you wanted.

Ports 49152–65535: ephemeral ports. A neato part:

When you connect to a server's port 443, for example, your machine also needs a port on your end for the server to send replies back to. Your OS grabs a random unused port from this high range, uses it for that one connection, and releases it when done. Thus, 'ephemeral'.
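You can watch the OS hand out a port like this from Python: binding a socket to port 0 means "kernel, pick any free port for me" (the exact range it draws from depends on your OS settings):

```python
import socket

# Port 0 asks the kernel to pick any free local port
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind(("127.0.0.1", 0))
host, port = s.getsockname()
print(f"kernel assigned port {port}")  # some unprivileged port, typically high
s.close()
```

Run it a few times and you'll get a different number each time.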


Containers? Sure:

A container is a program packaged in a bubble. It's basically a VM without the machine part. Let's say you wanna run Jellyfin AND Plex. Let's say tomorrow there's a brand new video file format and Jellyfin supports it and Plex doesn't. Jellyfin needs to use some new version of ffmpeg that Plex cannot use. The solution? Containers.

Each program is containerized with what it needs to run happily. Nothing more. Your machine does the rest.

[–] jupiter@mastodon.gamedev.place 1 points 6 minutes ago

@foggy I never thought ephemeral ports were still a thing. How do I increase this range, e.g. on a machine expecting to make a lot of connections?

[–] Hezaethos@piefed.zip 1 points 1 hour ago

You should be a teacher. You made me go from despising Networking to interested in learning it more

[–] JustJack23@slrpnk.net 4 points 5 hours ago

https://training.linuxfoundation.org/networking/

It seems they don't have anything on networking exactly, but maybe some of their stuff on container orchestration can be helpful https://training.linuxfoundation.org/full-catalog/?_sft_product_type=training&_sft_topic_area=cloud-containers

About the domains and reverse proxies: if you are testing or on a local network, a reverse proxy or domain is not needed. If you want to access it outside of your network, they make more sense.

But also for my services I use https://tailscale.com/ and that way I avoid dealing with domain and reverse proxies by instead just connecting to my local network remotely.

[–] Decronym@lemmy.decronym.xyz 5 points 5 hours ago* (last edited 5 minutes ago)

Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I've seen in this thread:

Fewer Letters More Letters
DHCP Dynamic Host Configuration Protocol, automates assignment of IPs when connecting to a network
DNS Domain Name Service/System
HTTP Hypertext Transfer Protocol, the Web
HTTPS HTTP over SSL
IMAP Internet Message Access Protocol for email
IP Internet Protocol
Plex Brand of media server package
SMTP Simple Mail Transfer Protocol
SSH Secure Shell for remote terminal access
SSL Secure Sockets Layer, for transparent encryption
TLS Transport Layer Security, supersedes SSL
VPN Virtual Private Network
VPS Virtual Private Server (opposed to shared hosting)

[Thread #254 for this comm, first seen 25th Apr 2026, 11:10] [FAQ] [Full list] [Contact] [Source code]

[–] vk6flab@lemmy.radio 6 points 6 hours ago (2 children)

Why run Docker Desktop when it's installable as a cli service?

What are you actually trying to achieve?

[–] Hezaethos@piefed.zip 4 points 6 hours ago (1 children)

ease of use.

I'm a noob at networking.

[–] osanna@lemmy.vg 3 points 5 hours ago (1 children)

there's only one way to get better at it. by doing it.

[–] twinnie@feddit.uk 3 points 5 hours ago

Or if it’s not something that’s valuable to you just do it the easy way.

[–] djdarren@piefed.social 3 points 6 hours ago (1 children)

As a Mac user who's migrated over to Linux over the past year or so, I've got an idea of where OP is coming from.

Docker on macOS is accessed via a Desktop GUI, so you can easily see what you have installed, how it's running, etc... So when I shifted over to Linux, I was thrown off by there being no such tool. I wasn't used to using a terminal to do everything, and grumbled quite a lot about there being no Docker Desktop GUI, given how many self-hostable services run through Docker.

I've since gotten used to it, but it really is quite jarring.

[–] Mordikan@kbin.earth 1 points 41 minutes ago

There are a lot of Docker GUI tools out there. There just isn't Docker Desktop. Here are a few:

  1. Portainer
  2. Podman Desktop
  3. Yacht (pretty sure this is unmaintained currently but still should work)
[–] slazer2au@lemmy.world 5 points 6 hours ago

You can run a Portainer container to manage your containers