this post was submitted on 07 Dec 2025
101 points (93.2% liked)

Docker security (lemmy.zip)
submitted 1 day ago* (last edited 1 day ago) by jobbies@lemmy.zip to c/selfhosted@lemmy.world
 

You're probably already aware of this, but if you run Docker on Linux and use ufw or firewalld, it will bypass all your firewall rules. It doesn't matter what your defaults are or how strict you are about opening ports; Docker has free rein to send and receive traffic on the host as it pleases.

If you are good at manipulating iptables, there is a way around this, but it also affects outgoing traffic and could interfere with the bridge. Unless you're a pointy head with a fetish for iptables this will be a world of pain, so it isn't really a solution.
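
For anyone who does want the iptables route: Docker's documentation points at the DOCKER-USER chain, which Docker evaluates before its own rules and never touches. A minimal sketch, assuming eth0 is the external interface and 192.168.1.0/24 is the trusted LAN (both placeholders to adjust for your network):

# rules are evaluated top-down and -I inserts at the top,
# so add the DROP first and the conntrack ACCEPT second
iptables -I DOCKER-USER -i eth0 ! -s 192.168.1.0/24 -j DROP
iptables -I DOCKER-USER -i eth0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT

That keeps published container ports reachable only from the LAN while still letting replies to outbound container traffic back in.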

There is a tool called ufw-docker that mitigates this by manipulating iptables for you. I was happy with this as a solution and it used to work well on my rig, but for some unknown reason it's no longer working and Docker is back to doing its own thing.

Am I missing an obvious solution here?

It seems odd for a tool as popular as Docker, one that is also used in enterprise environments, not to have a pain-free way around this.

top 49 comments
[–] BCsven@lemmy.ca 1 points 4 hours ago

Not sure about the distro being used, but openSUSE puts Docker interfaces into their own docker zone, which has its own ports and rules separate from the zone assigned to the Ethernet interface. For me it was the opposite issue: I couldn't reach my Docker containers from my LAN, only from the local machine, because the Ethernet interface was in an internal zone and Docker was in its own zone. I'm not a super-skilled networking dude, so I just turned on forwarding and masquerading so the incoming LAN zone would forward to the docker zone and pretend to be the local machine connecting rather than a LAN or remote IP. I guess if you moved your Docker interfaces to the public zone you could get into trouble.

[–] MangoPenguin@lemmy.blahaj.zone 10 points 11 hours ago* (last edited 11 hours ago) (1 children)

It doesn't actually bypass the firewall.

When you tell Docker to publish a port on 0.0.0.0, it's just doing what you ask of it.

[–] mlg@lemmy.world 14 points 18 hours ago* (last edited 18 hours ago)

How I sleep knowing Fedora + podman actually uses safe firewalld zones out of the box instead of expecting the user to hack around with the clown show that is ufw.

I could be wrong here but I feel like the answer is in the docs itself:

If you are running Docker with the iptables or ip6tables options set to true, and firewalld is enabled on your system, in addition to its usual iptables or nftables rules, Docker creates a firewalld zone called docker, with target ACCEPT.

All bridge network interfaces created by Docker (for example, docker0) are inserted into the docker zone.

Docker also creates a forwarding policy called docker-forwarding that allows forwarding from ANY zone to the docker zone.

Modify the zone to your security needs? Or does Docker reset the zone rules on every startup? If this is the same as podman, the docker zone should actually accept traffic forwarded from your public zone (the one your physical NIC is in), which would mean you don't have to do anything, since the public zone's default is to DROP.
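
If the zone approach works, a minimal sketch of poking at it with firewall-cmd might look like this (assuming the zone and policy names are the ones from the docs above, and with the caveat that nobody here has confirmed whether Docker rewrites them on restart):

# inspect what Docker created
firewall-cmd --info-zone=docker
firewall-cmd --info-policy=docker-forwarding
# example tightening: change the zone target away from ACCEPT, then reload;
# this may well break published ports, so test it before trusting it
firewall-cmd --permanent --zone=docker --set-target=default
firewall-cmd --reload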

[–] irmadlad@lemmy.world 2 points 13 hours ago* (last edited 12 hours ago)

So, this discussion has intrigued me, and some good points have been brought up by seemingly knowledgeable network engineers, which I am not. If I may, let me introduce you guys to my network to see if there are points I can improve on.

For simplicity, the network diagram would be: modem ----> standalone pfSense firewall (with a Tailscale overlay, running Suricata, pfBlockerNG, VLANs to segment server traffic from normal traffic, a very robust rule set, and ntopng for traffic analysis) ----> server & devices. The server is piped through Cloudflare Tunnel/Zero Trust. On the server I run UFW, fail2ban with a hair trigger, and CrowdSec. Also, since I am the only user, I lock everything down in hosts.allow/hosts.deny and use SSH keys. Users cause complexities, and complexities turn into issues. All devices run a VPN. I do run Docker in lieu of Podman. The server has been hardened through various means, to an extent in line with Lynis.

I've been told that this is overengineered, but it seems to work just jammy. Knock on wood, I've never had a breach on my local network, though there is always the possibility. A long time ago, when I stood my first server up on a VPS, it got hacked almost immediately. So I dropped back and did some studying, but I am no network engineer.

Anyways, for the experts here, my question is: what would you do to improve, harden, rip out, redo, add, etc.?

ETA: The server also has a Tailscale overlay.

[–] melmi@lemmy.blahaj.zone 8 points 21 hours ago* (last edited 21 hours ago)

If there's a port you want accessible from the host/other containers but not beyond the host, consider using the expose directive instead of ports. As an added bonus, you don't need to come up with arbitrary host ports for every container that uses the same internal port.

IMO it's more intuitive to connect to a service via container_name:443 instead of localhost:8443
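
A minimal compose sketch of that (service name and port are just examples):

services:
  webapp:
    image: nginx:latest
    expose:
      - "80"  # reachable from other containers on the same network as webapp:80, never published on the host

Other containers on the same network connect to webapp:80; nothing gets bound on the host's interfaces.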

[–] fizzle@quokk.au 5 points 19 hours ago

I basically just avoid exposing ports from containers unless I really do want them exposed on the host?

Most services go through my reverse proxy, traefik.

Things like databases don't publish ports on the host because they're only accessed internally, using their container name.

[–] mhzawadi@lemmy.horwood.cloud 34 points 1 day ago (4 children)

Docker by default will bind published ports to all IPs, but you can override this by setting an IP on the published port, so that a local-only service is only accessible on 127.0.0.1.

I do this with things that should go down my VPN only

https://docs.docker.com/reference/compose-file/services/#ports
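
If you want that behaviour for every published port rather than per service, the daemon also takes a default binding address in /etc/docker/daemon.json (a small sketch; the "ip" key is documented, but check it against your Docker version before relying on it):

{
  "ip": "127.0.0.1"
}

With that set, a plain "8080:80" mapping binds to 127.0.0.1:8080 unless a compose file overrides the IP explicitly.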

[–] dan@upvote.au 25 points 1 day ago* (last edited 1 day ago) (1 children)

you can override this by setting an IP on the published port, so that a local-only service is only accessible on 127.0.0.1

Also, if the Docker container only has to be accessed from another Docker container, you don't need to expose a port at all. Docker containers can reach other Docker containers in the same compose stack by hostname.

[–] tofu@lemmy.nocturnal.garden 10 points 22 hours ago

Also works cross stack if you assign the containers the same network.
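
A rough sketch of the cross-stack version, assuming a network created once with docker network create shared_proxy_net (all names here are placeholders):

# in each stack's compose.yml
services:
  whoami:
    image: containous/whoami:latest
    networks:
      - shared_proxy_net

networks:
  shared_proxy_net:
    external: true

Any other stack that declares the same external network can then reach this container as whoami:80.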

[–] jobbies@lemmy.zip 3 points 1 day ago (2 children)

That might do the trick. Would you mind giving an example?

[–] tux7350@lemmy.world 12 points 1 day ago (1 children)

Something like this. This is a compose.yml that only allows connections from the local host, on port 8080, to reach container port 80.

services:
  webapp:
    image: nginx:latest
    container_name: local_nginx
    ports:
      - "127.0.0.1:8080:80"
[–] jobbies@lemmy.zip 1 points 1 day ago (1 children)

Ahh. Then route it through the firewall/pass it to a reverse proxy?

[–] tux7350@lemmy.world 6 points 1 day ago (1 children)

Well, if your reverse proxy is also inside a container, you don't need to publish the port at all. As long as the containers are in the same Docker network, they can communicate.

If your reverse proxy is not inside a Docker container, then yes, this method would work to prevent outside clients from connecting directly to a Docker container.

[–] jobbies@lemmy.zip 1 points 1 day ago (1 children)

Thanks, that's given me something to think about.

[–] tux7350@lemmy.world 5 points 1 day ago

Of course, feel free to DM me if you have questions.

This is a common setup: have a firewall block all traffic, then use Docker to punch a hole through the firewall and expose only 443 to the reverse proxy. Now any container can be routed through the reverse proxy, as long as the container is on the same Docker network.

If you define no network, the containers are put into a default bridge network; use docker inspect to see the container IPs.
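
For example, this prints a container's IP on each network it's attached to (the container name is a placeholder):

docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}} {{end}}' some_container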

Here is an example of how to define a custom Docker network called "proxy_net" and statically set each container's IP.

networks:
  proxy_net:
    driver: bridge
    ipam:
      config:
        - subnet: 172.28.0.0/16

services:
  app1:
    image: nginx:latest
    container_name: app1
    networks:
      proxy_net:
        ipv4_address: 172.28.0.10
    ports:
      - "8080:80"

  whoami:
    image: containous/whoami:latest
    container_name: whoami
    networks:
      proxy_net:
        ipv4_address: 172.28.0.11

Notice how whoami is not exposed at all. The nginx container can now serve the whoami container with the proper config, pointing at 172.28.0.11.

[–] themachine@lemmy.world 9 points 1 day ago

Instead of an 8080:8080 port mapping, you do 127.0.0.1:8080:8080.

[–] jobbies@lemmy.zip 1 points 1 day ago (1 children)

That might do the trick. Would you mind giving an example?

[–] mhzawadi@lemmy.horwood.cloud 8 points 22 hours ago (1 children)

Sure. You can see below that port 53 is only bound to a secondary IP I have on my Docker host.

services:
  pihole01:
    image: pihole/pihole:latest
    container_name: pihole01
    ports:
      - "8180:80/tcp"
      - "9443:443/tcp"
      - "192.168.1.156:53:53/tcp" # this will only bind to that IP
      - "192.168.1.156:53:53/udp" # this will only bind to that IP
      - "192.168.1.156:67:67/udp" # this will only bind to that IP
    environment:
      TZ: 'Europe/London'
      FTLCONF_webserver_api_password: 'mysecurepassword'
      FTLCONF_dns_listeningMode: 'all'
    dns:
      - '127.0.0.1'
      - '192.168.1.1'
    restart: unless-stopped
    labels:
        - "traefik.http.routers.pihole_primary.rule=Host(`dns01.example.com`)"
        - "traefik.http.routers.pihole_primary.service=pihole_primary"
        - "traefik.http.services.pihole_primary.loadbalancer.server.port=80"
[–] jobbies@lemmy.zip 2 points 21 hours ago

Thanks, I'm embarrassed that I didn't know about this already 😅

[–] bjoern_tantau@swg-empire.de -1 points 1 day ago

Yeah, leaving unwanted ports open is a configuration problem. A firewall just gives you the opportunity to fuck up twice.

[–] bizdelnick@lemmy.ml 17 points 1 day ago

I've read the article you pointed to. What is written there and what you wrote here are absolutely different things. Docker does integrate with firewalld and creates a zone. Have you tried configuring filters for that zone? Ufw is just too dumb because it is suited for workstations that do not forward packets at all, so it cannot be integrated with docker by design.

[–] illusionist@lemmy.zip 19 points 1 day ago (1 children)

I use podman, which doesn't suffer from that problem.

[–] BlueBockser@programming.dev 9 points 1 day ago

+1 for Podman. I've found rootful Podman Quadlets to be a very nice alternative to Docker Compose, especially if you're using systemd anyway for timers, services, etc.
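
For anyone curious, a rough sketch of what a Quadlet looks like (the file name, image, and port are just examples; rootful units live under /etc/containers/systemd/):

# /etc/containers/systemd/whoami.container
[Unit]
Description=whoami demo container

[Container]
Image=docker.io/containous/whoami:latest
# publish on loopback only, same idea as the compose examples above
PublishPort=127.0.0.1:8080:80

[Service]
Restart=always

[Install]
WantedBy=multi-user.target

After a systemctl daemon-reload it shows up as a regular whoami.service you can start and stop like anything else.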

[–] davad@lemmy.world 10 points 1 day ago* (last edited 1 day ago) (2 children)

In an enterprise setting, you shouldn't trust the server firewall. You lock that down with your network equipment.

Edit: sorry, I failed to read the whole post 🤦‍♂️. I don't have a good answer for you. When I used docker in my homelab, I exposed services using labels and a traefik container similar to this: https://docs.docker.com/guides/traefik/#using-traefik-with-docker

That doesn't protect you from accidentally exposing ports, but it helps make it more obvious when it happens.

[–] jobbies@lemmy.zip 11 points 1 day ago

In an enterprise setting, you shouldn't trust the server firewall. You lock that down with your network equipment.

I thought someone might say this, but it doesn't seem very zero-trust?

Ideally you'd still want the host to be as secure as humanly possible?

Yes, but having both in place can help mitigate lateral movement risk.

[–] ryokimball@infosec.pub 5 points 1 day ago* (last edited 1 day ago) (1 children)

I use podman instead, though I'm honestly not certain it "fixes" the problem you described. I assume it does, purely based on the no-root point.

Agreeing with the other poster: network tooling, not relying on the server itself, is the professional fix.

[–] Overspark@piefed.social 8 points 1 day ago

Podman explicitly supports firewalls and does not bypass them like Docker does, whether you're running rootful or rootless. So IMHO that is the more professional solution.

[–] gerowen@piefed.social 3 points 1 day ago (2 children)

I just host everything on bare metal and use systemd to lock down/containerize things as necessary, even adding my own custom drop-ins for software that ships its own systemd service file. systemd is way more powerful than people often realize.

[–] atzanteol@sh.itjust.works 0 points 19 hours ago

Containers run "on bare metal" just as much as non-containerized applications.

[–] prettybunnys@piefed.social 1 points 1 day ago (2 children)

When you say you're using systemd to lock down/containerize things as necessary, can you explain what you mean?

[–] moonpiedumplings@programming.dev 7 points 1 day ago* (last edited 1 day ago)

I don't know what the commenter you replied to is talking about, but systemd has its own firewalling and sandboxing capabilities. They probably mean that they don't use Docker for deployment of services at all.

Here is a blogpost about systemd's firewall capabilities: https://www.ctrl.blog/entry/systemd-application-firewall.html

Here is a blogpost about systemd's sandboxing: https://www.redhat.com/en/blog/mastering-systemd

Here is the archwiki's docs about drop in units: https://wiki.archlinux.org/title/Systemd#Drop-in_files

I can understand why someone would like this, but this seems like a lot to learn and configure, whereas podman/docker deny most capabilities and network permissions by default.

[–] gerowen@piefed.social 2 points 23 hours ago

Systemd has all sorts of options. If a service has certain sandbox settings applied, such as a private /tmp, a private /proc, restricted access to certain folders or devices, restricted system calls, or whatever, then systemd creates a chroot under /proc/PID for that process with all your settings applied, and the process runs inside that chroot.

I've found it a little easier than managing a full blown container or VM, at least for the things I host for myself.

If a piece of software provides its own service file that isn't as restricted as you'd like, you can use systemctl edit to add additional options of your choosing to a "drop-in" file that gets loaded and applied at runtime, so you don't have to worry about a package update overwriting any changes you make.

And you can even get ideas for settings to apply to a service to increase security with:

systemd-analyze security SERVICENAME
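
As a concrete illustration, a drop-in created with systemctl edit for a hypothetical nginx.service might contain something like this (all of these are real systemd directives, but which ones make sense depends entirely on the service):

# /etc/systemd/system/nginx.service.d/override.conf
[Service]
# sandboxing knobs; tighten or loosen per service
PrivateTmp=true
ProtectSystem=strict
ProtectHome=true
NoNewPrivileges=true
ReadWritePaths=/var/log/nginx

Running systemd-analyze security again afterwards shows how the exposure rating changes.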

[–] HybridSarcasm@lemmy.world 3 points 1 day ago* (last edited 1 day ago)

I would vote for the firewalld integration.

[–] phoenixz@lemmy.ca 2 points 1 day ago

I've had similar issues using the CSF firewall. They just pushed out updates that apparently support Docker a little better, but I still have to fight with it to get it working. I don't know if that will fix it for you, but give it a try.

[–] dan@upvote.au 0 points 1 day ago

If you are good at manipulating iptables, there is a way around this

Modern systems shouldn't be using iptables any more.