Rebuilding containers is trivial if they supply the Dockerfile. Then the base image is up to date, and you can add any updates/patches for things like the recent React vuln.
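For anyone unfamiliar, a minimal sketch of that workflow (the repo URL and tag are placeholders, not a real project):

```shell
# Hypothetical rebuild: clone the project that ships a Dockerfile,
# then build it against a freshly pulled base image.
git clone https://github.com/example/app.git
cd app
docker build --pull --no-cache -t app:rebuilt .  # --pull re-fetches the base layer
```

`--pull` plus `--no-cache` is the key bit: without them, `docker build` will happily reuse a stale cached base.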
Selfhosted
A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don't control.
rn I'm only using Docker for the services I have behind a VPN, so I don't really put that much thought into securing them. If I had any publicly accessible ones, I would set up automatic patching or even build my own custom images.
And as always, I'm trying to up my security game, but not at any cost.
I didn't realise this was a problem.
I'm not too worried about it though.
Each container has such a small attack surface. As in, my reverse proxy Traefik exposes port 80 and port 443, and all the others only expose their APIs or web servers to Traefik.
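A compose sketch of that layout, assuming a Traefik-style setup (service names and image tags are illustrative):

```yaml
# Hypothetical layout: only the reverse proxy publishes host ports;
# every other service sits on an internal network that Traefik can reach.
services:
  traefik:
    image: traefik:v3
    ports:
      - "80:80"
      - "443:443"
    networks: [proxy]
  app:
    image: example/app   # placeholder service
    networks: [proxy]    # reachable by Traefik, but no host ports published
networks:
  proxy: {}
```

Since `app` has no `ports:` entry, its web server is only reachable from containers on the `proxy` network, not from the host or the internet directly.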
No. I only have a limited amount of time for maintaining my home infrastructure. I choose my battles.
No
Rebuild: no. If the software itself is unmaintained, it gets replaced.
Patch: yes. If the base image contains vulnerabilities that can be fixed with a package update, then that gets applied. The patch size and side effects can be minimized by using Copacetic, which can ingest Trivy scan results to identify vulnerabilities.
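A rough sketch of that Trivy-to-Copacetic flow (image name and flags are from memory of the quickstart docs, so treat them as an approximation):

```shell
# Scan the image for fixable OS-level CVEs and write a JSON report...
trivy image --vuln-type os --ignore-unfixed -f json -o report.json nginx:1.21.6
# ...then have copa apply just those package updates as a new layer.
copa patch -i nginx:1.21.6 -r report.json -t 1.21.6-patched
```

The result is a patched tag of the same image without rebuilding from the Dockerfile.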
There are also repos like Chainguard and Docker Hardened Images, which are handy for getting up-to-date images of commonly used tools.
I don’t think a year-old base is bad. Unless there’s an absolutely devastating CVE in something like the network stack or a particular shared library, any vulnerabilities in it will probably just be privilege escalations that wouldn’t have any effect unless you were allowing people shell access to the container. Obviously, the application itself can have a vulnerability, but that would be the case regardless of base image.
I've only done it once (so far) because I needed a specific add-on for the software.
In my case, I wanted to use the Caddy web server with a specific plugin. It was quite easy to create a new image exactly the way I wanted it.
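The official Caddy image documents a two-stage build for exactly this; a sketch (the plugin module path is just an example, not necessarily the one the commenter used):

```dockerfile
# Stage 1: compile Caddy with an extra plugin using xcaddy.
FROM caddy:builder AS builder
RUN xcaddy build --with github.com/caddy-dns/cloudflare

# Stage 2: copy the custom binary over the stock image, keeping its defaults.
FROM caddy:latest
COPY --from=builder /usr/bin/caddy /usr/bin/caddy
```

Because only the binary is swapped, the resulting image keeps the upstream config layout and entrypoint.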
If you care about security, you build it yourself. No need to trust some random dude on the internet. After all, it's just fire and forget: copy whatever "code" is used to build the container you're after, verify it once, and then just rebuild it periodically to pull patches from more reliable sources.
Docker security is a joke; no need to make it worse.
All the time. There are a lot of CVEs in old premade Docker containers.
I don't know enough about code to verify things myself. And I assume this applies to a lot of us here. So I just pray that nothing's fucked in the distribution chain.
I'm also in this category, but OP is talking about something else.
Like if you use container-x, which has an Alpine base. If it hasn't released a new version in several years, then you're using a several-year-old Alpine distro.
I didn't really realise this was a thing.
Ah, I have no idea what that is. I thought OP meant building stuff directly from Github (e.g. Ungoogled Chromium). Thanks for the clarification! :)
Containers have layers. So if you create an instance of a Syncthing container, whoever built that container would have started with some other container. Alpine Linux is a very popular base layer, just used as an example in this discussion.
When you download an image, all the layers underlying the application that you actually wanted will only be as fresh as the last time the maintainer built that image. So if there were a bug in the Alpine base, that might have been fixed in Alpine, but wouldn't be pushed through to whatever you downloaded.
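You can see this for yourself on any image you run; the image name below is just an example:

```shell
# Show each layer of the image and when it was created --
# old timestamps on the lower layers mean a stale base.
docker history --no-trunc syncthing/syncthing:latest
# Or read the OS release baked into the image directly:
docker run --rm --entrypoint cat syncthing/syncthing:latest /etc/os-release
```

Comparing the base layer's creation date against the base distro's latest release is a quick way to spot how far behind an image has drifted.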
Yes, because I mostly like to have my services built in a Debian container inside my Proxmox environment. If I'm running it in Docker, there's a good chance it's temporary/PoC, and in that case I do not rebuild or anything, I run it for whatever purpose it serves and then it either goes away or gets migrated to a handcrafted Debian container.
I have a repo for some home automation, where some hardware-specific modules are required. But it's becoming rarer, since more software handles it at runtime.
Too much work.

Not yet but I plan to. Just haven't gotten around to setting it all up yet.
I've never rebuilt a container, but I also don't have any containers that are deprecated status either. I swap off to alternatives when a project hits deprecation or abandonware status.
The only deprecated container I currently have is Filebrowser. I'm still seeking alternatives, and have been for a while now, but strangely enough there don't seem to be many web UI file management containers.
As such, ever since I learned that the project was ~~abandoned~~ on life support (the maintainer has said they are doing security patches only, and that while they are doing more on the project currently, that could change), the container remains off; I only activate it when I need to use it.
FileBrowser Quantum is quite the popular replacement, if you don’t need any of the things it hasn’t implemented yet, and especially if you enjoy all the new things it can do!