I’m moving to a new machine soon and want to re-evaluate some security practices while I’m doing it. My current server is Debian with all apps containerized in Docker running as root. I’d like to harden some things, especially Vaultwarden, but I’m concerned about transitioning to Podman while using complex Docker setups like nextcloud-aio. Do you have experience hardening your containers by switching? Is it worth it? How long is a piece of string?
I’m running podman and podman-compose with no problem. And I’m happy. At first I was confused by the uid and gid mapping the containers have, but you’ll get used to it.
These are some notes I took; please don’t take all of it as the right choice for you.
Podman-Stuff
https://github.com/containers/podman/blob/main/docs/tutorials/rootless_tutorial.md
storage.conf
To use the fuse-overlayfs driver, storage must be configured:
.config/containers/storage.conf
[storage]
driver = "overlay"
runroot = "/run/user/1000"
graphroot = "/home/<user>/.local/share/containers/storage"

[storage.options]
mount_program = "/usr/bin/fuse-overlayfs"
Lingering (running services without login / after logout)
https://github.com/containers/podman/issues/12001
https://unix.stackexchange.com/questions/462845/how-to-apply-lingering-immedeately#462867
sudo loginctl enable-linger <user>
Do you need to set lingering for all container users you set up? Does it restart all services in your compose files without issue?
Yes, all users that have containers which should keep running need lingering.
The services do not restart themselves. I have a cron job that executes
podman start --all
at reboot for my “podman user”.
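For reference, a minimal sketch of what that cron entry can look like (added with crontab -e as the podman user; the sleep is just an arbitrary safety margin so the network is up first):

@reboot sleep 30 && /usr/bin/podman start --all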
I use podman almost exclusively at this point. I like having the rootless containers and secrets management. If you’re on Debian, though, I strongly suggest pulling podman from Trixie. The version in Bookworm is very out of date and there have been a lot of fixes since then.
I started with rootless podman when I set up All My Things, and I have never had an issue with either maintaining or running it. Most Docker instructions are transposable, except that podman doesn’t assume everything lives at Docker Hub, so you always have to specify the registry host. I’ve run into a couple of edge cases where arguments are not 1:1 and I’ve had to dig to figure out what the equivalent argument is on podman. I don’t know if I’m actually more secure, but I feel more secure, and I really like not having the docker service running as root in the background. All in all, I think my experience with rootless podman has been better than my experience with docker, but at this point, I’ve had far more experience with podman.
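To illustrate the registry point (the image is just an example): where Docker resolves short names against Docker Hub, Podman generally wants the fully qualified name unless you’ve configured unqualified-search registries in registries.conf:

# Docker resolves the short name against Docker Hub
docker pull nginx:latest

# With Podman, spell out the registry host
podman pull docker.io/library/nginx:latest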
Podman-compose gives me indigestion, but docker-compose didn’t exist or wasn’t yet common back when I used docker; and by the time I was setting up a homelab, I’d already settled on podman. So I just don’t use it most of the time, and wire things up by hand when necessary. Again, I don’t know whether that’s just me, or if podman-compose is more flaky than docker-compose. Podman-compose is certainly much younger and less battle-tested. So is podman but, as I said, I’ve been happy with it.
I really like running containers as separate users without that daemon - I can’t even remember what about the daemon was causing me grief; I think it may have been the fact that it was always running and consuming resources, even when I wasn’t running a container, which isn’t a consideration for a homelab. However, I’d rather deeply know one tool than kind of know two that do the same thing, and since I run containers in several different situations, using podman everywhere allows me to exploit the intimacy I wouldn’t have if I were using docker in some places and podman in others.
2¢
I make extensive use of compose in my own server so I’m assuming I’ll need to transition to systemd confs. Do you run those or do you run everything by podman CLI?
Yeah, I use systemd for the self-host stuff, but you should be able to use docker-compose files with podman-compose with no, or only minor, changes. Theoretically. If you’re comfortable with compose, you may have more luck. I didn’t have a lot of experience with docker-compose, so when there are hiccups I tend to just give up and do it manually, because it works just fine that way too, and it’s easier (for me).
I switched and was very glad to do so. You increase your security and so far I haven’t seen any downside. Every container I’ve tried has worked without issues, even complex ones.
Was this with podman or rootless docker?
I also would like to switch to rootless. I have some experience with podman and, while I generally like it, it’s not 100% compatible with (rootful) docker, and it can have performance issues if you’re not careful, especially with certain file systems like btrfs. I wonder if rootless docker is now better than podman, or preferred for some other reason.
Rootless Podman :) It requires you to learn a little bit of new syntax, for example, the way you mount volumes and pass environment variables can be slightly different, but there’s nothing that hasn’t worked for me.
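As a hedged illustration of those small differences, using Vaultwarden from the original post (image tag, domain and paths are placeholders): on an SELinux host you typically append :Z to bind mounts, and --userns=keep-id can help the in-container UID match your files:

podman run -d --name vaultwarden \
  --userns=keep-id \
  -e DOMAIN=https://vault.example.com \
  -v "$HOME/vaultwarden/data:/data:Z" \
  -p 8080:80 \
  docker.io/vaultwarden/server:latest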
I’m using this on uBlue uCore, which I would also strongly recommend for security reasons.
Can you expand on why you chose uCore? I was considering CoreOS until just now, and the idea of setting up Ignition config serving seems overkill for running only one server at home.
Ignition is still required, the same way as with CoreOS.
Mainly for security. I was originally looking at CoreOS, but I liked the additional improvements by the uBlue team. Since I only want it to run containers, it is a huge security benefit to be immutable and designed specifically for that workflow.
The Ignition file is super easy to do, even for just one server (substitute docker for podman in the commands below, depending on which you have):
Take a copy of the uCore Butane file:
https://github.com/ublue-os/ucore/blob/main/examples/ucore-autorebase.butane
Update it with your SSH public key and a password hash by using this command:
# Get a password hash
podman run -ti --rm quay.io/coreos/mkpasswd --method=yescrypt
Then host the butane file in a temporary local webserver:
# Convert the Butane file to an Ignition file
podman run -i --rm quay.io/coreos/butane:release --pretty --strict < ucore-autorebase.butane > ignition.ign

# Serve the Ignition file using a temporary webserver
podman run -p 5080:80 -v "$PWD":/var/www/html php:7.2-apache
During uCore setup, type in the address of the hosted file, e.g.
http://your_ip_addr:5080/ignition.ign
That’s it - uCore configures everything else during setup.
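For orientation, the part of the Butane file you edit looks roughly like this (a sketch only; the real ucore-autorebase.butane contains more, and every value here is a placeholder):

variant: fcos
version: 1.5.0
passwd:
  users:
    - name: core
      password_hash: $y$j9T$EXAMPLEHASH   # paste the mkpasswd output here
      ssh_authorized_keys:
        - ssh-ed25519 AAAA...example you@laptop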
I’m very much biased towards Podman, but from what I understand rootless Docker is a bit of an afterthought, while Podman has been developed from the ground up with rootless in mind. That should be reason enough.
The very few things Docker can do that Podman struggles a bit with are stuff that usually involves mounting the Docker socket in the container or other stupid things. Since you care about security, you wouldn’t do that anyway. Not to mention there’s also rootful Podman, when you need that level of access.
I’d recommend an RPM-based distro with Podman; the few times I’ve tried Podman on a deb-based distro, there’s always been something wonky. It’s been a while, though.
Podman actually runs fine on Debian 12, though the packaged version is a bit old. It does not support the podman compose command, though podman-compose works.
podman-compose is packaged in a separate podman-compose package in Debian 12 (I did not try it, though). The only thing missing (for me) in Debian 12 is Quadlet support (requires Podman 4.4+, Debian 12 has 4.3).
No, it’s on 4.x I think.
You are right. Quadlets require 4.4, Debian 12 has 4.3
Thanks. Last time I tried it was just after bookworm released, and on ARM, so it has probably got better
Does Podman work well when you have multiple rootless containers that you want to communicate securely in a least-privilege configuration (each container only has access to what it needs)? That is the one thing I couldn’t figure out how to do well with Podman.
Do you mean networking between them? There are two ways of networking between containers. One of them is to create a custom network for a set of containers that you want to connect to each other. Then you can access other containers in that network using their name and port number, like so
container_name:1234
Note: DNS is disabled in the default network, so you can’t access other containers by their name if you’re using it. You need to create a new network for that to work.
Another way is to group them together with a pod. Then you can access other services in that same pod using localhost like so
localhost:1234
Personally, in my current setup I’m using both pods and separate networks for each of them. The reason is that I use traefik and I don’t want all of my containers in a single network along with traefik. So I just made a separate network for each of my pods and give traefik access to that network. As an example, here’s my komga setup:
I have komga and komf running in a single pod with a network called komga assigned to the pod. So now I can communicate between komga and komf using localhost. I also added traefik to the komga network so that I can reverse proxy my komga instance.
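In case it helps, a rough CLI sketch of that shape of setup (names, images and ports are illustrative, not my exact config):

# dedicated network with DNS between its members
podman network create komga-net

# pod joined to that network; the web port is published on the pod
podman pod create --name komga --network komga-net -p 25600:25600

# containers inside the pod reach each other via localhost
podman run -d --pod komga --name komga-app -v komga-config:/config docker.io/gotson/komga:latest

# attach the (already existing) traefik container to the same network
podman network connect komga-net traefik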
Yes, you can easily do that. Set the container names and put them on the same network. I’ve used Caddy and a whole bunch of self-hostable services with it, and I reverse proxy to
container_name:port
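For anyone curious what that looks like on the Caddy side, it’s just a reverse_proxy directive pointing at the container name and internal port (hostname and port are placeholders):

vault.example.com {
    reverse_proxy vaultwarden:80
}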
I’m thinking about an immutable OS with podman support first and foremost. Would you recommend Fedora CoreOS?
It’s a really solid combo, but if you’re not familiar with CoreOS I wouldn’t change both at once. Meaning: migrate the services to Podman first, then switch the OS. I’ve been meaning to switch from Alma 9 to CoreOS for a long time, but haven’t found the time.
I noticed you run Nextcloud AIO; just so you know, that’s one of those “mount the docker socket” monstrosities. I’d look into switching to the community NC image and separate containers managed yourself. AIO is easy, but if someone gets a shell in the NC container, it’s basically giving them root on your host.
Either way, you’re going to have trouble running AIO with Podman.
Hey bigdickdonkey, I recently tried and wasn’t able to shit my way through podman, there just wasn’t enough chatter and guides about it. I plan to revisit it when Debian 13 comes out, which will include podman quadlets. I also tried to get podman quadlets to work on Ubuntu 24 and got closer, but still didn’t manage and Ubuntu is squicky.
I read about true user rootless Docker and decided that was too finicky to keep up to date. It needs some annoying stuff to update, from what I could tell. I was planning on many users having their own containers, and that would have gotten annoying to manage. Maybe a single user would be an OK burden.
The podman people make a good argument for running podman as root and using userns to divvy out UIDs to achieve rootless https://www.redhat.com/en/blog/rootless-podman-user-namespace-modes but since podman is on the back burner till there’s more community and Debian 13, I applied that idea to Docker.
So I went with rootful Docker with the following goals (sketched in compose form below):
- read only
- set user to different UID:GID for each container
- silo containers in individual Docker networks
- nothing gets /var/run/docker.sock
- cap_drop: all
- security-opt=no-new-privileges
- volumes all get tagged with :rw,noexec,nosuid,nodev,Z
Basically it’s the security best practices from this list https://cheatsheetseries.owasp.org/cheatsheets/Docker_Security_Cheat_Sheet.html
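A compose-level sketch of what those goals look like for one service (service name, UID:GID, image and paths are made up; the mount flags are the ones from the list above):

services:
  app:
    image: registry.example.com/app:latest   # placeholder image
    read_only: true
    user: "3001:3001"
    cap_drop:
      - ALL
    security_opt:
      - no-new-privileges:true
    tmpfs:
      - /tmp
    volumes:
      - ./data:/var/lib/app:rw,noexec,nosuid,nodev,Z
    networks:
      - app-net          # one isolated network per stack

networks:
  app-net:
    driver: bridge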
This still has the risk of the Docker daemon being hacked from the container itself somehow, which podman eliminates, but it’s as close to the podman ideal as I can get with my current knowledge.
Most things will run as rootless + read-only + cap_drop with minor messing. Automatic Ripping Machine would not, but that project is a wild ride of required permissions. Everything else has succumbed, but I’ve sometimes needed a “pre-launch container” to do permission changes or make somewhere like /opt writable.
I would transition one app stack at a time to the best security practices, and it’s easier since you don’t need to change container managers. Hope this helps!
Quadlets are a great idea, but I found them very annoying in real life and ended up switching back to docker.
Sad to hear for my quadlet future, do you remember what things were specifically annoying?
Podman, not because of security but because of quadlets (systemd integration). They make setting up and managing container services a breeze.
Yeah quadlets are pretty cool. I have them organized into folders for each pod.
podman auto-update is another pretty nice feature. I don’t use the systemd timer for auto-update. Instead I just do podman auto-update --dry-run to check for updates, update my quadlet files and configs if any changes are required, and then run the updates with podman auto-update.
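Worth noting for context: podman auto-update only considers containers that carry the autoupdate label, which a quadlet can set directly (excerpt only; unit name and image are examples):

# ~/.config/containers/systemd/komga.container (excerpt)
[Container]
Image=docker.io/gotson/komga:latest
AutoUpdate=registry
# on older setups, the label io.containers.autoupdate=registry does the same thing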
One of the main reasons I switched to podman was its compatibility with firewalld. I haven’t used rootless docker, but podman and podman-compose get the job done for me.
Any reason why you use compose and not quadlets?
Do you run anything like fail2ban with that compatibility?
Not sure if it makes a difference, and it’s not quite your question, but I’ve just switched away from nextcloud-aio to my own docker compose, so I have better control and know more about what’s going on. I always found AIO a bit funny, and when installing on a new VPS I decided to try it. It was surprisingly straightforward and I’ve been able to install everything I need.
Let me know if my docker compose would help. I still need to add the backup solution but it’s going to be straightforward as well.
I would love to see your compose file. I already have to run special steps on my nextcloud-aio to use it with a reverse proxy so I’m interested in moving away from it.
Hopefully this works and you can see the compose file. I’ve put a few things in [square brackets] to hide some stuff, probably overly cautiously. I have an external network linked to NPM, and in that I use nextcloud-server for the IP address and 80 for the port (it’s the inside container port, not 8080 on the system - that took me a long time to figure out!). Add a .env file with everything referenced in the compose file, then (hopefully!) away you go.
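If it helps anyone picture it, the relevant bits of such a compose file look roughly like this (service and network names are placeholders for mine; NPM then proxies to nextcloud-server on port 80):

services:
  nextcloud-server:
    image: nextcloud:latest
    networks:
      - npm-net

networks:
  npm-net:
    external: true   # the network Nginx Proxy Manager is already attached to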
Thanks for sharing this! It also took me a while to understand the difference between the EXPOSE Dockerfile instruction and the --publish CLI flag.
Podman
- rootless by default
- daemonless
- integration with systemd, made even easier by podman-generate-systemd
- no third-party APT repository required, follows the same lifecycle as my LTS (Debian) distro
- the podman and docker command lines are 100% compatible for my use cases
podman-generate-systemd is outdated. The currently supported way to run podman containers as systemd services is Quadlet files: https://docs.podman.io/en/latest/markdown/podman-systemd.unit.5.html
Edit: I just saw that you use debian so idk if Quadlets are a thing with the podman version on debian.
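For anyone who hasn’t seen one, a minimal .container quadlet looks something like this (names and image are illustrative; as noted above, Debian needs Podman 4.4+ for this):

# ~/.config/containers/systemd/vaultwarden.container
[Unit]
Description=Vaultwarden

[Container]
Image=docker.io/vaultwarden/server:latest
PublishPort=8080:80
Volume=vw-data:/data

[Service]
Restart=always

[Install]
WantedBy=default.target

After a systemctl --user daemon-reload, it shows up as a regular user service you can start with systemctl --user start vaultwarden.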
I haven’t switched products, but I did go through a process of hardening my containers to a degree. I found that the hardening is limited by the authors of the software and whether they have built their apps with security in mind.
I have always used docker-compose; I found that made it easier to see what needed to be tweaked.
Some helpful links
https://docs.docker.com/docker-hub/vulnerability-scanning/
https://cheatsheetseries.owasp.org/cheatsheets/Docker_Security_Cheat_Sheet.html
Can’t help with much of this but I think I can resolve the last question for you, since I don’t see anyone else trying.
How long is a piece of string?
Exactly twice as long as half of its length.
As long as you’re on Linux, podman is superior and will do all of the things you’re asking about. If you need to also support Windows or Mac, Docker is the only thing that will work (although people have been telling me for a couple of years now that Rancher isn’t bad).
podman works on windows hosts, as long as you don’t need windows containers
And as long as you don’t need simple access to most features such as volumes. The podman implementation on not Linux leaves quite a bit to be desired for anyone trying to do more than just run a binary wrapped in a container. I’m not throwing shade because it’s FOSS and anything is better than Docker. Only Docker will work for a production-capable dev environment on not Linux unless podman’s development has exponentially increased in the last year since I tried to move a shop to podman on not Linux.
I’m actually in the process of switching over from docker to podman. It’s definitely a learning curve, e.g. setting up systemd integration, etc.
I do love it, but it’s been a bit of a pain. For instance, I’m still trying to figure out the errors I’m getting when trying to deploy my matrix and vikunja containers: I’m getting permission errors and “can’t find DB” errors. I know little about podman right now, but I would definitely recommend it since it is open source and runs rootless by default.
I switched from Dockerd to K3s. First, you get the benefits of the Kubernetes API, but also Pod Security Context, Pod Security Admission and Network Policies, which help reduce the attack surface while simplifying your setup. But if you do want to use Podman, look into running your containers read-only, dropping all capabilities, and staying unprivileged.
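For comparison, the Kubernetes-side equivalent of those restrictions is a securityContext in the pod spec (a generic sketch, not tied to any particular app):

apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: app
      image: registry.example.com/app:latest   # placeholder image
      securityContext:
        runAsNonRoot: true
        runAsUser: 3001
        readOnlyRootFilesystem: true
        allowPrivilegeEscalation: false
        capabilities:
          drop: ["ALL"]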