Installing Podman on Alpine Linux
This is the first post in the High Altitude Water Aerosols series, with the aim of getting “cloud native” to work for internal (web-)services.
To be able to run any sort of container, one needs a container runtime.
Since the docker daemon is bloatware and Firecracker needs a rather big upfront
investment to be useful, I’m going with Podman: it claims a 1:1 CLI mapping
to Docker, and podman-compose provides the ~equivalent of docker-compose.
How to get it running on Alpine with ZFS, then?
Background / requirements
During my initial recon I read Heiner’s Setting up Alpine Linux with Podman.
While it’s interesting that Podman can run rootless, I’m not really fond of the workarounds needed.
So I’ll be going with running “rootful” and managing “security” by forcing the individual containers to run as non-root, whenever possible.
This is IMO a reasonable tradeoff, since running standard services “rootless” means jumping through extra hoops, binding privileged ports being just one of them.
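The “non-root inside rootful” part is mostly a matter of passing `--user` (or baking `USER` into the image). A minimal sketch, where the wrapper name and the 1000:1000 uid:gid are arbitrary choices of mine:

```shell
#!/bin/sh
# Hypothetical helper: run an image via rootful podman, but drop the
# in-container process to an unprivileged uid:gid (1000:1000 is arbitrary).
run_as_nonroot() {
    image="$1"; shift
    podman run --rm --user 1000:1000 "$image" "$@"
}

# Only invoke when podman is actually present.
if command -v podman >/dev/null 2>&1; then
    run_as_nonroot docker.io/library/alpine:3.18 id
fi
```

Images that already declare a non-root `USER` don’t need the flag, but it doesn’t hurt as a belt-and-suspenders default.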
It truly wasn’t that hard:
```shell
#!/bin/bash

# Sys upgrade
cat > /etc/apk/repositories <<'EOF'
http://dl-cdn.alpinelinux.org/alpine/v3.18/main
http://dl-cdn.alpinelinux.org/alpine/v3.18/community
#http://dl-cdn.alpinelinux.org/alpine/edge/main
#http://dl-cdn.alpinelinux.org/alpine/edge/community
http://dl-cdn.alpinelinux.org/alpine/edge/testing # Testing repo is for podman-compose, 2023-09-11
EOF
apk update
apk upgrade

# Podman and podman-compose install (rootful):
apk add podman podman-compose

# Tweak the cgroups, thanks Heiner: https://virtualzone.de/posts/alpine-podman/
if grep -q ^rc_cgroup_mode /etc/rc.conf; then
    echo "Warning: rc_cgroup_mode already set in /etc/rc.conf:"
    grep ^rc_cgroup_mode /etc/rc.conf
    echo "... forcing to 'unified'"
fi
sed -i 's/^#*rc_cgroup_mode=.*/rc_cgroup_mode="unified"/' /etc/rc.conf
rc-update add cgroups && rc-service cgroups start

# Create place for container perma volumes (data) to live in,
# under the /mnt/ssdtank/containers ghetto
zfs create ssdtank/containers

# Create place for container images
zfs create -o mountpoint=legacy ssdtank/containers/.images

# Set storage config to zfs & configure
sed -i 's/^driver = "overlay"/driver = "zfs"/' /etc/containers/storage.conf
cat >>/etc/containers/storage.conf <<'EOF'
# ZFS config FTW -- see note below
[storage.options.zfs]
fsname = "ssdtank/containers/.images"
mountopt = "nodev"
EOF

# Add podman to startup (launches all `restart-policy=always` on boot)
rc-update add podman && rc-service podman start
```
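It’s worth checking that Podman actually picked up the zfs driver after this; `podman info` exposes it (the tiny wrapper function is just for illustration):

```shell
#!/bin/sh
# Print the graph driver Podman is using; with the storage.conf change above
# this should report "zfs" rather than the default "overlay".
storage_driver() {
    podman info --format '{{.Store.GraphDriverName}}'
}

if command -v podman >/dev/null 2>&1; then
    storage_driver
fi
```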
Note that the
zfs storage driver would normally run wild creating sub-filesystems
under the root filesystem, so you would end up with a plethora of
nvmetank/ROOT/alpine/<uuid> volumes. Hence the
fsname setting above, which pushes them under
ssdtank/containers/.images/<uuid>, and the nodev mount option, which disables
devices on them, for good measure.
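To double-check the containment, pull any image and list the datasets; with the config above, the clones should show up as children of the hidden dataset (dataset names are from my setup, the `<uuid>` children are whatever Podman generates):

```shell
#!/bin/sh
# List all datasets under the hidden image dataset; image layers and
# container filesystems should appear here, not under the root pool.
list_image_datasets() {
    zfs list -r -o name ssdtank/containers/.images
}

if command -v zfs >/dev/null 2>&1; then
    list_image_datasets
fi
```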
And with that, I guess I can proclaim success:
```
# podman run --rm hello-world
Resolving "hello-world" using unqualified-search registries (/etc/containers/registries.conf)
Trying to pull docker.io/library/hello-world:latest...
Getting image source signatures
Copying blob 719385e32844 done
Copying config 9c7a54a9a4 done
Writing manifest to image destination
Storing signatures

Hello from Docker!
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:
 1. The Docker client contacted the Docker daemon.
 2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
    (amd64)
 3. The Docker daemon created a new container from that image which runs the
    executable that produces the output you are currently reading.
 4. The Docker daemon streamed that output to the Docker client, which sent it
    to your terminal.

To try something more ambitious, you can run an Ubuntu container with:
 $ docker run -it ubuntu bash

Share images, automate workflows, and more with a free Docker ID:
 https://hub.docker.com/

For more examples and ideas, visit:
 https://docs.docker.com/get-started/
```
One wart, though: podman healthcheck run <container> will run the configured
health check, and can even act on a failure. But the health checks
aren’t being scheduled at all (Podman normally leans on transient systemd
timers for that, and OpenRC has no equivalent), which is the most important part.
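Since OpenRC won’t schedule the checks, a cron-driven stopgap could fill the hole; Alpine’s busybox crond runs everything in /etc/periodic/15min every 15 minutes. The script path and the “log and move on” failure handling below are my assumptions, not anything Podman prescribes:

```shell
#!/bin/sh
# Hypothetical /etc/periodic/15min/podman-healthchecks:
# run the configured healthcheck for every container that defines one.
run_healthchecks() {
    # Containers reporting any health status (starting/healthy/unhealthy)
    # are the ones with a check configured.
    podman ps --format '{{.Names}}' \
        --filter health=starting --filter health=healthy --filter health=unhealthy \
    | while read -r name; do
        podman healthcheck run "$name" >/dev/null \
            || echo "healthcheck failed: $name" >&2
    done
}

if command -v podman >/dev/null 2>&1; then
    run_healthchecks
fi
```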
So, I might need to farmer style this at some point… or switch to some other engine. For now I’m calling this good enough, despite the wart.
Next up: reverse http proxy.
Typically, ports below 1024 are only bindable by root, unless something like
sysctl -w net.ipv4.ip_unprivileged_port_start=80 is used. ↩
I mean, look at the linked article. And while I’m not above running services via daemontools with runfiles like:
test -x /home/wejn/firefly/run.sh && exec setuidgid wejn /home/wejn/firefly/run.sh
it doesn’t strike me as particularly elegant either. ↩