Installing Podman on Alpine Linux
Introduction
This is the first post in the High Altitude Water Aerosols series, with the aim of getting “cloud native” to work for internal (web-)services.
Problem statement
To be able to run any sort of container, one needs a container runtime.
The way I understand it, there’s a plethora to choose from, with Docker as the OG of containers, and Firecracker[1] as probably the best in terms of “container” isolation.
Since the Docker daemon is bloatware and Firecracker needs a rather big upfront investment to be useful, I’m going with Podman: it claims a 1:1 mapping to Docker, and there’s also podman-compose, the ~equivalent of docker-compose.
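For illustration, a hypothetical compose file (the nginx service and port mapping are made up, not part of this setup) that podman-compose should bring up the same way docker-compose would:

# docker-compose.yml -- hypothetical example
version: "3"
services:
  web:
    image: docker.io/library/nginx:alpine
    ports:
      - "8080:80"
    restart: always

# bring it up / tear it down, mirroring docker-compose
podman-compose up -d
podman-compose down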
How to get it running on Alpine with ZFS, then?
Background / requirements
During my initial recon I read Heiner’s Setting up Alpine Linux with Podman.
While it’s interesting that Podman can run rootless, I’m not really fond of the workarounds needed.
So I’ll be going with running “rootful” and managing “security” by forcing the individual containers to run as non-root, whenever possible.
This is IMO a reasonable tradeoff, since running standard services “rootless” means working around unprivileged port binding[2] and the other contortions from the linked article[3].
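A minimal sketch of what I mean by “non-root inside the container” (the UID/GID here is arbitrary):

# rootful podman, but the process inside the container runs as non-root
podman run --rm --user 1000:1000 docker.io/library/alpine id
# expected to report uid=1000 / gid=1000 rather than root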
Solution
It truly wasn’t that hard:
#!/bin/bash
# Sys upgrade
cat > /etc/apk/repositories <<'EOF'
http://dl-cdn.alpinelinux.org/alpine/v3.18/main
http://dl-cdn.alpinelinux.org/alpine/v3.18/community
#http://dl-cdn.alpinelinux.org/alpine/edge/main
#http://dl-cdn.alpinelinux.org/alpine/edge/community
http://dl-cdn.alpinelinux.org/alpine/edge/testing
# Testing repo is for podman-compose, 2023-09-11
EOF
apk update
apk upgrade
# Podman and podman-compose install (rootful):
apk add podman podman-compose
# Tweak the cgroups, thanks Heiner: https://virtualzone.de/posts/alpine-podman/
if grep -q ^rc_cgroup_mode /etc/rc.conf; then
    echo "Warning: rc_cgroup_mode already set in /etc/rc.conf:"
    grep ^rc_cgroup_mode /etc/rc.conf
    echo "... forcing to 'unified'"
fi
sed -i 's/^#*rc_cgroup_mode=.*/rc_cgroup_mode="unified"/' /etc/rc.conf
rc-update add cgroups && rc-service cgroups start
# Create place for container perma volumes (data) to live in,
# under the /mnt/ssdtank/containers ghetto
zfs create ssdtank/containers
# Create place for container images
zfs create -o mountpoint=legacy ssdtank/containers/.images
# Set storage config to zfs & configure
sed -i 's/^driver = "overlay"/driver = "zfs"/' /etc/containers/storage.conf
cat >>/etc/containers/storage.conf <<'EOF'
# ZFS config FTW -- see note below
[storage.options.zfs]
fsname = "ssdtank/containers/.images"
mountopt = "nodev"
EOF
# Add podman to startup (launches all `restart-policy=always` on boot)
rc-update add podman && rc-service podman start
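A quick sanity check that Podman actually picked up the zfs driver, assuming I’m reading the podman info field names right:

podman info --format '{{.Store.GraphDriverName}}'
# expected to print: zfs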
Note that the zfs storage driver would normally run wild with sub-filesystems under the root filesystem, so you would end up with a plethora of nvmetank/ROOT/alpine/<uuid> volumes. Hence the [storage.options.zfs] config above, which pushes them under ssdtank/containers/.images/<uuid> and disables devices on them, for good measure.
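And to double-check where the image datasets actually land, something like this should do (dataset names per the setup above):

# image layers should show up as datasets under ssdtank/containers/.images
zfs list -r -o name,mountpoint ssdtank/containers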
And with that, I guess I can proclaim success:
# podman run --rm hello-world
Resolving "hello-world" using unqualified-search registries (/etc/containers/registries.conf)
Trying to pull docker.io/library/hello-world:latest...
Getting image source signatures
Copying blob 719385e32844 done
Copying config 9c7a54a9a4 done
Writing manifest to image destination
Storing signatures
Hello from Docker!
This message shows that your installation appears to be working correctly.
To generate this message, Docker took the following steps:
1. The Docker client contacted the Docker daemon.
2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
(amd64)
3. The Docker daemon created a new container from that image which runs the
executable that produces the output you are currently reading.
4. The Docker daemon streamed that output to the Docker client, which sent it
to your terminal.
To try something more ambitious, you can run an Ubuntu container with:
$ docker run -it ubuntu bash
Share images, automate workflows, and more with a free Docker ID:
https://hub.docker.com/
For more examples and ideas, visit:
https://docs.docker.com/get-started/
First problems
Having played with Podman for a while on Alpine, I’m sad to report that the Docker 1:1 support is less than amazing. For example, healthchecks aren’t supported without systemd[4].
I mean, podman healthcheck run <container> will run the configured health check, and maybe even act on the failure. But the health checks aren’t being scheduled, which is the most important part.
So, I might need to farmer style this at some point… or switch to some other engine. For now I’m calling this good enough, despite the wart.
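If I do end up farmer-styling it, it would probably be a periodic script that pokes every running container; a rough, untested sketch (Alpine’s busybox crond runs /etc/periodic/15min/* out of the box):

#!/bin/sh
# /etc/periodic/15min/podman-healthchecks -- rough sketch, untested
# Runs the configured healthcheck for each running container; containers
# without a healthcheck make `podman healthcheck run` fail, which is lumped
# in with real failures here.
for c in $(podman ps --format '{{.Names}}'); do
    podman healthcheck run "$c" >/dev/null 2>&1 \
        || logger -t podman-healthcheck "healthcheck failed (or undefined) for $c"
done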
Closing words
Next up: reverse http proxy.
1. That’s the “micro vm” engine by Amazon that the cool cats at Fly.io use. ↩
2. Typically, ports ≤ 1024 are only bindable by root, unless something like sysctl -w net.ipv4.ip_unprivileged_port_start=80 is used. ↩
3. I mean, look at the linked article. And while I’m not above running services via daemontools with runfiles like:
   test -x /home/wejn/firefly/run.sh && exec setuidgid wejn /home/wejn/firefly/run.sh
   it doesn’t strike me as particularly elegant either. ↩
4. If systemd is [any part of] the solution, I want my problem back. ↩