Goals
The goal is to get Podman up and running on Devuan (Debian minus systemd) and in a spot to have a fully automated letsencrypt nginx frontend proxy in front of any number of SSL hosts behind a single SNI proxy, with something similar to the setup described at:
https://hub.docker.com/r/jwilder/nginx-proxy
We are using podman, instead of docker, because there are free mirrors that don't really have pull rate limits, and you won't ever get a sales call from Docker about how Gosh All This Traffic Sure Should Cost a Lot of Money (when really, it's a manifest file you could cache for your DevOps and not spend the $5k on the Docker fees… anyway… I digress).
Requirements
- Devuan Chimaera (older is a no-go).
- Ports 80 and 443 forwarded to your host from your public IP if you intend to allow https traffic to the container(s)
- python3-pip
- Update /etc/apt/sources.list to include contrib and non-free
my sources.list if you need it:

deb http://mirrors.dotsrc.org/merged chimaera main contrib non-free
deb-src http://mirrors.dotsrc.org/merged chimaera main
deb http://mirrors.dotsrc.org/merged chimaera-security main contrib non-free
deb-src http://mirrors.dotsrc.org/merged chimaera-security main
# chimaera-updates, previously known as 'volatile'
deb http://mirrors.dotsrc.org/merged chimaera-updates main contrib non-free
deb-src http://mirrors.dotsrc.org/merged chimaera-updates main
Apt Requirements
# Podman itself
apt install podman
# Possibly needed later for some podman helpers that exist on PyPI,
# esp podman-compose which is super handy.
apt install python3-pip
# Podman debian package expects systemd. But there is no systemd.
# This package is meant to shore that up.
apt install debian-podman-config-override
Podman is stateless… but not actually
So podman can run without a unix socket and can run statelessly. This is not good for our use case, though, because we need one container to be able to inspect other containers, and the only way to do that is through the unix socket.
The problem is that none of the packages shipped with Devuan appropriately re-create the systemd .sock and service units as an init script, so you can never run service --status-all and actually see a podman service.
Not to worry, here is how that is remedied:
Create /etc/init.d/podman
File Contents
#! /bin/sh
### BEGIN INIT INFO
# Provides: podman
# Required-Start: $remote_fs $syslog
# Required-Stop: $remote_fs $syslog
# Default-Start: 2 3 4 5
# Default-Stop:
# Short-Description: podman container services
### END INIT INFO
LOGGING="--log-level=info"
export PATH="${PATH:+$PATH:}/usr/sbin:/sbin"
. /lib/init/vars.sh
. /lib/lsb/init-functions
do_start() {
	# --time=0 disables the idle timeout; start-stop-daemon backgrounds the
	# process so "service podman start" returns instead of blocking.
	start-stop-daemon --start --background --quiet \
		--exec /usr/bin/podman -- \
		$LOGGING system service --time=0 unix:/run/podman/podman.sock
}
case "$1" in
start)
do_start
;;
restart|reload|force-reload)
echo "Error: argument '$1' not supported" >&2
exit 3
;;
stop)
# No-op
;;
status)
	# Consider the service running if the API socket exists
	if [ -S /run/podman/podman.sock ]; then exit 0; else exit 3; fi
	;;
*)
echo "Usage: /etc/init.d/podman {start|stop|status}" >&2
exit 3
;;
esac
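The script also needs to be executable and registered with the init system before service podman start will do anything. These are standard sysvinit steps, nothing podman-specific:

```shell
# Make the init script executable and register it for the default runlevels.
chmod +x /etc/init.d/podman
update-rc.d podman defaults
# Start it now rather than waiting for a reboot.
service podman start
```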
Verify that service podman status works correctly and that you see /run/podman/podman.sock. If not, skip down to troubleshooting or post below for help.
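One quick way to check that the socket is actually answering, and not just present, is to hit the _ping endpoint of the Docker-compatible API that podman system service exposes (this assumes curl is installed):

```shell
# If the socket exists, ask the API for a liveness ping; otherwise complain.
if [ -S /run/podman/podman.sock ]; then
    curl -s --unix-socket /run/podman/podman.sock http://d/_ping; echo
else
    echo "podman.sock missing - is the podman service started?"
fi
```

A healthy service answers the ping with OK.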
Networking Requirements
At a high level what we are going to do is:
1) Create a container based on nginx that has all the SSL certs we need to handle inbound traffic to our containers. Nginx acts as a proxy here, and handles multiple SSL hosts on the same IP using SNI.
2) Create a second container that watches for, and then requests, SSL certificates based on the environment variables in the containers we create later.
We will define the LETSENCRYPT_HOST and VIRTUAL_HOST environment variables, and this setup will use them to request new SSL certs from letsencrypt should those certs not already be part of the system. It handles renewals transparently in much the same way. Shut down and remove an old container? The SSL cert will evaporate, eventually.
These containers ultimately are just helpers to handle the SSL end of things for you. You don't do anything with them other than let them run and proxy your content. In this walkthrough we will be setting up a simple blog, which has an Apache and MySQL component to it as well.
Nuts and Bolts of that Setup
Create the reverse-proxy network. Containers attaching to this network will have their environment variables read by the nginx proxy/letsencrypt helper to request and apply the SSL cert.
podman network create --driver bridge reverse-proxy
Create the nginx + letsencrypt Containers
Make a directory such as nginx-letsencrypt, with a certs dir inside to store the certs. This will be shared between the letsencrypt script container and the nginx proxy container.
mkdir nginx-letsencrypt
mkdir nginx-letsencrypt/certs
Here are the podman commands I use to bring up the nginx proxy companion containers mentioned in the intro of this guide. Run them from inside the nginx-letsencrypt directory. Start with the actual nginx proxy; it has to exist first, because the letsencrypt container borrows its volumes via --volumes-from:
sudo podman run -d -p 80:80 -p 443:443 \
--name nginx-proxy \
--net reverse-proxy \
-v "$(pwd)/certs":/etc/nginx/certs:ro \
-v /etc/nginx/vhost.d \
-v /usr/share/nginx/html \
-v /var/run/podman/podman.sock:/tmp/docker.sock:ro \
--label com.github.jrcs.letsencrypt_nginx_proxy_companion.nginx_proxy=true \
jwilder/nginx-proxy
and then this is the letsencrypt renewal bot:
sudo podman run -d \
--name nginx-letsencrypt \
--net reverse-proxy \
--volumes-from nginx-proxy \
-v "$(pwd)/certs":/etc/nginx/certs:rw \
-v /var/run/podman/podman.sock:/var/run/docker.sock:ro \
jrcs/letsencrypt-nginx-proxy-companion
Inbound traffic hits the nginx proxy, which routes it to the appropriate container inside the reverse-proxy network.
At this point it may be worthwhile to do a few checks to see if everything is working properly.
podman ps -a # shows a list of all the containers that have been built
podman start (name or hash id) # start a container
podman inspect (name or hash id) |grep IPAddress # see the internal IP assigned to a container
podman logs (name or hash id) # see the logs of a container
It should be possible to load https://localip and http://localip, where localip is your local IP address.
You can use this to verify your firewall is working too – your public IP should have ports 443 and 80 forwarded to the machine running the proxy. podman ps should show that the proxy container is running and that ports 80/443 have been mapped to that specific container.
# podman ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2900c123e054 docker.io/jwilder/nginx-proxy:latest forego start -r 35 minutes ago Up 35 minutes ago 0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp nginx-proxy
4e9a8daa6e41 docker.io/jrcs/letsencrypt-nginx-proxy-companion:latest /bin/bash /app/st... 34 minutes ago Up 34 minutes ago nginx-letsencrypt
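A curl from the host is another quick check that the proxy is listening; until a backend container with a matching VIRTUAL_HOST is up, an error status (typically a 503 from nginx) is the expected answer:

```shell
# Print only the HTTP status code; -k skips cert validation since the
# default cert is self-signed until letsencrypt has issued a real one.
curl -sk -o /dev/null -w '%{http_code}\n' https://127.0.0.1/ || echo "proxy not reachable"
```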
Now The Actual Work
It goes without saying that a DNS entry such as blog.foo.dev should be pointing to the public IP where this host lives. Without good working DNS, letsencrypt is not going to give you a cert.
I recommend creating a new directory such as blog to put the configuration in. It is possible to use podman-compose, a (mostly) drop-in replacement for docker-compose, but here are the command line equivalents:
podman create --name=blog_db_1 --network reverse-proxy \
-l io.podman.compose.config-hash=123 \
-l io.podman.compose.project=blog \
-l io.podman.compose.version=0.0.1 \
-l com.docker.compose.container-number=1 \
-l com.docker.compose.service=db \
-e MYSQL_ROOT_PASSWORD=yourpassword \
-e MYSQL_DATABASE=wordpress \
-e MYSQL_USER=wordpress \
-e MYSQL_PASSWORD=otherpassword \
--mount type=bind,source="$(pwd)/db_data",destination=/var/lib/mysql mysql:5.7
and
podman create --name=blog_wordpress_1 --network reverse-proxy \
-l io.podman.compose.config-hash=123 \
-l io.podman.compose.project=blog \
-l io.podman.compose.version=0.0.1 \
-l com.docker.compose.container-number=1 \
-l com.docker.compose.service=wordpress \
-e WORDPRESS_DB_HOST=db:3306 \
-e WORDPRESS_DB_USER=wordpress \
-e WORDPRESS_DB_PASSWORD=sameasabove \
-e WORDPRESS_DB_NAME=wordpress \
-e WEB_DOCUMENT_ROOT=/var/www/html \
-e [email protected] \
-e LETSENCRYPT_HOST=real.domain.example.com \
-e VIRTUAL_HOST=real.domain.example.com \
--mount type=bind,source="$(pwd)/wordpress_data",destination=/var/www/html \
wordpress:latest
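Note that podman create only defines the containers; they still have to be started (container names as in the commands above):

```shell
# Start the database first so wordpress has something to connect to.
sudo podman start blog_db_1
sudo podman start blog_wordpress_1
sudo podman ps    # both should show as "Up"
```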
I created a mysql container for the database and am using wordpress:latest for the blog. In my working directory I have two directories: db_data and wordpress_data. The commands above map those paths inside the containers; that is the --mount part of those commands, at the end. The wordpress container mounts the wordpress_data directory in the current working directory of the host file system to /var/www/html in the container. These paths should be adjusted to match your preferences/working environment! Very important.
ToDo for you: update the database config to use a fixed IP address. Why? Well, without systemd we don't quite as easily get automatic hostname mapping from blog_db_1 to its internal IP address. Use podman inspect blog_db_1 |grep IPAddress
to find the IP address of the database, and then update wp-config.php in the wordpress volume to use that IP as the host. The username, password, etc. will all be as specified on the podman command. Be sure the containers use the same password. In the DB container you're setting the password for the mysql root user, the actual database name, and the more limited-privilege database user. In the other command, for the wordpress container, you are passing parameters to it in order to specify those same values. Of course you can do it from the wp-config.php file as well if that is your preference.
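As a concrete (hypothetical) example of that wp-config.php edit, assuming the inspect command reported 10.89.0.5 and the volume lives in ./wordpress_data:

```shell
# Rewrite the DB_HOST define in place; 10.89.0.5 is a placeholder IP --
# substitute whatever address podman inspect actually reported.
WP_CONFIG=./wordpress_data/wp-config.php
if [ -f "$WP_CONFIG" ]; then
    sed -i "s/define( *'DB_HOST'.*/define( 'DB_HOST', '10.89.0.5:3306' );/" "$WP_CONFIG"
fi
```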
Me? I prefer podman-compose, which can also provide an environment variable for the IP of the database server. But this how-to is done without podman-compose being a hard requirement, as a learning exercise, so that one gets a picture of how the parts fit together.
The environment variables like LETSENCRYPT_HOST are what the other containers use to request the SSL certificate from letsencrypt. Update them to your real email address and the real hostname.
Troubleshooting
No registries for containers?
Probably the docker people complained they were in the config file. Here is what I used in /etc/containers/registries.conf
[[registry]]
# In Nov. 2020, Docker rate-limits image pulling. To avoid hitting these
# limits while testing, always use the google mirror for qualified and
# unqualified `docker.io` images.
# Ref: https://cloud.google.com/container-registry/docs/pulling-cached-images
prefix="docker.io"
location="mirror.gcr.io"
When bringing up new pods with podman, it would complain that iptables was missing:
unable to locate iptables: exec: "iptables": executable file not found in $PATH
I got this once; I fixed it by adding /sbin and /usr/sbin to PATH. You should probably do this in the system profile under /etc, but I did it with:
declare -x PATH=/sbin:/usr/sbin:$PATH
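To make that stick for future logins, a small profile.d snippet works (the filename here is my own choice, nothing standard):

```shell
# Prepend the sbin dirs to PATH for every login shell from now on.
echo 'export PATH=/sbin:/usr/sbin:$PATH' > /etc/profile.d/sbin-path.sh
```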