Devuan + Podman ... Together at last?

Goals

The goal is to get Podman up and running on Devuan (Debian minus systemd) and in a position to run a fully automated Let's Encrypt + nginx frontend proxy for any number of SSL hosts behind a single SNI endpoint, with something similar to the setup described at:
https://hub.docker.com/r/jwilder/nginx-proxy

We are using podman, instead of docker, because there are free mirrors that don’t really have pull rate limits and you won’t ever get a sales call from Docker about Gosh All This Traffic Sure Should Cost A Lot Of Money (when really, it’s a manifest file you could cache for your DevOps and not spend the $5k on the Docker fees… anyway… I digress).

Requirements

  • Devuan Chimaera (older releases are a no-go)
  • Port 80 and 443 forwarded to your host from your public IP if you intend to allow https traffic to the container(s)
  • python3-pip
  • update /etc/apt/sources.list to include contrib and non-free

My sources.list, if you need it:

deb http://mirrors.dotsrc.org/devuan/merged chimaera main contrib non-free
deb-src http://mirrors.dotsrc.org/devuan/merged chimaera main

deb http://mirrors.dotsrc.org/devuan/merged chimaera-security main contrib non-free
deb-src http://mirrors.dotsrc.org/devuan/merged chimaera-security main

# chimaera-updates, previously known as 'volatile'
deb http://mirrors.dotsrc.org/devuan/merged chimaera-updates main contrib non-free
deb-src http://mirrors.dotsrc.org/devuan/merged chimaera-updates main
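
After editing sources.list, refresh the package index so the installs below can find everything:

apt update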

Apt Requirements

# Podman itself
apt install podman 

# Possibly needed later for some podman helpers that exist on PyPI,
# esp. podman-compose, which is super handy.
apt install python3-pip

# The Podman Debian package expects systemd, but there is no systemd here.
# This package is meant to shore that up.
apt install debian-podman-config-override
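
If you want podman-compose later (it is not a hard requirement for this guide), it can be pulled from PyPI once pip is in place:

pip3 install podman-compose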

Podman is stateless… but not actually

Podman can run without a Unix socket, and it can run statelessly. That is not good for our use case, though, because we need one container to be able to inspect other containers, and the only way to do that is through the Unix socket.

The problem is that none of the Devuan packages re-create the systemd socket and service units as an init script, so you can't run service --status-all and actually see a podman service.

Not to worry, here is how that is remedied:
Create /etc/init.d/podman

File Contents
#! /bin/sh

### BEGIN INIT INFO
# Provides:             podman
# Required-Start:       $remote_fs $syslog
# Required-Stop:        $remote_fs $syslog
# Default-Start:        2 3 4 5
# Default-Stop:
# Short-Description:    podman container services
### END INIT INFO

LOGGING="--log-level=info"

export PATH="${PATH:+$PATH:}/usr/sbin:/sbin"

. /lib/init/vars.sh
. /lib/lsb/init-functions

do_start() {
        # Run the API service in the background; --time=0 disables the
        # idle timeout so the socket stays up indefinitely.
        start-stop-daemon --start --quiet --background \
                --exec /usr/bin/podman -- \
                $LOGGING system service --time=0 unix:/run/podman/podman.sock
}

case "$1" in
  start)
        do_start
        ;;
  restart|reload|force-reload)
        echo "Error: argument '$1' not supported" >&2
        exit 3
        ;;
  stop)
        # No-op
        ;;
  status)
        exit 0
        ;;
  *)
        echo "Usage: podman [start|stop]" >&2
        exit 3
        ;;
esac
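
Make the script executable and, if you want the socket up at boot, register it with the usual sysvinit tooling:

chmod +x /etc/init.d/podman
update-rc.d podman defaults
service podman start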


Verify that service podman status works correctly and that /run/podman/podman.sock exists. If not, skip down to Troubleshooting or post below for help.
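
A quick sanity check is to hit the Docker-compatible ping endpoint over the socket (this assumes a curl new enough to speak Unix sockets):

curl --unix-socket /run/podman/podman.sock http://localhost/_ping
# should print: OK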

Networking Requirements

At a high level, what we are going to do is: 1) create a container based on nginx that holds all the SSL certs we need to handle inbound traffic to our containers (nginx acts as a proxy here, and handles multiple SSL hosts on the same IP using SNI), and 2) create a second container that watches for new containers and then requests SSL certificates based on the environment variables in the containers we create later.

We will define the LETSENCRYPT_HOST and VIRTUAL_HOST environment variables, and this setup will use them to request new SSL certs from Let's Encrypt if those certs are not already present. It handles renewals transparently in much the same way. Shut down and remove an old container? Its cert will evaporate, eventually.

These containers are ultimately just helpers to handle the SSL end of things for you. You don't do anything with them other than let them run and proxy your content. In this walkthrough we will be setting up a simple blog, which has an Apache and a MySQL component as well.

Nuts and Bolts of that Setup

Create the reverse proxy network.

reverse-proxy: containers attached to this network will have their environment variables read by the nginx proxy / letsencrypt helper, which requests and applies SSL certs on their behalf.

podman network create --driver bridge reverse-proxy
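
You can confirm the network exists, and see which subnet podman picked for it, with:

podman network ls
podman network inspect reverse-proxy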

Create the nginx + letsencrypt Containers

Make a directory such as nginx-letsencrypt with a certs directory inside it to store the certs. The certs directory will be shared between the letsencrypt companion container and the nginx proxy container.

mkdir nginx-letsencrypt
mkdir nginx-letsencrypt/certs
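
The run commands below use relative paths like ./certs, so they assume you are working from inside that directory:

cd nginx-letsencrypt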

Here are the podman commands I use to bring up the nginx proxy companion containers mentioned in the intro of this guide. The first is the letsencrypt renewal bot; note that it borrows volumes from the proxy container (--volumes-from nginx-proxy), so create the nginx-proxy container from the second command before starting this one.

sudo podman run -d \
    --name nginx-letsencrypt \
    --net reverse-proxy \
    --volumes-from nginx-proxy \
    -v ./certs:/etc/nginx/certs:rw \
    -v /var/run/podman/podman.sock:/var/run/docker.sock:ro \
    jrcs/letsencrypt-nginx-proxy-companion

and then this is the actual nginx proxy:

sudo podman run -d -p 80:80 -p 443:443 \
    --name nginx-proxy \
    --net reverse-proxy \
    -v ./certs:/etc/nginx/certs:ro \
    -v /etc/nginx/vhost.d \
    -v /usr/share/nginx/html \
    -v /var/run/podman/podman.sock:/tmp/docker.sock:ro \
    --label com.github.jrcs.letsencrypt_nginx_proxy_companion.nginx_proxy=true \
    jwilder/nginx-proxy

Inbound traffic hits the nginx proxy, which routes it to the appropriate container inside the reverse-proxy network.

At this point it may be worthwhile to do a few checks to see if everything is working properly.

podman ps -a        # shows a list of all the containers that have been built
podman start (name or hash id)      # start a container 
podman inspect (name or hash id)  |grep IPAddress       # see the internal IP assigned to a container 
podman logs (name or hash id) # see the logs of a container 

It should be possible to load https://localip and http://localip where localip is your local IP address.
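
For example, with curl (192.168.0.10 standing in for your host's local IP; -k because the cert will be self-signed until letsencrypt has done its thing):

curl -kI https://192.168.0.10
curl -I http://192.168.0.10

Most likely you will get a 503 from nginx at this point, since no backend containers have registered a VIRTUAL_HOST yet.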

You can use this to verify your firewall is working too: your public IP should have ports 443 and 80 forwarded to the machine running the containers. podman ps should show that the proxy container is running and that ports 80/443 have been mapped to that specific container.

# podman ps
CONTAINER ID  IMAGE                                                    COMMAND               CREATED         STATUS             PORTS                                     NAMES
2900c123e054  docker.io/jwilder/nginx-proxy:latest                     forego start -r       35 minutes ago  Up 35 minutes ago  0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp  nginx-proxy
4e9a8daa6e41  docker.io/jrcs/letsencrypt-nginx-proxy-companion:latest  /bin/bash /app/st...  34 minutes ago  Up 34 minutes ago                                            nginx-letsencrypt

Now The Actual Work

It goes without saying that a DNS entry such as blog.foo.dev should be pointing at the public IP where this is running. Without good working DNS, letsencrypt is not going to give you a cert.
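
A quick way to confirm the record resolves before letsencrypt tries (blog.foo.dev standing in for your real hostname):

dig +short blog.foo.dev
# should print the public IP that forwards 80/443 to this box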

I recommend creating a new directory such as blog and putting the configuration in there. It is possible to use podman-compose, a (mostly) drop-in replacement for docker-compose, but here are the command-line equivalents.
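
First, the working directory and the two bind-mount source directories the commands below expect (the names are just what I use here):

mkdir -p blog/db_data blog/wordpress_data
cd blog

With those in place, create the database container: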

podman create --name=blog_db_1 --network reverse-proxy \
 -l io.podman.compose.config-hash=123 \
 -l io.podman.compose.project=blog \
 -l io.podman.compose.version=0.0.1 \
 -l com.docker.compose.container-number=1 \
 -l com.docker.compose.service=db \
 -e MYSQL_ROOT_PASSWORD=yourpassword \
 -e MYSQL_DATABASE=wordpress \
 -e MYSQL_USER=wordpress \
 -e MYSQL_PASSWORD=otherpassword \
 --mount type=bind,source=./db_data,destination=/var/lib/mysql mysql:5.7

and

podman create --name=blog_wordpress_1 --network reverse-proxy  \
 -l io.podman.compose.config-hash=123 \
 -l io.podman.compose.project=blog \
 -l io.podman.compose.version=0.0.1 \
 -l com.docker.compose.container-number=1 \
 -l com.docker.compose.service=wordpress \
 -e WORDPRESS_DB_HOST=db:3306 \
 -e WORDPRESS_DB_USER=wordpress \
 -e WORDPRESS_DB_PASSWORD=sameasabove \
 -e WORDPRESS_DB_NAME=wordpress \
 -e WEB_DOCUMENT_ROOT=/var/www/html \
 -e [email protected] \
 -e LETSENCRYPT_HOST=real.domain.example.com \
 -e VIRTUAL_HOST=real.domain.example.com \
 --mount type=bind,source=./wordpress_data,destination=/var/www/html \
wordpress:latest
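
podman create only defines the containers; start them (the database first, so wordpress has something to talk to) with:

podman start blog_db_1
podman start blog_wordpress_1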

I created a MySQL container for the database and am using wordpress:latest for the blog. In my working directory I have two directories: db_data and wordpress_data. The commands above map those paths into the containers; that is the --mount part at the end of each command. The wordpress container mounts ./wordpress_data from the current working directory of the host file system to /var/www/html in the container. These paths should be adjusted to match your preferences/working environment! Very important.

ToDo for you: update the database config to use a fixed IP address. Why? Well, without systemd we don't quite as easily get automatic hostname mapping of blog_db_1 to its internal IP address. Use podman inspect |grep IPAddress to find the IP address of the database container and then update wp-config.php in the wordpress volume to use that IP as the host. The username, password, etc. will all be as specified on the podman command line. Be sure the two containers use the same password. In the DB container you are setting the password for the MySQL root user, the actual database name and the more limited-privilege database user. In the other command, for the wordpress container, you are passing parameters to specify those same values. Of course you can do it from the wp-config.php file as well if that is your preference.
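
Concretely, that looks something like this; 10.89.0.5 is a made-up example address, so use whatever inspect actually reports for blog_db_1:

podman inspect blog_db_1 | grep IPAddress

# then, in ./wordpress_data/wp-config.php:
# define( 'DB_HOST', '10.89.0.5' );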

Me? I prefer podman-compose, which can also provide an environment variable for the IP of the database server. But this how-to is done without podman-compose as a hard requirement, as a learning exercise, so that one gets a picture of how the parts fit together.

The environment variables like LETSENCRYPT_HOST are what the companion container uses to request the SSL certificate from Let's Encrypt. They should be updated to your real email address and the real hostname.

Troubleshooting

No registries for containers?

Probably the docker people complained they were in the config file. Here is what I used in /etc/containers/registries.conf

[[registry]]
# In Nov. 2020, Docker rate-limits image pulling.  To avoid hitting these
# limits while testing, always use the google mirror for qualified and
# unqualified `docker.io` images.
# Ref: https://cloud.google.com/container-registry/docs/pulling-cached-images
prefix="docker.io"
location="docker.io"
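
If unqualified image names (plain nginx, mysql:5.7 and so on) still refuse to resolve, depending on the podman version you may also need a search registry declared in the same file; this is the standard registries.conf v2 key:

unqualified-search-registries = ["docker.io"]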


When bringing up new pods with podman, it would complain that iptables was missing.

locate iptables: exec: "iptables": executable file not found in $PATH

I got this once; it was fixed by adding /sbin and /usr/sbin to PATH. You should probably do this in the system profile under /etc, but I did it with:
declare -x PATH=/sbin:/usr/sbin:$PATH
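
A more permanent variant would be to drop it into the system-wide profile, for example (the filename is just a suggestion):

# /etc/profile.d/podman-path.sh
export PATH="/sbin:/usr/sbin:$PATH"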



“Devuian” lul.


Ok, read all of it. It seems pretty hacked together, but if it works, why not. Thanks for the tutorial, Wendell. Not sure what it will take to run podman on a fully s6 (s6-linux-init and s6-rc) system, but I guess this could be a start.

I’m still not that much into OCI containers, because I don’t have a homelab close by to test in (yet™). It’s a pain to even spin up a VM in my homelab across the pond (and I still do, because I’m in the process of documenting s6 for the layman).

Sorry for the slight off-topic. Again, thanks for the how-to! :smiley:

Nice to see Devuan being promoted, and especially nice to see it applied to something relevant to modern dev.

I got a Linode Nanode during the 1st lockdown specifically to build a project that would spin up containers, but my research left me with docker as the only option, so thanks to your podman effort I now have another option. Because I only have one CPU core & 1 GB available, I chose Alpine as the OS, so Devuan can easily be substituted here (I used Devuan on RPi for years, back when it was Jessie).

I did see in the Troubleshooting section that docker can complain about missing iptables. I also found that “external forces” were a very real problem (having lost my original Debian OS to an SSH intrusion), but being limited to such finite resources I did not want to use iptables, instead opting to create something ultra-light that could also gather SSH and web server intrusion information.

Will this be a problem, or is iptables built into the base container images, and I can’t replace it because something talks to it?

I was going to ask a question about limiting db access to that wordpress server, but the comment about podman-compose covers that, thanks.

Has this resulting Devuan + Podman image been used on Linode servers? (I know Devuan is not in their default OS list).

Cheers

Paul


I believe iptables is required by docker as a dependency, due to how OCI container networking works. iptables is probably used to grant access and do the NAT for the OCI container network. I don’t think you can remove it, but you probably don’t have to enable it on your host directly (I think). I could be wrong though.

Btw, if you use Alpine Wall (awall), the underlying technology is iptables. That’s also true for Uncomplicated Firewall (ufw) and firewalld. TBH, iptables is really potent if you know how to use it, but I’m personally really biased towards pf and would prefer a *BSD as a firewall.

iptables should be fine if you have another layer in front of it, but that’s probably not the case with VPSes and containers.

Not to derail my own thread, but have you ever had problems with iptables SNAT forwarding to a “remote” subnet on the same box? Tap, tun and wireguard don’t seem to work; socat works fine. The same syntax for SNAT port forwarding on the local subnet works fine. IP routing otherwise works fine. iptables SNAT port forwarding? Disappears into the ether.

I.e.:

Public IP 1.2.3.4
Internal 192.168.0.1
Internal gateway to other subnet 192.168.0.254
Subnet behind 192.168.0.254 : 192.168.1.0/24

If the internal interface is anything other than a wired NIC, e.g. a virtual interface like wireguard, iptables NAT doesn’t seem to work.

Routing works 100%. If the default route for the 192.168.1.0/24 machines goes through 192.168.0.1 to get to the public internet, then NAT that way works fine too. Just not port forwarding. Literally 100% fine, except port forwarding doesn’t work.

SNAT is best practice, but I also tried MASQUERADE; nada.

I must be losing my mind, I thought, let’s try it with socat. Socat worked 100% first try.

I can also double port forward, first to 192.168.0.254 and then on that machine map it to where it needs to go from there, but that’s stupid. A specific route does exist in the routing table to direct traffic for that subnet to that IP.
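
For anyone following along, here is a minimal sketch of the kind of rules being described, forwarding public port 443 to a hypothetical host 192.168.1.10 on the far subnet (addresses as in the example above; the target host is made up):

# on the box holding 1.2.3.4 / 192.168.0.1
sysctl -w net.ipv4.ip_forward=1
iptables -t nat -A PREROUTING -d 1.2.3.4 -p tcp --dport 443 -j DNAT --to-destination 192.168.1.10:443
iptables -t nat -A POSTROUTING -d 192.168.1.10 -p tcp --dport 443 -j SNAT --to-source 192.168.0.1
iptables -A FORWARD -d 192.168.1.10 -p tcp --dport 443 -j ACCEPT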


Didn’t try that. I had managed 2 iptables firewalls (one external, one internal, DMZ between them), but they were pure router / firewall boxes and had all my VPNs on other boxes or VMs (openVPN x2 and a Cisco ASA with a few IPSECs).

Sorry, I can’t help, never had the problem with snat before.

Edit: also, I’m probably not as smart as I may seem to be, I’m but an average user that had more access to production stuff than I should’ve had, but I learned a lot, so fine by me. I enjoyed doing the work of a whole department by myself (together with another colleague), from firewalls, to data center network and infrastructure, to maintaining hosts, VMs, network services (DNS, firewalls, routing, VPNs, email etc.) and some helpdesk here and there. To give a bit more context, I have around 3 years of experience with production Linux environments, so… not the smartest around (by far!).


Yeah, all a bit fat for a 1-core, 1 GB VPS whose primary task is to build m68k GCC projects from GH via containers, behind a web interface. iptables requires learning as well, too much for someone who does not want to be a DevOps or SysOp or SysAdmin.

This is the knowledge we require: stuff that works, proof that best practice often falls short (and hasn’t been fixed for (only?) god knows how long).

:slight_smile:

Cheers

Paul

PS. you are welcome to hijack your own thread, especially if it relates to the underlying implementation


Ok, BSD person, make a new thread for this if the answer’s good: multipath TCP.
Here’s what I’d like you to research a bit: what does it look like to set up a microvm running *BSD with multipath TCP?

I want to do a howto on using two boxes, one local one remote, to do a multipath tcp tunnel between two points. So that every connection then supports mptcp via the tunnel.

I did it manually a while back, but now I want to boil it down to something as small as possible, and BSD looks like they have the better setup rn, but I’m not sure since I don’t have as much xp there.


I’ll take that as a challenge / homework, I’ll see what I can do. I’ll try to replicate your issue with iptables snat on a local box and try to document it (in case someone needs that and can’t use BSD).

FWIW:
the link I posted resulted from “hey, why can’t the upstream firewall be eliminating these threats to my box?”. I don’t feel the need to implement a firewall only to protect my box/project/platform (as opposed to eliminating the threats before I get them).

My target is to (eventually) use OWL (OpenWall) on RPi, but to also learn what sort of threats there are, and build default blacklists that can then be installed (for sshd and nginx/web server)

Because of the way Alpine and Openwall work, I don’t see an issue using Devuan. And for container usage: Devuan + podman = Alpine + docker = Openwall + Openwall

PS. I look forward to the results of those *BSD iptables & Multipath TCP exercises, cheers

Really great stuff here! Seems very well put together. I didn’t have many issues following along other than file permission things and usual podman setup quirks - some quick google-fu got everything resolved.

This reminds me of a YT tutorial setting up wildcard certs for home lab use that was really good.

Put Wildcard Certificates and SSL on EVERYTHING - YouTube

I really need to sit down one weekend and get SSL certs ironed out for my domain so I can start hosting services from my Proxmox cluster with HTTPS the right way.

Thanks for the tutorial!

Could these scripts etc. also be a solution for getting podman to work on WSL without having to do containers inside containers using the WSL2 Genie?


I was going to learn how to migrate over to pogman, but apparently it’s already old hat now, and we should all be moving over to kubernetes for server-side stuff like this. HOWEVER… it really is a lot more complicated and I really don’t blame anybody for wanting a much simpler solution, especially while kubernetes is still growing and evolving so much.

pogman is good! And so is s6, which I use all the time within my containers (in fact only s6 and nothing else). So they do seem to make for a very good combination to put to work together.

Having used jwilder’s letsencrypt/nginx solution a fair bit, I’d have to recommend traefik instead. It can basically do everything the jwilder solution does, including listening for docker containers, but it has a lot of other logging features you’d want for production.
Sure, you can adapt the jwilder solution since it’s just nginx, but I’ve found that this gets confusing, at the very latest once someone new needs to figure out how everything is set up.

While I did learn a lot from jwilder’s work, I did find his approach a bit problematic. First of all, the container itself is able to access the Docker socket, which may not be insecure in this case, but then again it feels like we are handing out prison keys to the inmates and expecting them to behave.

More importantly, it goes against one of the unspoken principles behind containerization/virtualization – “Your app should be oblivious to the fact that it is containerized/virtualized”. Maybe we should treat containers as individual machines with their own networking stack, storage, etc, and deploy apps on them as we would on a vanilla VPS.

In this particular use case, couldn’t we just pass the public interface’s port 80 and 443 into an Nginx container, and issue certificates that way? I did something similar a while back but I wasn’t happy with the results. It just involved installing certbot and Nginx on the Host OS and backend websites (I used GhostCMS) inside containers.