Forbidden Router: Container Host VM (LanCache/SteamCache + Pihole) and Portainer for management

Intro

This guide is meant to go with The Forbidden Router video series, and this is part 2, building on your XCP-ng config from part 1:

What’s the goal?

To configure SteamCache/LanCache for game caching and Pi-hole for DNS filtering, and to make sure everything is as fast as possible.

It’s always DNS

We have to have a chat about DNS. These things work off of DNS magic, and DNS lookups depend on a hierarchy. We’re adding more lookup steps to DNS, which can negatively impact performance, so we need to measure and monitor DNS performance.

This is handy to know by itself, even if you don’t plan to do this, because DNS being wonky can lead to all sorts of problems. If your DNS is sub-par but your network connection is otherwise amazing, moving from a slow DNS server to a fast one can change your whole experience of surfing the internet.
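Before and after any change like this, it’s worth a quick spot-check. dig reports per-lookup latency directly (1.1.1.1 and 8.8.8.8 below are just example resolvers; substitute whichever servers you want to compare):

# ";; Query time: NN msec" in the output is the lookup latency for that resolver
dig @1.1.1.1 example.com | grep "Query time"
dig @8.8.8.8 example.com | grep "Query time"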

But I digress…

Setting up the VM under XCP-ng

… we already set up pfSense as our router; now we need a VM to run our containers. I picked AlmaLinux 9, which just came out, to be able to run native Docker. Debian was my second choice, with Ubuntu being my third. CoreOS would have probably been my first pick but some things have been happening around Red Hat/CentOS lately.

I connected to XCP-ng via SSH and used wget to save the AlmaLinux ISO in the ISO storage directory:
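The exact command depends on where your ISO SR lives; as a sketch, assuming a local ISO SR mounted under /run/sr-mount (the URL follows the AlmaLinux mirror layout at the time of writing – grab the current ISO name from their site):

# change into the ISO storage repository's mount point on the XCP-ng host
cd /run/sr-mount/<your-iso-sr-uuid>
wget https://repo.almalinux.org/almalinux/9/isos/x86_64/AlmaLinux-9-latest-x86_64-dvd.iso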

Note: The steam cache will be large. Perhaps terabytes large. I suggest you think about how you do storage.

You can make one huge disk if you don’t want to think about it – 2-3 terabytes? 5 terabytes? Whatever makes sense for your setup. I would recommend setting up a separate disk and setting that disk up in AlmaLinux at /opt or /storitron or something like that, and then we will tell Docker to store the SteamCache volume at that path. Think before doing.

My setup uses about 40 TB of cast-off enterprise flash, and I set it up at /storitron so it is separate from the virtual hard disk we’re installing to. Don’t worry if you forgot – it is easy to add a new disk and format it inside this VM later.
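For reference, adding that disk later is only a few commands (a minimal sketch, assuming the new virtual disk appears as /dev/xvdb inside the guest – check lsblk for the real device name):

# format the new disk, mount it at /storitron, and make the mount persistent
mkfs.xfs /dev/xvdb
mkdir /storitron
echo '/dev/xvdb /storitron xfs defaults 0 0' >> /etc/fstab
mount -a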

Installation is straightforward:

Once that’s done, update and reboot before doing anything else:

dnf -y update
reboot
dnf install -y yum-utils device-mapper-persistent-data

# This might change from /centos/ to /almalinux/ I've been told? 
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo

dnf install docker-ce -y

systemctl start docker && systemctl enable docker

At this point, let’s run the hello-world Docker container to see if everything is working:
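docker run hello-world

If Docker is healthy, it pulls the image from Docker Hub and prints a “Hello from Docker!” message.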

Great, that is working well on AlmaLinux 9. One last thing – I recommend you add your regular, non-root user account to the docker group. Why? So you can do docker CLI stuff without root. It’s easy:

sudo usermod -aG docker username-goes-here

I used w for my username, so the last argument of the command is w for me; substitute whatever username you chose. Note that the group change only takes effect on your next login, so log out and back in (or run newgrp docker) before testing.

If you run ip -4 a you should see any network interfaces that were automatically configured, complete with a DHCP IP address. Because of what we’re going to use this for, let’s reconfigure the network with a static IP address on the LAN, outside the DHCP range.

In my case 192.168.1.2 is a free IP address, and easy to remember.

nmtui is a command-line “gui” for configuring the network. Of course you can edit files in /etc if you prefer, but this is relatively newb-friendly and doesn’t require you to remember anything, other than typing nmtui at the CLI as root.
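If you’d rather script it than use the TUI, nmcli does the same thing (a sketch assuming your connection is named ens3 and your gateway is 192.168.1.1 – list connection names with nmcli con show):

# switch the connection to a static address
nmcli con mod ens3 ipv4.method manual \
    ipv4.addresses 192.168.1.2/24 \
    ipv4.gateway 192.168.1.1 \
    ipv4.dns 192.168.1.1
# bounce the connection to apply (from the console, not over ssh!)
nmcli con down ens3 && nmcli con up ens3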

After making the change, de-activate and re-activate the connection. (Don’t deactivate the connection via ssh! You’ll be disconnected. You can reboot if in doubt.)

Re-run ip -4 a and you should see the new static IP address you set.

Now let’s install Portainer. What is Portainer? It’s a gui for managing containers.
( Hey, TrueNAS team, pay attention here :wink: )

# this is a persistent data volume for the container. It survives updates and
# container replacement 
docker volume create portainer_data

# this is the container itself 
docker run -d -p 8000:8000 -p 9443:9443 --name portainer \
    --restart=always \
    -v portainer_data:/data \
    -v /var/run/docker.sock:/var/run/docker.sock \
    portainer/portainer-ce:latest

More info on portainer is in their docs here.

This is the output of running the above command and then docker ps to confirm it is running.

Portainer should now be accessible at https://(ip you set):9443/ (with a self-signed encryption certificate – it is okay to accept it).

It’ll ask you to set a secure password, then you should be taken here:

Next we need to setup PiHole and LanCache.

In the hierarchy, your machine will query the SteamCache/LanCache DNS, the cache DNS will query Pi-hole, and Pi-hole will query the fastest upstream DNS server that we were able to find during the DNS diagnostics way back at the start of the video.

We’ll start with PiHole

The best place for PiHole docs is on github.

They even have a docker run script!

The docker run script is simpler than using the portainer gui to setup the container:

Jeez, look at this UI. All just so we can have an equivalent gui for:

# where on the host to keep Pi-hole's config; defaults to the current
# directory if PIHOLE_BASE isn't already set in your shell
PIHOLE_BASE="${PIHOLE_BASE:-$(pwd)}"

docker run -d \
    --name pihole \
    -p 192.168.1.2:53:53/tcp -p 192.168.1.2:53:53/udp \
    -p 192.168.1.2:80:80 \
    -e TZ="America/New_York" \
    -v "${PIHOLE_BASE}/etc-pihole:/etc/pihole" \
    -v "${PIHOLE_BASE}/etc-dnsmasq.d:/etc/dnsmasq.d" \
    --dns=127.0.0.1 --dns=1.1.1.1 \
    --restart=unless-stopped \
    --hostname pi.hole \
    -e VIRTUAL_HOST="pi.hole" \
    -e PROXY_LOCATION="pi.hole" \
    -e ServerIP="127.0.0.1" \
    pihole/pihole:latest

… and yes, unlike some other stuff out there (cough cough TrueNAS) this gui will let you completely configure these CLI options.

… But this is stupid. Isn’t there a way to just paste the text and have it parsed? I mean, I can just SSH in, paste this command, and be done in 12 seconds. If I have to use this GUI, it is going to take 10 minutes.

Some poor soul spent days, weeks maybe, building this UI.

Fortunately, Portainer has Stacks, which is very similar to docker compose, and you can paste stuff right in. Sanity wins the day for now! (pay attention here again TrueNAS devs…)

… and with the container running, the pihole is accessible at the IP.

Note that you must change the binding from the default to something like this:

     - "192.168.1.2:53:53/tcp"
      - "192.168.1.2:53:53/udp"
      - "192.168.1.2:67:67/udp" # Only required if you are using Pi-hole as your DHCP server
      - "192.168.1.2:80:80/tcp"
 

See how we’re explicitly adding the IP to bind to? Otherwise Docker would bind all IPs on this system on ports 53, 80, etc. But we’re going to have other stuff running on other IPs on this system.

Note: If you want to bind it to a different IP, you can, in the docker-compose we pasted. It isn’t necessary for the Pi-hole, though. It’ll run off of the IP you set up earlier (192.168.1.2 in my case).

Speaking of which, we’re out of IPs on our host machine! We used the one IP we set up earlier, so we should set up another IP for the steam cache, since it will also want to use ports 53 and 80.

Use nmtui to add another IP. I added 192.168.1.3, but you can use any free address on your local network.
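With nmcli, under the same assumption that the connection is named ens3, adding a secondary address looks like:

# append a second IPv4 address to the existing connection and re-apply it
nmcli con mod ens3 +ipv4.addresses 192.168.1.3/24
nmcli con up ens3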

Configuring Steamcache/Lancache.

The stacks feature of portainer is awesome. I hope other open source projects copy the e-z copy-paste/use git url approach!

We’ll use that to configure Lancache. From their docs:

This docker container provides a caching proxy server for game download content. For any network with more than one PC gamer connected, this will drastically reduce internet bandwidth consumption.

The primary use case is gaming events, such as LAN parties, which need to be able to cope with hundreds or thousands of computers receiving an unannounced patch - without spending a fortune on internet connectivity. Other uses include smaller networks, such as Internet Cafes and home networks, where the new games are regularly installed on multiple computers; or multiple independent operating systems on the same computer.

This container is designed to support any game that uses HTTP and also supports HTTP range requests (used by Origin). This should make it suitable for:

* Steam (Valve)
* Origin (EA Games)
* Riot Games (League of Legends)
* Battle.net (Hearthstone, Starcraft 2, Overwatch)
* Frontier Launchpad (Elite Dangerous, Planet Coaster)
* Uplay (Ubisoft)
* Windows Updates

This is the best container to use for all game caching and should be used for Steam in preference to the lancachenet/steamcache and lancachenet/generic containers.

Their quickstart guide is handy, but don’t follow it exactly for our setup here.

Head over to stacks in portainer, create a new stack, and then:

version: '2'
services:
  dns:
    image: lancachenet/lancache-dns:latest
    env_file: .env
#    restart: unless-stopped
    ports:
      - ${DNS_BIND_IP}:53:53/udp
      - ${DNS_BIND_IP}:53:53/tcp

## HTTPS requests are now handled in monolithic directly
## you could choose to return to sniproxy if desired
#
#  sniproxy:
#    image: lancachenet/sniproxy:latest
#    env_file: .env
#    restart: unless-stopped
#    ports:
#      - 443:443/tcp

  monolithic:
    image: lancachenet/monolithic:latest
    env_file: .env
#    restart: unless-stopped
    ports:
      - 192.168.1.3:80:80/tcp
      - 192.168.1.3:443:443/tcp
    volumes:
# setup paths that make sense on your host, this one
# is for mine
      - ${CACHE_ROOT}/cache:/data/cache
      - ${CACHE_ROOT}/logs:/data/logs

But we’re not done – we also need that .env file; that’s where the ${BLAHBLAH} variables come from:

Here’s the contents. You can hit advanced, then paste this in (or upload the file if you prefer).

NOTE you will have to edit these values to make sense for your setup! How big is the cache? What IPs are you binding it to?

In my case:

192.168.1.2 > Pi-hole IP and host IP
192.168.1.3 > IP for LanCache and LanCache DNS

The .env file I use:

## See the "Settings" section in README.md for more details

## Set this to true if you're using a load balancer, or set it to false if you're using separate IPs for each service.
## If you're using monolithic (the default), leave this set to true
USE_GENERIC_CACHE=true

## IP addresses that the lancache monolithic instance is reachable on
## Specify one or more IPs, space separated - these will be used when resolving DNS hostnames through lancachenet-dns. Multiple IPs can improve cache priming performance for some services (e.g. Steam)
## Note: This setting only affects DNS, monolithic and sniproxy will still bind to all IPs by default
LANCACHE_IP=192.168.1.3

## IP address on the host that the DNS server should bind to
DNS_BIND_IP=192.168.1.3

## DNS Resolution for forwarded DNS lookups
UPSTREAM_DNS=192.168.1.2

## Storage path for the cached data
## Note that by default, this will be a folder relative to the docker-compose.yml file
CACHE_ROOT=/storitron

## Change this to customise the size of the disk cache (default 2000000m)
## If you have more storage, you'll likely want to increase this
## The cache server will prune content on a least-recently-used basis if it
## starts approaching this limit.
## Set this to a little bit less than your actual available space 
# 20 tb is wendell's setup, 2tb is 'default' 
CACHE_DISK_SIZE=20000000m

## Change this to allow sufficient index memory for the nginx cache manager (default 500m)
## We recommend 250m of index memory per 1TB of CACHE_DISK_SIZE
## (so the 20TB cache configured above wants roughly 5000m)
CACHE_INDEX_SIZE=5000m

## Change this to limit the maximum age of cached content (default 3650d)
CACHE_MAX_AGE=3650d

## Set the timezone for the docker containers, useful for correct timestamps on logs (default Europe/London)
## Formatted as tz database names. Example: Europe/Oslo or America/Los_Angeles
TZ=America/New_York

Now with this in place we can re-run dnsbench (Steve Gibson’s DNS Benchmark) and see how we’re doing.

… looks like I’ll be setting 129.250.35.251 and .250 to be the upstream DNS! This graph is a bit misleading because one of the DNS servers was so slow it skewed the scale. Just hover the mouse over the bars or re-run the test. It is also sorted fastest-first.

Based on our results, for uncached entries, we add about 0.015 seconds to each lookup request.

Reconfiguring DHCP

With the DNS servers up and working, the final step is to reconfigure DHCP on the LAN. We can reconfigure DHCP to hand out the IP address of the lancache container for DNS, instead of the default.

On pfSense this is very easy – just under the DHCP server settings.

In my case I will set it to 192.168.1.3

The full hierarchy is 192.168.1.3 > 192.168.1.2 > the public DNS server (129.250.35.251 here) – 3 levels. Previously it was 192.168.1.1 (pfSense, OpenWrt, or your router) > ISP DNS server. So this is one extra DNS “hop”.

However, between moving off the slow ISP DNS server and the large local cache, it is still a net gain overall.

Don’t forget to set the VM in XCP-ng to auto-start:
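If you prefer the host CLI over the GUI, the same thing is two xe parameters – one on the pool to enable the feature, one on the VM (the UUIDs are placeholders; find yours with xe pool-list and xe vm-list):

xe pool-param-set uuid=<pool-uuid> other-config:auto_poweron=true
xe vm-param-set uuid=<vm-uuid> other-config:auto_poweron=true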

It is “critical” this VM be up in order for your network to function.

Similarly, make sure all the containers also auto restart, including portainer:
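For containers that already exist, docker update can flip the restart policy in place without recreating them (container names per the examples above):

# set restart policies on the running containers
docker update --restart unless-stopped pihole
docker update --restart always portainer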

Making Sure It All Works

For Lancache Monolithic, just connect to the console and run the command:

tail -f /data/logs/access.log

EZ troubleshooting via the web gui. Here we can see me downloading Deathloop and getting mostly cache misses.

If you don’t see anything here, double-check that your IP addresses are set. You can try manually setting your DNS server to the IP of your LanCache machine and then downloading a game in Steam to see if anything shows up.
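You can also prove the DNS side independently by querying the LanCache DNS directly – a domain it intercepts should resolve to the cache’s own IP (IPs per my setup; lancache.steamcontent.com is the hostname the lancache docs use for this check):

# should print 192.168.1.3 (the cache) rather than a real Steam CDN address
dig +short lancache.steamcontent.com @192.168.1.3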

From there, you can also check PiHole, as it should have some stats as well.

Congratulations! You’re living in the future. :slight_smile:

just curious… why docker-ce over podman?

stay tuned for the video this goes with (I actually do really like podman, but if you’re going to go podman, there are some side effects, and almalinux is probably not the best choice… embrace the free redhat instead…etc)

I’ve run CentOS 7 with podman for my steamcache for the past 2-3ish years, and the other week I switched/migrated over to Fedora 35 Server with podman to run my steamcache.

I’d be curious to know the side effects you saw.

Podman is a 99% fine Docker replacement; it almost works 100% with Portainer the way I’ve got it here, but there seem to be a few minor bugs here and there. I kind of want to do a version of this how-to with the “free” RedHat server + cockpit + portainer, with slightly modified docker-compose files.

This was a bit more newb-friendly to set up, and it is unlikely home usage will run into Docker Hub’s pull rate limit (100 pulls per 6 hours for anonymous users). Now that I’ve posted the screenshots you can see how friendly the gui setup of this is.

I am hoping that the ease of use here gives the folks at TrueNAS (and unraid and everywhere) something to compare to/build on, because Portainer is really darn good gui management of a container fleet.

I’ve run my iptables CentOS 7 firewall on a standalone mini-ITX system for a long time now, because I didn’t want to take the whole network down, but after I converted my networks to VLANs, I thought about converting the physical machine, or making a VM with a similar config in case the physical machine goes down.

I’ve been running pfSense as a VM on a 4-node VMware cluster for years and it is great. I’ve looked at doing the pfSense HA configuration, but the need for synchronization traffic between the two VMs didn’t seem very appetizing, and the HA provided by the VMware cluster is fairly robust. I can live migrate the VM around to the different hosts without losing any TCP sessions, allowing me to transparently do updates and maintenance on the VM hosts without causing any router interruptions.

I haven’t done any performance testing on the virtualized NIC, but I’m able to run pfSense with just 1 vCPU on a gigabit internet connection no problem. The VMware hosts have 10G connectivity so the interface can technically go higher, but most big local traffic is layer 2.

Hi, I’ve been running a pfSense virtual router on ESXi for some time and have had very little issue with doing so. I haven’t passed any hardware NICs through, only using the virtualisation.

The part of my setup that’s a little more complex is the cabling from the ISP. Over here in NZ we get an ONT (Optical Network Terminal) from the network provider. Typically you would then plug in your router to the ethernet interface on there.

Because I have my home cabled with its own patch panel away from where my servers live, I decided to create VLANs on my network, plug the ONT into an untagged port on my switch, and trunk that across to my rack switch.

My ESXi host then has trunk ports to the switch, so I only use virtual switches within ESXi.

I have 2x 1Gbps between my network panel and server rack currently and then 2x 10Gbps for my ESXi host. I will upgrade eventually when I can get hold of the hardware.

I virtualized my pfSense about 2 weeks ago using Proxmox, and that felt a lot “easier” than this (maybe a comparison would be of interest?).

I used vBridges and vNICs, but I might get a 4-port PCIe NIC and use passthrough at a later stage for HW offload, and set up a DMZ inside pfSense.

I have the “router” node added to my cluster, so that’s nice with respect to network storage, backups, etc. over 10G (NFS share provided by TrueNAS Scale).

The router node is in a 1U and pulls ±40 W, which I think is ok. CPU is a 1270v5 (3.6GHz). I see ~1% CPU usage “idle” (if there is such a thing for my internet), and ~10-15% CPU at 500 Mbit transfers.

A future project of mine is to set up a separate WAN network using VLANs and a managed switch, and have HA in case of HW failure. But honestly, pfSense on enterprise HW is so much better (more solid) than the ISP router/fw that I don’t think it will matter with respect to uptime.

Kind of wish the stack were on k8s with VyOS with SR-IOV. If the VyOS container doesn’t work, it can be a kubevirt VM for pfSense with SR-IOV or even full PCI passthrough. The VM route will also allow bootable fallbacks.

Trying to do this with OKD for a 3-node cluster. But I need to build an image to get a 2018 Mac mini working on CoreOS first…

Love the write-up. I’ve been hosting my pfSense/OPNsense VM on Proxmox for a while now. It’s been almost two years and it’s been good, minus the once-in-a-while updates that screw up the setup – and those have been updates on the router firmware side, even. People always say that the hypervisor would get in the way with updates, but it’s mostly been the FW updates that have caused issues. This setup has even been working really well with bypassing my fiber modem and spoofing the authentication over VLAN 0.

Is it possible/practical to do routing/firewall in a container? Either like podman pull nftables or podman pull untangle? That would really make this the forbidden router

oh yeah
@wendell

I don’t know if this will speed up DNS any, but this is what I do for my setup, so I don’t have steamcache-dns → pihole → external DNS.

I have my Pi-hole smartly forward DNS requests to the steamcache-dns only when it has to.

On my Pi-hole I have this file:

/etc/dnsmasq.d/99-custom.conf

(screenshot: the conditional-forwarding entries in /etc/dnsmasq.d/99-custom.conf)
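The entries are plain dnsmasq server= directives – something in this spirit, with 192.168.1.3 standing in for the steamcache-dns address and steamcontent.com as an example domain (the full domain list from the screenshot isn’t reproduced here):

# send only game-CDN lookups to steamcache-dns; everything else resolves
# through Pi-hole's normal upstream
server=/steamcontent.com/192.168.1.3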

There’s a gui in Pi-hole for it, but you only get a line or two in the webgui, so I just edited the file directly.

Webgui here:

https://pihole.local/admin/settings.php?tab=dns

conditional forwarder at bottom

After I edit the file, I just restart the service:

pihole restartdns

Sure thing, this will get you started :wink:

I don’t think it’s something you should do in “production” but for testing and big BGP labs it’s great.

This should improve dns speed by 50 microseconds in my benchmarks

Why Pi-hole instead of the pfBlocker package available in pfSense?

Any recommended blocklists for Pi-Hole?

Something even crazier that I do: I have a 4 vCPU / 8 GB RAM box with DNS, DHCP, TFTP, and LB to act as a helper node for my main bare-metal OKD (because RH doesn’t support nodes with 48 vCPUs and 768 GB RAM) + kubevirt + HCI. I run my router on k8s along with all the other VMs I need :slight_smile:
