Forbidden Router: Container Host VM (LanCache/SteamCache + Pihole) and Portainer for management

I just wanted to share my experience. I started my home lab server years ago, and I’ve always tried to keep things simple in case of failure.
My first server was an AMD Athlon 2200+ with 256MB of RAM and a crappy ECS (Elitegroup) motherboard with a SiS chipset! What a pain!

During lockdown I was bored… so my plan to upgrade my infrastructure started taking shape! At the time I had: my NAS and VM host (an HP MicroServer), a router (a pfSense machine), and an SFP ONT with its media converter…

Back then my ISP started having some issues and I was looking for an IPv6-capable one. pfSense has always had issues with PPPoE even with mitigations, so priority also went to an ISP offering IPoE instead of PPPoE, which is what most ISPs over here use.

I made my plan to upgrade my HW: I needed something to replace my P55 + i5-750 router and virtualize it, and then use my HP MicroServer as an offline backup solution.

I got some drives and an HBA controller, and I had an Intel NIC lying around. In that period I had seen some YT videos about cheap Chinese Xeon motherboards and CPUs.
They had a lot of PCIe lanes! I then scrapped the idea of a no-brand Chinese board due to ECC support and power-loss recovery concerns.
I went with an Asus X99-WS IPMI. I really wanted the IPMI since the machine would be located in the mechanical room, and accessing it is pure pain! But I had my share of pain with that board too!

I was quite lucky: I got the motherboard from Poland, the CPU from Germany, and the RAM from… who remembers? It was a bare minimum of 16GB of DDR4 ECC RAM!

I recycled my PSU from the P55 build, and the chassis… oh, the chassis: I needed one that could fit an “EATX” board! I didn’t want to spend 40 USD on fans, so I settled on the Antec P101; it has a lot of drive cages, space for 2.5" SSDs, and room for EATX motherboards!
It was also cheap back then!

I patiently waited for all the parts to arrive; the postal service is so slow compared to Amazon one-day delivery! I then assembled everything!
It POSTed successfully, but the BMC was protected by a password! That would be only the first small issue with that crappy BMC!

Once I had updated the BMC and the BIOS and tested the HW, I configured the software. I’m using Proxmox. Yes, I know it’s not the best solution, but it’s my personal choice. I had issues with Xen back in the LGA771 days, and I’ve used Proxmox ever since.

So my HW specs are:

  • Intel Xeon E5-2620 v3 (30 EUR shipped)
  • Asus X99-WS IPMI (250EUR shipped)
  • RAM: 2x8GB 2R ECC HMA41GR7MFR8N (30 EUR + shipping)
  • Noctua NH-D9L (Brand new from Amazon)
  • Antec P101 Silent (100EUR shipped with 3 fans included)
  • Dell PERC H200 (flashed to HBA firmware, passed via VFIO to the OMV VM, about 25 EUR)
  • 4 x WD Red 3TB
  • WD ~~Black/Gold/~~Enterprise 3TB (it is failing)
  • WD Purple 3TB
  • Crucial P2 500GB NVME (Added later for Linux VM)
  • Kingston 120GB SSD as boot drive (Coming from the P55)
  • PNY 250GB SSD for Docker (unused for now; added later to increase Docker volume space)
  • Nvidia GF119 GT-X20, no idea which rebrand it is (Windows VM)
  • Intel PRO/1000 (To be replaced by Intel I340)

The SW requirements were:

  • Router VM with the physical NICs passed through via VFIO; only virtual NICs for the other VMs/containers
  • VoIP PBX (FreePBX)
  • Docker containers (Reverse proxy, Plex, Mailu, etc)
  • NAS (OpenMediaVault)
  • Local/Remote Windows VM just in case I need something.

Added later:

  • Ubuntu/PopOS VM for development (Crucial NVMe as boot drive via VFIO)

The Windows VM
I used VFIO to pass through a USB controller, so the USB 3.0 ports on the back and some on the front panel are available to the VM. The HD audio and the GPU are passed through directly as well! Unfortunately, on resume from standby the VM dies. (I’ve never discovered why, but I barely use that machine.)
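
For reference, the passthrough boils down to a few `qm set` calls on the Proxmox host. Here’s a minimal sketch, assuming a hypothetical VM ID of 101 and example PCI addresses (check yours with `lspci -nn`):

```
# Example only: VM ID and PCI addresses are placeholders
qm set 101 -machine q35 -bios ovmf              # q35 + OVMF makes PCIe passthrough easier
qm set 101 -hostpci0 01:00.0,pcie=1,x-vga=1     # the GF119 GPU
qm set 101 -hostpci1 01:00.1,pcie=1             # the HD audio device (address is an example)
qm set 101 -hostpci2 00:14.0,pcie=1             # an onboard USB 3.0 controller
```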

The network setup
Three NICs are in an LACP bond to the managed switch and one is the uplink to the ONT.

The switch is a cheap Zyxel GS1900-24; most of its ports are used by the Ethernet sockets I have around the house.

Due to the issues with my old ISP, I migrated to the new one about a year ago. The new one has IPoE and IPv6, but IPv4 only as a service (IPv4aaS via MAP-T), so I cannot use pfSense anymore. I’m using OpenWRT in a VM and it is really fast. (OpenWRT is the only solution so far that implements MAP-T.)
I’ve used OpenWRT for years on my routers, APs, etc., and I even ported it to some cheap repeaters. It has never failed me, but I had been using pfSense because a lot of people use it and I wanted to try it.
Since its first “install” I’ve never had any issue, but I’m still trying to figure out how to configure LACP.
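
Since the physical NICs are passed straight into the OpenWRT VM, the LACP bond has to live inside the guest. A generic iproute2 sketch of an 802.3ad bond, with placeholder interface names (on OpenWRT this needs the kmod-bonding package; there is also a proto-bonding package to persist it via UCI, if I remember correctly):

```
# 802.3ad (LACP) bond over three example interfaces -- names are placeholders
ip link add bond0 type bond mode 802.3ad miimon 100
ip link set eth0 down; ip link set eth0 master bond0
ip link set eth1 down; ip link set eth1 master bond0
ip link set eth2 down; ip link set eth2 master bond0
ip link set bond0 up
# the three GS1900-24 ports on the other end need a matching LACP trunk
```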

On my network I have 2 VLANs: one for the public stuff (services) and the other for my LAN traffic. PUBLIC cannot connect to LAN, but LAN can connect to PUBLIC; both can reach the WAN. Docker uses macvlan networks to connect to the internet/local networks.
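
The macvlan part is just the stock Docker driver; a sketch with made-up addressing, using a VLAN sub-interface as the parent:

```
# Example values: subnet, gateway and parent interface are placeholders
docker network create -d macvlan \
  --subnet=192.168.20.0/24 \
  --gateway=192.168.20.1 \
  -o parent=eth0.20 \
  public_vlan
# containers then attach with e.g.: docker run --network public_vlan ...
```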

I use Portainer to manage my containers, but I’m thinking of migrating to k8s/k3s.
My Docker host doesn’t have a lot of storage (I’m using the LVM driver for Docker volumes, backed by a tiny LVM volume group on the boot drive), and I would love to move to a GlusterFS setup using my NAS VM to store the volumes.
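
The PNY SSD mentioned above was added for exactly this; short of the GlusterFS plan, the simpler route would be extending the Docker volume group onto it. Roughly, with placeholder device/VG/LV names:

```
# Placeholders: /dev/sdb = the PNY SSD, docker-vg / docker-data = my VG and LV
pvcreate /dev/sdb
vgextend docker-vg /dev/sdb
lvextend -l +100%FREE /dev/docker-vg/docker-data
resize2fs /dev/docker-vg/docker-data    # or xfs_growfs <mountpoint> if it's XFS
```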

All the public services have their traffic going through Traefik, and to protect them I’m using CrowdSec (with various collections); the bouncer is running in the OpenWRT VM.
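
Wiring the remote bouncer up is mostly a matter of generating an API key on the CrowdSec side and pointing the firewall bouncer in the OpenWRT VM at the local API; a rough sketch (container name, host and path are examples):

```
# On the Docker host: register a bouncer against the CrowdSec local API
docker exec crowdsec cscli bouncers add openwrt-firewall-bouncer
# Then put the printed key into the bouncer config on OpenWRT, e.g.
# /etc/crowdsec/bouncers/crowdsec-firewall-bouncer.yaml:
#   api_url: http://<docker-host>:8080/
#   api_key: <key from the command above>
```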

So far my only big issue with the setup itself is the power-on sequence for the VMs and the Docker containers: when I have to power cycle the whole machine, I have to bring everything up manually. The only VMs with auto-boot are the router and the PBX.
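
The auto-boot part is at least easy to extend in Proxmox; a sketch with hypothetical VM IDs, giving the router a head start before the Docker host comes up:

```
# Hypothetical VM IDs: 100 = OpenWRT router, 102 = Docker host VM
qm set 100 -onboot 1 -startup order=1,up=30   # boot first, wait 30s before the next VM
qm set 102 -onboot 1 -startup order=2
# inside the Docker VM, restart policies handle the containers, e.g.
# docker update --restart unless-stopped <container>
```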

Other issues? I could write a book about the issues I had with the BMC!
I even looked for leaked schematics of the board online, hoping to find the pinout of the ASPEED chip and port OpenBMC to it! But no luck!

EDIT: typos, added pictures, added prices
