For a number of years, I’ve been using a router based on CentOS 7 to separate most wired devices on my network from all of the wireless devices. The router connected to my cable modem was an Untangle instance running on Qotom hardware; to it I connected the wireless access points, a switch for devices such as streaming boxes, Blu-ray players, and televisions, and my CentOS 7 machine. The hardware is fairly old, but I keep it around because it supports a decent number of PCIe lanes and would be fairly expensive to replace. I don’t get great speed out of it: even when copying from NVMe to NVMe, I normally don’t see better than 5Gbps, and an iperf3 test shows about 28Gbps of raw bandwidth. Still, it’s better than gigabit, though I may have been better off just sticking with 10Gbps direct-attach copper. Behind it, I’m running two desktop PCs and two Unraid servers, each with a 40Gbps link.
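For anyone who wants to compare numbers, a test along these lines is what I mean by the iperf3 figure. The address is just a placeholder for one of the 40Gbps hosts, and the parallel-stream flag is worth adding, since a single TCP stream often can’t saturate a 40Gbps link:

    # on the receiving 40Gbps host
    iperf3 -s

    # on the sending host: 8 parallel streams for 30 seconds (10.0.0.2 is a placeholder)
    iperf3 -c 10.0.0.2 -P 8 -t 30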
MSI X99S Gaming 7
Intel® Core™ i7-7820X (40 PCIe lanes)
16GB of memory (currently)
OS on a 256GB Samsung NVMe drive (soon switching to SATA, for reasons)
Two dual-port 40Gbps Mellanox ConnectX-3 cards
One 4-port Intel gigabit card
Soon adding a dual-port 10Gbps Mellanox ConnectX-3 card
My problems started when I switched from cable to fiber. After the switch, which replaced my Qotom hardware with a Calix GigaSpire u6 from Allo, I lost the ability to SSH into my DNS servers that sit on the WAN side of my CentOS router. Nothing changed on the CentOS box at all, so there must be something in the Calix security settings that is blocking the connectivity. I had to set up port forwarding in the CentOS firewall to reach the web interface of the Calix router, and I’ll likely have to do the same thing to get to my DNS servers and services, but whatever.
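For the curious, the port forward is basically the stock firewalld recipe, something along these lines; the zone name and addresses are placeholders for however your LAN zone and the Calix’s address actually look, and per the firewalld documentation, forwarding to a different host also needs masquerading enabled in that zone:

    # forward port 8443 on the router's LAN side to the Calix web UI (192.168.1.1 is a placeholder)
    firewall-cmd --permanent --zone=internal --add-forward-port=port=8443:proto=tcp:toport=443:toaddr=192.168.1.1
    firewall-cmd --permanent --zone=internal --add-masquerade
    firewall-cmd --reload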
My next challenge came from trying to add a dual-port 2.5Gbps Intel network adapter to the CentOS router. While the hardware is detected, the OS doesn’t see it as a network interface. From what I understand it’s a kernel limitation (the driver for Intel’s 2.5Gbps chips only landed in kernels much newer than the one CentOS 7 ships), and yeah, I need to update to a newer OS anyway.

That’s where I started to question things. If I use an NVMe SSD on the motherboard, I lose access to the lowest x16 slot because the two devices share the same lanes, so I took the existing NVMe out, put in a SATA M.2, and installed Oracle Linux 9. That process sort of worked, but it has been pretty painful so far. The OS doesn’t use any of the configuration files in /etc/sysconfig/network-scripts and instead seems to do everything through NetworkManager, which seems odd, because when I kickstart an OL9 server at work it uses the configuration files without much fuss; then again, I’m not typically doing anything as complicated as routing for my job. That computer has Internet access just fine, and connectivity between the bridged devices on the LAN side also works fine, but I couldn’t get it to route properly. Forwarding is enabled, but I don’t know whether I’m missing something in firewalld, missing a route, or whether dnsmasq needs to be set up to make things work. After several hours of mucking around, I decided it was enough for the weekend and put the old NVMe back in to get back to a known stable state.
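In case it helps anyone hitting the same wall (or my future self on the next attempt), this is the sort of thing I plan to work through on the OL9 box. The connection names and zones are placeholders, and it assumes NAT out the WAN side, so treat it as a sketch rather than a known-good config:

    # make IPv4 forwarding survive a reboot (the runtime flag alone doesn't)
    echo 'net.ipv4.ip_forward = 1' > /etc/sysctl.d/90-router.conf
    sysctl --system

    # put the WAN connection in the external zone (masquerading is on by default there)
    # and the LAN bridge in the internal zone; wan0 and br0 are placeholder names
    nmcli connection modify wan0 connection.zone external
    nmcli connection modify br0 connection.zone internal
    nmcli connection up wan0 && nmcli connection up br0

    # make sure masquerading is actually on for the WAN-facing zone, then reload
    firewall-cmd --permanent --zone=external --add-masquerade
    firewall-cmd --reload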
Has pfSense or OPNsense matured enough in the last few years to support 40Gbps ConnectX-3 cards? Would I get better throughput using one of those two, or should I try Untangle again? I looked for some tutorials but didn’t find anything that was quite what I was looking for. I’m sure that CentOS and Oracle Linux aren’t the best choices for handling router duties, but I’m not sure what the best direction would be. I could also look at a dedicated network appliance to handle these tasks, but without switching all of my cards out to something else, I’d likely be looking at $650+ for a device that handles at least four QSFP+ connections.