A couple of weeks ago my pfSense router started rebooting on me every couple of minutes. A few months ago, it was rebooting maybe a couple of times a day, but I thought I had solved that by re-seating a loose AC power connector on the PSU. A few months before that, it had reset on me a couple of times, and after yanking the machine and testing it outside my rack, I wasn't able to reproduce the problem and figured it had just gone away. This time, however, the problem wasn't going away.
I reinstalled pfSense in case something was corrupted, with total disregard for all my VPN keys and years of RRD data, and there was no change: reboots, errors, and what I thought had evolved into immediate hard lockups on boot, since I couldn't get any response from key presses on the terminal.
Obviously my hardware was failing and had gotten to the point where it needed fixing. "No big deal," I thought, "I'll just swap out all the hardware." So I grab the motherboard, RAM and CPU from an old GPU-mining machine that hasn't been powered on for probably a year now.
Seeing a few x16 PCI-E slots on the replacement hardware, I figured I may as well load them up with some 10GbE cards I have sitting around, since I currently only have enough 10Gb ports on my cheap Quanta switches to get that speed between my main desktop and file server. I load up FreeBSD 10.1, download and compile the driver for my 10GbE cards, and scp the .ko files over to my new pfSense box.
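For anyone following along, installing and loading an out-of-tree NIC driver on FreeBSD looks roughly like this. This is just a sketch; "if_mydrv" is a placeholder module name, since the actual driver depends on which 10GbE cards you have.

```
# "if_mydrv" is a placeholder for the actual driver module name.
cp if_mydrv.ko /boot/modules/       # install the freshly built module
kldload if_mydrv                    # load it into the running kernel
kldstat | grep if_mydrv             # confirm it registered
ifconfig                            # the new interfaces should now show up

# To load it automatically on every boot, add a line to /boot/loader.conf:
#   if_mydrv_load="YES"
```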
Once I load the kernel modules, I'm able to ping my router and communicate with machines across the bridged LAN, but I can't access SSH/HTTP on the router. At the same time, neither my USB nor my PS/2 (via an adapter, as the motherboard had no on-board PS/2 ports) KVM-over-Cat5 adapters were working once the FreeBSD kernel started up. Part of what I had taken for hard lockups earlier (the keyboard not responding) turned out to be new behavior from my KVM adapters after pfSense moved from the FreeBSD 8.3 kernel to 10.1. It would probably take me half a day to dig through all my boxes of equipment to find a reliable USB 1.1 hub and see if that would fix the KVM issues. Good, plain, new USB 1.1 hubs are hard to find these days, even on Amazon, Newegg, or eBay.
Having no remote management through either the LAN interface or my network KVM, and therefore having to configure pfSense physically at the machine with its own dedicated keyboard, editing the XML in 80x25 text mode, wasn't a solution I was keen to accept. I signed up for git access to pfSense so I could get a proper pfSense build environment and try to compile the NIC drivers against a kernel with all the proper options (pf, altq, ipfw, etc.), but while I was doing that, I had already started ordering replacement hardware with even more PCI-E x16 slots. Along with it, I got a low-end LGA 1150 CPU, a G3240. "More than enough for a router," I figured.
So after installing FreeBSD, compiling ports of essentials like nano, bash and git, I never actually ended up checking out a branch. The hardware arrived and it had all sorts of bells and whistles which I figured would go wasted if I used it for just a router, so I figured I'd load it up with RAM and use pfSense on top of Hyper-V.
So while waiting for the new RAM to arrive, I benched up an install of Windows and pfSense in Hyper-V, figuring I'd have everything ready to go: just throw the hardware in my 4U router case, dd the install over to the router's SSD, and everything would work fine. This was not the case, though. Besides the machine not even passing POST with the USB KVM adapters plugged in, none of my NICs seemed to be working. Luckily, part of the problem was my ISP ignoring new DHCP lease requests since my old lease hadn't been released gracefully, and eventually I got a response on one of the Intel 1000PT PCI-E x1 cards I had for the WAN interface, but I still wasn't getting any response from the 10GbE cards on the LAN side.
My solution? Create an internal virtual network adapter for the pfSense VM and bridge it with the 10GbE adapters in the host OS. I figured that wouldn't be too different from my planned final configuration, as I wasn't going to have separate rules for the different LAN interfaces in pfSense anyway. Big mistake.
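For comparison, the more conventional Hyper-V setup would bind an external virtual switch directly to one of the 10GbE NICs instead of bridging an internal switch in the host OS. A rough PowerShell sketch (the switch, VM, and adapter names here are illustrative, not my actual ones):

```
# PowerShell on the Hyper-V host; names are illustrative.
New-VMSwitch -Name "LAN-10GbE" -NetAdapterName "10GbE Port 1" -AllowManagementOS $true
Connect-VMNetworkAdapter -VMName "pfSense" -SwitchName "LAN-10GbE"
```

I went with the internal-switch-plus-host-bridge route instead because I had multiple 10GbE ports I wanted folded into one LAN segment.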
I break out ntttcp between my main workstation and file server, and instead of the full 10 Gb/s I was seeing before through just my Quanta switches, I was only getting between 1 and 3 Gb/s while hitting about 77% kernel-time CPU usage on the lowly G3240.
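If anyone wants to reproduce the numbers, the runs were roughly along these lines (the IP address and thread count here are just examples, not my exact invocation):

```
# On the receiving box (192.168.1.20 is an example address):
ntttcp.exe -r -m 8,*,192.168.1.20 -t 30

# On the sending box, pointed at the receiver:
ntttcp.exe -s -m 8,*,192.168.1.20 -t 30
```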
I'll probably be doing a bunch of testing over the next few days after I restore my network to its previous state, but I was curious: has anyone here set up a virtual switch with a bunch of 10GbE NICs, and if so, what kind of throughput are you seeing and what OS/CPU are you using? I know Wendel recently mentioned possibly doing that instead of dropping a couple thousand dollars on 10GbE switching hardware.