Just-a-bunch-of-NICs 10gbE switching

A couple of weeks ago my pfSense router started rebooting on me every couple of minutes. A few months earlier it had been rebooting maybe a couple of times a day, but I thought I'd solved that by re-seating a loose AC power connector on the PSU. A few months before that, it had reset on me a couple of times, and after yanking the machine and testing it outside my rack I wasn't able to reproduce the problem, so I figured it had just gone away. This time, however, the problem wasn't going away.

I reinstalled pfSense in case something was corrupted, with total disregard for all my VPN keys and years of RRD data, and there was no change: reboots, errors, and what I thought had evolved into immediate hard lockups on boot, since I couldn't get any response to key presses on the terminal.

Obviously my hardware was failing and had gotten to the point where it needed fixing. "No big deal," I thought, "I'll just swap out all the hardware." So I grabbed the motherboard, RAM, and CPU from an old GPU-mining machine that hadn't been powered on in probably a year.

Seeing a few x16 PCI-E slots on the replacement hardware, I figured I might as well load them up with some 10gbE cards I had sitting around, since I currently only have enough 10gb ports on my cheap Quanta switches to get that speed between my main desktop and file server. I loaded up FreeBSD 10.1, downloaded and compiled the driver for my 10gbE cards, and scp'd the .ko files over to my new pfSense box.
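In case anyone wants to replicate it: with the Mellanox ConnectX-2 cards I mention further down, the build went roughly like this (a sketch from memory, assuming a stock FreeBSD 10.1 source tree; the hostname is a placeholder):

    # build the Mellanox core and Ethernet modules from the FreeBSD source tree
    cd /usr/src/sys/modules/mlx4 && make
    cd /usr/src/sys/modules/mlxen && make
    # copy them over to the router
    scp /usr/src/sys/modules/mlx4/mlx4.ko \
        /usr/src/sys/modules/mlxen/mlxen.ko root@router:/boot/modules/
    # then, on the pfSense box, load the core module first
    kldload mlx4
    kldload mlxen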

Once I load the kernel modules, I'm able to ping my router and communicate with machines across the bridged LAN, but I can't access SSH/HTTP on the router. At the same time, neither my USB nor my PS/2 (using an adapter, as the motherboard had no on-board PS/2 ports) KVM-over-Cat5 adapters work once the FreeBSD kernel starts up. Part of what I'd thought were hard lockups, the keyboard not responding, turned out to be new behavior from my KVM adapters after pfSense moved from the FreeBSD 8.3 kernel to 10.1. It would probably take me half a day to dig through all my boxes of equipment to find a reliable USB 1.1 hub and see if that would fix the KVM issues, and good, plain USB 1.1 hubs are hard to find new on Amazon, Newegg, or eBay these days.

Having no remote management through either the LAN interface or my network KVM, and therefore having to configure pfSense physically at the machine with its own dedicated keyboard by editing the XML in 80x25 text mode, wasn't a solution I was keen to accept. I signed up for git access to pfSense so I could set up a proper pfSense build environment and try to compile the NIC drivers against a kernel with all the proper settings (pf, altq, ipfw, and so on), but while I was doing that I had already started ordering replacement hardware with even more PCI-E x16 slots. Along with it I got a low-end LGA 1150 CPU, a G3240. "More than enough for a router," I figured.
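By "proper settings" I mean kernel config options along these lines; this is illustrative, not the actual pfSense kernel conf:

    device          pf              # packet filter
    device          pflog           # pf logging interface
    device          pfsync          # pf state-table sync
    options         ALTQ            # traffic shaping framework
    options         ALTQ_CBQ        # class-based queueing
    options         ALTQ_HFSC       # hierarchical fair service curve
    options         ALTQ_PRIQ       # priority queueing
    options         IPFIREWALL      # ipfw support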

So after installing FreeBSD and compiling ports of essentials like nano, bash, and git, I never actually ended up checking out a branch. The hardware arrived with all sorts of bells and whistles that I figured would go to waste on just a router, so I decided to load it up with RAM and run pfSense on top of Hyper-V.

While waiting for the new RAM to arrive, I benched up an install of Windows and pfSense in Hyper-V, thinking I'd have everything ready to go: just throw the hardware in my 4U router case, dd the install over to the router's SSD, and everything would work fine. That was not the case. Besides the machine not even passing POST with the USB KVM adapters plugged in, none of my NICs seemed to be working. Part of the problem turned out to be my ISP ignoring new DHCP lease requests because my old lease wasn't released gracefully; eventually I got a response on one of the Intel 1000PT PCI-E x1 cards I had for the WAN interface, but I still wasn't getting any response from the 10gbE cards on the LAN side.
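(The dd step itself would be something like the following from a live environment; the device names here are hypothetical and depend on how the disks enumerate, and getting them backwards wipes the source.)

    dd if=/dev/sda of=/dev/sdb bs=1M conv=noerror,sync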

My solution? Create an internal virtual network adapter for the pfSense VM and bridge it with the 10gbE adapters in the host OS. I figured the end result wouldn't be too different, since I wasn't planning on having separate rules for the different LAN interfaces in pfSense anyway. Big mistake.
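Concretely, the Hyper-V side amounts to something like this in PowerShell (the switch and VM names are made up for illustration; the final bridging step is done through the GUI):

    # create an internal vSwitch and attach the pfSense VM's LAN NIC to it
    New-VMSwitch -Name "pfSense-LAN" -SwitchType Internal
    Add-VMNetworkAdapter -VMName "pfSense" -SwitchName "pfSense-LAN"
    # then select the host's "vEthernet (pfSense-LAN)" adapter together with the
    # physical 10gbE adapters in Network Connections and pick "Bridge Connections"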

I broke out ntttcp between my main workstation and file server, and instead of the full 10 Gb/s I was seeing before through just the Quanta switches, I was only getting between 1 and 3 Gb/s while hitting about 77% kernel-time CPU usage on the lowly G3240.

I'll probably be doing a bunch of testing over the next few days after I restore my network to its previous state, but I was curious: has anyone here set up a virtual switch with a bunch of 10gbE NICs, and if so, what kind of throughput are you seeing and what OS/CPU are you using? I know Wendell recently mentioned possibly doing that instead of dropping a couple thousand dollars on 10gbE switching hardware.

Ended up finding 20-port 10GbE switches on eBay for under $500.

But it appears it was mostly a Windows bridging performance issue.

ntttcp -s -m 4,*,xxx.xxx.xxx.xxx -a 32 -l 64k -t 20, run from server to workstation
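For anyone unfamiliar with ntttcp's flags: -s sends, -m 4,*,<ip> runs four threads on any core against the given IP, -a 32 keeps 32 async buffers outstanding, -l 64k sets the buffer size, and -t 20 runs for 20 seconds. The matching receiver side on the workstation would be something like:

    ntttcp -r -m 4,*,xxx.xxx.xxx.xxx -a 32 -t 20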

Single 20-second tests; the Windows bridge run was repeated several times and the slow result was reproducible.
Bridging PC: Intel G3240 on Z87 with two Mellanox ConnectX-2 adapters running at PCI-E 2.0 x8.
Under Windows, I let the Mellanox drivers tune the system for forwarding performance.

1077.469 MB/s - Through two Quanta LB4M switches
198.040 MB/s - Through two Quanta LB4M switches + Windows 7 x64 bridge
828.753 MB/s - Through two Quanta LB4M switches + Linux 3.14.35 amd64 bridge
1008.440 MB/s - Through two Quanta LB4M switches + FreeBSD 10.1 amd64 bridge
1002.933 MB/s - Through two Quanta LB4M switches + XG2000R switch
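To put those in line-rate terms, multiply by 8: the plain-switch runs work out to roughly 8.6 Gb/s, while the 198 MB/s through the Windows bridge is only about 1.6 Gb/s, right in line with the 1-3 Gb/s I saw earlier.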

I'm considering buying a Quanta LB4M switch, and I've found conflicting information regarding its IPv6 support. Does that actually work?
I also found a blog post about a routing version of the firmware that might support it:
https://blog.yolocation.pro/index.php/2017/02/03/how-to-upgrade-quanta-lb4m-to-routing-firmware/
It would help me out a great deal if you could confirm the IPv6 capability, since the switch is very cheap on eBay.

I shall look into that for you.

Of course, if others want to chime in, they can. Especially since the OP is presumably gone, someone else might have the hardware and be able to confirm.

As you said, those things are very cheap on eBay.


As described in the manual, the switch does support IPv6 routing and tunneling.

I have found a user who did have problems with IPv6 not working; that user was also unable to view the web interface.
The user experiencing problems was koszik, and he describes the IPv6 problems in post #25.

The forum linked above is an active community of people who actually use the switch. The thread is three years old and people are still commenting on it; the last post was today.

Of course in order to comment and ask questions you have to create an account.

I still have the FASTPATH switching firmware on all my Quanta LB4Ms. I'll try that routing firmware image linked above.

What IPv6 functionality would you like tested?

Welcome back, OP.

Apparently there is a 970-page manual that describes the layer 3 features and IPv6 capabilities, but users in the same thread have reported that it is about 50% inaccurate and therefore basically useless.
Maybe that manual corresponds to the firmware I linked.

I am interested in general switching and QoS. Host-side checks like the ones below would cover it for me.
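(Addresses use the 2001:db8:: documentation prefix as placeholders, and em0 is a hypothetical interface name.)

    # verify basic IPv6 connectivity across the switch
    ping6 ff02::1%em0           # all-nodes link-local multicast on the segment
    ping6 2001:db8::2           # address configured on the far host
    # measure IPv6 TCP throughput through the switch
    iperf3 -6 -c 2001:db8::2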