100GbE NIC recommendations? ConnectX-6, ConnectX-5, Intel E810

Looking for a dual-port 100GbE NIC, preferably used (e.g. eBay), for under $500. Found an Intel E810 for $300, a ConnectX-5 for $350, and a ConnectX-6 for $400.
My switch is a QNAP QSW-M7308R-4X.
QNAP has their own 100GbE NIC that uses Intel's E810 chip, so I'm leaning toward the Intel card. Any reasons I shouldn't?

I would personally go for the E810. Mellanox is good, but I've found that you really need to be using the OFED drivers if you want stable performance; the in-tree kernel drivers just don't cut it. We switched to the E810 after seeing tons of port errors and PAUSE frames, and all of the issues went away. We're just using the ice driver bundled with kernel 6.8 in Proxmox.
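If you want to check whether you're hitting the same thing, something like this sketch is what I'd look at first: it just dumps the error/pause-related counters from `ethtool -S`. The interface name is only an example, and the exact counter names are driver-specific (ice and mlx5 name them differently), so treat it as a starting point.

```python
#!/usr/bin/env python3
"""Rough sketch: print the error/pause/drop counters ethtool -S reports
for a NIC, to spot the kind of port errors / PAUSE frames mentioned above.
Interface name is an assumption; counter names vary by driver."""
import subprocess

IFACE = "enp1s0f0"  # assumption: replace with your actual interface

def interesting_counters(iface):
    out = subprocess.run(
        ["ethtool", "-S", iface], capture_output=True, text=True, check=True
    ).stdout
    for line in out.splitlines():
        if ":" not in line:
            continue
        name, _, value = line.partition(":")
        name = name.strip().lower()
        # keep only counters that look like trouble indicators
        if any(k in name for k in ("error", "pause", "discard", "drop")):
            print(f"{name:40s} {value.strip()}")

if __name__ == "__main__":
    interesting_counters(IFACE)
```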

5 Likes

Ah yes, exactly the kind of feedback I was looking for. Thanks, so E810 it is.

1 Like

Just make sure you have a good return policy on the NIC. I went with an E810 first but unfortunately it just would not bring a link up no matter what. I had to return it and replace with a ConnectX-6 which worked perfectly right away.

1 Like

Were you using an AOC cable? I've found that E810s just don't link up with most AOCs, but 100G-SR4 transceivers work fine.

I tried 3 different transceivers (including an official Intel one they list as compatible), two different fiber cables, and an AOC cable. I couldn't use a DAC because of my 40 ft run. I tried adjusting all sorts of driver options, including the FEC settings that are apparently part of the issue on the Intel NICs, but nothing ever made the link come up. The card reported no errors, the driver installed just fine, and everything appeared to be working, but it simply would not bring a link up. I even verified laser output from the NIC's transceiver, confirmed it arriving at the switch end of the cable, and confirmed the switch's transmit laser being received at the NIC end. So the optics were alive and trying to work; it just never completed the final handshakes to bring the link up properly.

I know people use these NICs fine; apparently it just didn't want to be compatible with my network gear.
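For anyone else debugging the same symptom, this is roughly the link/FEC poking I was doing, assuming a Linux box with a recent ethtool. 100GBASE-SR4 normally wants RS-FEC, and a FEC mismatch between NIC and switch is one common reason a link stays down even though light is present. The interface name and the choice of "rs" below are examples, not a known-good fix; it never worked for me.

```python
#!/usr/bin/env python3
"""Sketch of link-state and FEC checks on Linux. Interface name is an
assumption; forcing RS-FEC is an experiment, not a guaranteed fix."""
import pathlib
import subprocess

IFACE = "enp65s0f0"  # assumption: replace with your interface

def link_state(iface):
    base = pathlib.Path("/sys/class/net") / iface
    print("operstate:", (base / "operstate").read_text().strip())
    # reading carrier raises if the interface is administratively down
    try:
        print("carrier:  ", (base / "carrier").read_text().strip())
    except OSError:
        print("carrier:   n/a (interface down)")

def show_fec(iface):
    subprocess.run(["ethtool", "--show-fec", iface], check=False)

def force_rs_fec(iface):
    # try forcing RS-FEC to match what the switch side expects (needs root)
    subprocess.run(["ethtool", "--set-fec", iface, "encoding", "rs"], check=False)

if __name__ == "__main__":
    link_state(IFACE)
    show_fec(IFACE)
    # force_rs_fec(IFACE)  # uncomment to experiment
```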

1 Like

Weird, yeah. We've been using primarily Arista-branded 100G-SR4 transceivers in ours and have had zero issues.

I had an initial issue with my 2x 25G E810 in Windows, which I "fixed" by copying the working default settings from Linux over to Windows. IIRC, something about offloading seemed to screw with Windows.
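For reference, this is roughly how I dumped the Linux-side offload settings so I could mirror them in the Windows advanced adapter properties. It's just `ethtool -k` filtered down to the toggles you can actually change; the interface name is an example.

```python
#!/usr/bin/env python3
"""Minimal sketch: list the changeable offload settings (ethtool -k) on
Linux for comparison against Windows adapter properties. Interface name
is an assumption."""
import subprocess

IFACE = "enp1s0f0"  # assumption: replace with your interface

def dump_offloads(iface):
    out = subprocess.run(
        ["ethtool", "-k", iface], capture_output=True, text=True, check=True
    ).stdout
    for line in out.splitlines():
        # skip the features the driver marks as [fixed] (not changeable)
        if ":" in line and "[fixed]" not in line:
            print(line.strip())

if __name__ == "__main__":
    dump_offloads(IFACE)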

1 Like

Not looking to hijack the thread, but I have a similar dilemma. I'm looking for recommendations on a 100G NIC for TrueNAS/FreeBSD. Based on my experience, Intel NICs are pretty much plug and play, but the reason I'm looking at the ConnectX-6 card is that it comes in the AIOM form factor for my Supermicro chassis. On my build I have the AIOM slot and one PCIe x16 slot available. If the reviews are positive I'll probably go with the ConnectX-6 simply based on form factor. I can use an Intel 100G NIC in my x16 slot, but then I have to figure out a new config for my HBA. Has anyone played with the ConnectX-6 on TrueNAS or FreeBSD?

I've got a couple I picked up used. I was only able to get the Ethernet-only cards working, rather than the combined InfiniBand-and-Ethernet ones. I tried for quite a while but never could get the latter connected. Using the same transceiver and cable, everything worked fine in the ConnectX-6 Dx, but the plain ConnectX-6 (in Ethernet mode) would never show blinky lights.

Unless you actually want Infiniband, it is probably easier to just use Dx cards.
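For context on what "Ethernet mode" means on the VPI (IB + Ethernet) cards: the port protocol is a firmware setting you flip with mlxconfig from NVIDIA's MFT tools (LINK_TYPE_P1: 1 = InfiniBand, 2 = Ethernet). The device string below is just an example; check `mst status` or lspci for yours. This is only how I understand the tooling, not a claim that it fixes the link issue I saw.

```python
#!/usr/bin/env python3
"""Sketch: flip port 1 of a ConnectX VPI card to Ethernet via mlxconfig.
Device path is an assumption; mlxconfig asks for confirmation and the
change only applies after a reboot/power cycle."""
import subprocess

DEVICE = "/dev/mst/mt4123_pciconf0"  # assumption: your mst device or PCI address

def set_port1_to_ethernet(device):
    # show current firmware configuration first
    subprocess.run(["mlxconfig", "-d", device, "query"], check=False)
    # then set port 1 to Ethernet (interactive confirmation prompt follows)
    subprocess.run(["mlxconfig", "-d", device, "set", "LINK_TYPE_P1=2"], check=False)

if __name__ == "__main__":
    set_port1_to_ethernet(DEVICE)
```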

Not married to InfiniBand by any stretch. I should have clarified that the card I was looking at is the Supermicro AOC-A100G-m2CM, which uses the ConnectX-6 Dx controller.

So if I'm understanding you correctly, that should be G2G?

1 Like

Intel NICs are very picky about transceivers / AOC / DAC cables, and I also had some issues getting them to work with my DACs, so I had to go with original Intel OEM transceivers. But when they do work, they just plow. With Mellanox cards I had a really bad experience, up to the point where I had a few machines that plain wouldn't POST with a Mellanox card installed at all (while other machines were totally fine with it). The same machines worked just fine with Intel X710-DA2 and DA4 cards. So I'd say it's hit or miss. I'd still rather have a PITA transceiver choice than unbootable machines xD
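Before fighting compatibility lists, it can help to see what the NIC actually reads from the module EEPROM. This is a small sketch around `ethtool -m`; the interface name is an example, and some NICs refuse to read modules they don't whitelist, in which case you get nothing useful back.

```python
#!/usr/bin/env python3
"""Sketch: print the transceiver's identity fields (vendor, part number)
from ethtool -m. Interface name is an assumption."""
import subprocess

IFACE = "enp1s0f0"  # assumption: replace with your interface

def module_identity(iface):
    out = subprocess.run(
        ["ethtool", "-m", iface], capture_output=True, text=True, check=False
    ).stdout
    for line in out.splitlines():
        if any(k in line for k in ("Identifier", "Vendor name", "Vendor PN", "Vendor SN")):
            print(line.strip())

if __name__ == "__main__":
    module_identity(IFACE)
```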

1 Like

Well, my ConnectX-6 Dx cards work fine (and on my Supermicro boards as well). But my comment was less an endorsement and more, "if you're going to go ConnectX, save yourself a headache and just get the Dx cards."

I also have some Intel E810 cards (purchased new) and they worked right out of the box too.

Just remember that all 100Gb NICs will be pulling 20+ watts, and while they have a larger heatsink, they also expect server-chassis airflow over them. In home environments that isn't usually the case, so make sure the card isn't overheating. Many people put an extra fan right next to the NIC to get the airflow it needs.
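If you want to keep an eye on it, something like this reads whatever temperature sensors show up under sysfs hwmon. Not every NIC driver registers a hwmon sensor (and the optic's own temperature shows up under `ethtool -m` instead), so treat it as a starting point rather than a guaranteed reading.

```python
#!/usr/bin/env python3
"""Sketch: dump all hwmon temperature sensors from sysfs, NIC included if
its driver exposes one. Values are reported in millidegrees Celsius."""
import pathlib

def print_hwmon_temps():
    for hwmon in sorted(pathlib.Path("/sys/class/hwmon").glob("hwmon*")):
        name = (hwmon / "name").read_text().strip()
        for temp in sorted(hwmon.glob("temp*_input")):
            millideg = int(temp.read_text().strip())
            print(f"{name:12s} {temp.name:12s} {millideg / 1000:.1f} °C")

if __name__ == "__main__":
    print_hwmon_temps()
```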

3 Likes

I just went on an adventure with an E810-CQDA2, which has two 100G ports. I found that even though it's PCIe 4.0 x16 (i.e. ~256 Gbps to the CPU), it will not sustain two 100G streams at full speed. For me, without much tuning, iperf3 caps out at ~120G total.

This was very disappointing to find.

I read in some forums that if you can do x8+x8 bifurcation and use an E810-2CQDA2, then two devices are enumerated and each of them will sustain 100G in parallel.
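One thing worth ruling out first is the slot itself: a card that trained at x8 or Gen3 is capped well below dual-100G line rate. This is a small sketch that reads the negotiated PCIe link from sysfs and does the rough bandwidth math; the PCI address is an example (find yours with `lspci | grep Eth`).

```python
#!/usr/bin/env python3
"""Sketch: sanity-check the negotiated PCIe link speed/width of the NIC.
BDF is an assumption; the bandwidth figure is a rough upper bound that
ignores protocol overhead."""
import pathlib

BDF = "0000:41:00.0"  # assumption: your NIC's PCI address

def pcie_link(bdf):
    dev = pathlib.Path("/sys/bus/pci/devices") / bdf
    speed = (dev / "current_link_speed").read_text().strip()   # e.g. "16.0 GT/s PCIe"
    width = int((dev / "current_link_width").read_text().strip())
    gts = float(speed.split()[0])
    # 128b/130b encoding for Gen3 and newer; rough raw throughput estimate
    gbps = gts * width * 128 / 130
    print(f"{bdf}: {speed} x{width}  ~ {gbps:.0f} Gbit/s raw")

if __name__ == "__main__":
    pcie_link(BDF)
```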

As for other brands, I have a ConnectX-5 Ex that has no issue sustaining 200G and I am looking into testing a Broadcom P2100G given the encouraging DPDK performance report.

Would be great to hear back on the eventual solution that was found.

I just set up a TrueNAS box on an Inspur NF5180M6. I used 2x ConnectX-6 Dx in the Inspur, and the client system is a Lenovo 7DHE H200 NVIDIA HGX server. I know 2nd-gen Xeon Scalable probably isn't ideal, but I'm not maxing out the CPU or the drive pool at all (10x Micron 7450 Pro 2TB); I was able to get 50 GB/s using fio. The Lenovo system came with Broadcom NICs and everything auto-negotiated to 100Gb. I keep topping out with iperf3 around 20-25 Gbps, and I feel like there are optimizations I'm missing, so I suppose I will look into DPDK.
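Before going all the way to DPDK, the usual first step is to rule out iperf3 itself: a single client process is effectively single-threaded (pre-3.16), so one instance often can't saturate 100G. Something like the sketch below runs several client processes against separate server ports so the load spreads across cores. It assumes matching servers were started on the TrueNAS box (e.g. `iperf3 -s -p 5201`, `iperf3 -s -p 5202`, ...); the host and ports are examples.

```python
#!/usr/bin/env python3
"""Sketch: run several iperf3 client processes in parallel against
separate server ports and print the summary lines. Server address,
ports, and process count are assumptions."""
import subprocess

SERVER = "192.168.10.10"   # assumption: TrueNAS box address
BASE_PORT = 5201
PROCESSES = 4              # one iperf3 process per port to start with
SECONDS = 30

def run_parallel_iperf3():
    procs = [
        subprocess.Popen(
            ["iperf3", "-c", SERVER, "-p", str(BASE_PORT + i),
             "-t", str(SECONDS), "-P", "4"],
            stdout=subprocess.PIPE, text=True,
        )
        for i in range(PROCESSES)
    ]
    for p in procs:
        out, _ = p.communicate()
        # keep just the sender/receiver summary lines from each process
        print("\n".join(l for l in out.splitlines()
                        if "sender" in l or "receiver" in l))

if __name__ == "__main__":
    run_parallel_iperf3()
```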