You should check the datasheet for breakout-cable support; not many cards support breaking out the 4×10GbE lanes on the NIC end. Breakout cables usually plug into switches, not NICs.
For one, I know Mellanox ConnectX-3 (CX3) cards don't support breaking out the 4 lanes.
Yep, all 40GbE cards are 4× 10Gb links bonded together. That's how the 40GbE spec works, and it's why 25GbE was actually an upgrade over 40GbE: despite the lower total bandwidth, 25GbE is a single 25Gb link, so clock rates went up and latency went down.
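Rough numbers, using the standard 64b/66b line rates (exact figures are in the IEEE 802.3 specs):

```
40GbE = 4 lanes × 10.3125 GBd = 41.25 GBd aggregate
25GbE = 1 lane  × 25.78125 GBd
```

The single faster lane is also why 100GbE is just 4× 25G lanes and shares SerDes generations with 25GbE.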
@tyrecies Remember that InfiniBand is not Ethernet. Some IB cards can run in Ethernet mode, but if you're running InfiniBand you typically aren't passing IP packets on it, and you're required to run a subnet manager somewhere (OpenSM, or one built into a managed switch) to manage the fabric. InfiniBand is normally used for mounting and transferring storage data, like for SANs.
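If you have a VPI-capable ConnectX card, flipping a port between IB and Ethernet is just a firmware setting. A minimal sketch, assuming the Mellanox MFT tools are installed; the device path below is only an example, take the real one from `mst status`:

```
# start the MFT service and list devices
mst start
mst status

# set port 1 to Ethernet (1 = IB, 2 = ETH), then reboot
mlxconfig -d /dev/mst/mt4115_pciconf0 set LINK_TYPE_P1=2
```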
You could get one of the Chinese knockoff Intel EOL NICs if you want; $55 for a two-port 10Gb RJ45 NIC is a really good deal. IDK how much I would trust something like that personally, though.
Typically the cheapest high-bandwidth NIC options are used Mellanox cards. I wouldn't go below a ConnectX-4 nowadays, though, due to driver availability under the new NVIDIA ownership.
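For what it's worth, ConnectX-4 and newer use the in-tree mlx5_core driver, while ConnectX-3 sits on the older mlx4 stack that, as far as I know, newer MLNX_OFED releases have dropped. Quick way to check what a card binds to on Linux:

```
# show Mellanox NICs and the kernel driver bound to them
lspci -nnk | grep -iA3 mellanox

# confirm the mlx5 module is loaded
lsmod | grep mlx5
```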
What about the Broadcom BCM957508-P2100G? It's a lot cheaper than Mellanox, though already suspiciously cheap for a PCIe 4.0 card.
I’m looking for new NICs for my homelab myself.
I was planning on using 10GbE RJ45 over some Cat 6a. I'll do fiber eventually; I just don't know enough yet. I figured I might have to use transceivers to adapt to RJ45, though.