The usual noob question(s) about cheap 10gbe again

Hello there, I'm thinking about a simple two-computer 10G point-to-point link. Both are Linux machines running a modern 5.2+ kernel (Ubuntu 19.10). Each computer has one free PCIe 3.0 x4 slot, which I understand can also take x8 cards if needed (with some slight modification, which I don't mind).

IDK what I am doing, and only the last choice seems (to me) like something I can reliably count on to work. Currently, my four main choices are:

A pair of these single port cards

2x https://www.ebay.co.uk/itm/192981457274
+1 cable https://www.ebay.co.uk/itm/153707777351

= £73.60

  • OR

A pair of these dual port cards

2x https://www.ebay.co.uk/itm/192857493463
+1 cable https://www.ebay.co.uk/itm/153707777351

= £99.95

  • OR

A dual link, if that is possible, for teamed 20GbE?

2x https://www.ebay.co.uk/itm/192857493463
+2 cables https://www.ebay.co.uk/itm/153707777351

= £123.95

  • OR

A pair of the modern Aquantia AQN RJ45 cards

2x https://www.aliexpress.com/item/33036697781.html

with the existing RJ45 Cat 6 cabling that I already have (which is rated 1000 MHz and 10Gb capable)

= £129.45 total

However, is there anything else I should be considering?

Another question:

How can I be sure that the SFP+ cable is going to be compatible with / work with those cards? It says it's a used Cisco cable. Should I therefore be buying a Mellanox-branded SFP+ cable instead, or am I just being paranoid?

Any help is appreciated, with links to specific products either on AliExpress or UK eBay. Many thanks.

I don't have any experience with the SFP+ Mellanox gear, just with the QSFP CX2 and CX3 cards.
Mellanox doesn't seem picky with transceivers, as far as I have pushed it or heard.

One big tip I can give you is to look at the part numbers for OEM-rebranded cards, since some shops only list those cards under the OEM part number, often at a pretty good price.

HP 544: watch out, FLR is FlexibleLOM and not PCIe compatible. So no FLR cards, please.

My current setup consists of my workstation and a NAS directly connected with one Mellanox QDR DAC cable and two HP 544 cards that can do IB and Ethernet (VPI), set to 40GbE.

iperf did about 36 Gbit/s, which is good considering it is just a QDR cable.
You could get up to 56GbE with FDR14 cables.
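
Roughly, the port-type switch and a throughput check look like this on Linux (just a sketch; the PCI address, MST device path, and IP below are placeholders for whatever your own system reports):

    # With the in-kernel mlx4 driver, a VPI port can be switched to Ethernet via sysfs:
    echo eth > /sys/bus/pci/devices/0000:03:00.0/mlx4_port1

    # Or persistently with Mellanox's mlxconfig from the MFT tools (2 = Ethernet):
    mlxconfig -d /dev/mst/mt4099_pciconf0 set LINK_TYPE_P1=2

    # Quick throughput check, server on one end, client on the other:
    iperf3 -s                          # on the NAS
    iperf3 -c 10.0.0.1 -P 4 -t 30      # on the workstation, 4 parallel streams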

But those are things to talk about later, once you have questions in that direction.


Might as well throw a MikroTik CRS309-1G-8S+IN into the mix.

Why?

OK… one thing to bear in mind with teaming is that it may not work very well point to point (i.e., only two machines involved).

NIC teaming (via LACP, etc.) generally load-balances connections by assigning each MAC-address pair to a specific NIC, or some similar method. What this means is that from machine A to machine B you can't generally expect 20 Gbit out of a team (assuming you're using two NICs on each end), and even if you can manage it, generally not for a single connection. Maybe Linux can do something to get around that, but that's generally how it works.

It works OK for proper servers, as you typically have many clients (and thus many MAC-to-MAC pairs to balance across the multiple NICs), and it's also good for redundancy in case a cable breaks or gets disconnected, etc. But at home? More trouble/expense than it's worth, imho.
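
You can see this limitation for yourself on an existing Linux bond (a sketch, assuming a team named bond0 already exists; the name is just an example):

    # Show the bonding mode, transmit hash policy and enslaved NICs:
    grep -E "Mode|Hash|Slave Interface" /proc/net/bonding/bond0

    # With the default layer2 policy, src-MAC XOR dst-MAC selects the slave, so two
    # machines talking directly to each other only ever exercise one of the links:
    cat /sys/class/net/bond0/bonding/xmit_hash_policy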

SFP-wise, in my (admittedly brand-limited) experience only Cisco is a pain about requiring genuine Cisco-branded SFPs in its equipment, and even then it can normally be overridden.

I’d go for the pair of single port cards and take a punt.

If it's point to point within a room (for example, at home from a workstation to a NAS), then even if your supplied SFPs won't work you can generally get cheap ones that will (either multi-mode short-haul optics or a copper DAC cable).

Depends how many ports he needs/wants.

They're pretty cheap, and some software/hardware does annoying stuff (e.g., alarms, fails to renegotiate the link properly, blah, blah) when it drops link (e.g., when you reboot the other end). If both ends are plugged into a switch that stays up, that (potential) shit goes away.

It also allows future expansion to more machines. But yeah, it definitely starts getting more expensive when you add the switch.


It is super cheap for an SFP+ switch compared to Dell, HPE, or Cisco offerings. Another benefit is that MikroTik gear often has a fanless mode or is passively cooled; if the benefit of passive cooling is not obvious, you have not heard 1U switches in action.
Also what @thro said.

A cheaper option if you want to stick to a single SFP+ link between machines is the MikroTik CRS305-1G-4S+IN: a 4x SFP+ and 1x RJ45 gigabit switch/router, $150 MSRP.

It was just an idea. If all you've got is two machines, ignore it.


A friend of mine has Intel X520 cards (single SFP+) between his gaming rig and server. IIRC there was some trickery involved to get his machines to use that instead of RJ45.

Playing with this kind of gear definitely is fun; the question is how hard you want to run face-first into the learning curve.
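
The "trickery" is usually nothing more exotic than giving the two SFP+ interfaces their own small subnet, so traffic between the machines prefers the direct link. A rough sketch, with hypothetical interface names and addresses:

    # On machine A (interface name is an example; check `ip link` for yours):
    ip addr add 10.10.10.1/24 dev enp4s0
    ip link set enp4s0 up

    # On machine B:
    ip addr add 10.10.10.2/24 dev enp4s0
    ip link set enp4s0 up

    # Point anything that should use the fast path (NFS/SMB mounts, iperf3, etc.)
    # at the 10.10.10.x addresses; everything else still goes out over the RJ45 NIC.
    ping 10.10.10.2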

I use both Mellanox ConnectX-3 and Intel X520 (Fujitsu DA2755) cards.

I have the single SFP+ Mellanox cards and the Dual port Intel cards.

The Mellanox card works with anything. The Intel one is having trouble with either my Fiberstore SFP+ "Cisco" module or my MikroTik CSS326; I have ordered some Intel SFP+ modules to test.
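
One thing that may be worth trying before buying Intel-coded optics: the in-kernel ixgbe driver rejects unrecognised SFP+ modules by default, but it has a module option to accept them (a sketch; no guarantee a given module will then actually link up):

    # Reload ixgbe so it accepts third-party SFP+ modules:
    modprobe -r ixgbe
    modprobe ixgbe allow_unsupported_sfp=1

    # Make it persistent across reboots:
    echo "options ixgbe allow_unsupported_sfp=1" > /etc/modprobe.d/ixgbe.conf

    # See what the driver says about the module:
    dmesg | grep -i sfp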

The Mellanox drivers and software are a bit "iffy". There are two drivers: the open-source in-kernel one and Mellanox's own (OFED). The Mellanox one is more feature-rich, but I had trouble installing or compiling it on some distros. Messing with the card's firmware and UEFI ROM (updating, deleting, etc.) is a bit of a pain.

The Intel experience is much more streamlined: I just use the in-box kernel driver and everything works fine. I got the Intel card because I wanted SR-IOV for my Windows VM. The Mellanox open-source kernel driver worked with SR-IOV in Linux VMs, but not with Windows.
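
For reference, on Linux the SR-IOV virtual functions are normally created through sysfs regardless of vendor (a sketch; the interface name and VF count are examples, and the BIOS needs VT-d/SR-IOV enabled):

    # Create 4 virtual functions on this physical port:
    echo 4 > /sys/class/net/enp4s0f0/device/sriov_numvfs

    # List the new VFs, then pass one through to the VM in your hypervisor:
    lspci | grep -i "virtual function"
    ip link show enp4s0f0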

Try to avoid 10Gb QSFP cards, as the cabling is probably expensive; or go with the 40/56Gb Mellanox QSFP cards if you want speed and don't need a switch.


Hey guys thanks for all the tips and comments!!!

Sorry I didn't see any of them earlier; I have some notification issue with my account here on the L1Techs forum, so I never actually receive any notification emails in my Gmail inbox when people reply to the thread.

BTW, something else I did was ask the same questions over on the ServeTheHome forums. It's basically an identical thread to this one, but of course with different people giving their answers over there.

In fact, there was a last question I asked there about the 40/56GbE mode of these Mellanox ConnectX-3 cards. The person who suggested them reported only about 900 MB/s in that mode, and I was just asking whether that was indeed the real upper limit or not, since that is effectively about 10GbE speed despite the link status reporting 56GbE or whatever. The other fellow over there never came back to clarify that in the end.

Perhaps there are some nuances/gotchas about that, and also regarding the exact models, cables, and/or drivers (for example, Windows driver vs Linux driver). Anyhow, I have no idea yet what is up with that particular aspect; just asking here too in case anybody knows something more.

Anyhow… I promise to come back very soon and weigh up all of these suggestions thoroughly, instead of just glancing over them like today. I can already see that you have left some really valuable comments here! Thank you so much, everybody. Will digest it all :yum:

LACP between two machines will not give you any more speed than a single link; it only helps when multiple machines are involved. But on Linux you can use round-robin bonding, which will essentially stripe the traffic (like RAID 0) and give you more speed. It might also end up slower, because you're likely to get packets arriving out of order (which triggers TCP retransmissions). So I wouldn't bother if I were you, unless you can get it cheap.
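
For what it's worth, a round-robin bond between two directly connected Linux hosts looks roughly like this (interface names and addresses are examples; configure both ends the same way):

    # balance-rr stripes packets across both links; no switch or LACP required.
    ip link add bond0 type bond mode balance-rr miimon 100
    ip link set enp1s0f0 down && ip link set enp1s0f0 master bond0
    ip link set enp1s0f1 down && ip link set enp1s0f1 master bond0
    ip link set bond0 up
    ip addr add 10.0.0.1/24 dev bond0    # use 10.0.0.2/24 on the other host

    # Out-of-order delivery can hurt TCP; raising the reorder tolerance sometimes
    # helps, but benchmark with iperf3 before and after to see if it is worth it.
    sysctl -w net.ipv4.tcp_reordering=30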


Yeah, that's what I was getting at.

LACP is (I think?) MAC-address balanced by default, however, so if you're using it for virtual machine traffic (i.e., many MACs on a specific port channel) then you may actually get a benefit.

I figured Linux might have some other hackery to get a benefit in other circumstances, cheers.


It can be configured to hash on either MAC or IP (or IP + port in Linux). MAC hashing will effectively only give you failover between two hosts, because the source and destination MAC addresses never change, so every flow lands on the same link. IP hashing allows for load balancing across different endpoints: if you have a bunch of VMs sharing a bonded connection it will use the extra bandwidth (this is how my server is set up), but if it's only two hosts you will still only get one link's speed.

Round-robin just sends the traffic down all the ports at the same time and it's up to the receiver to figure it out, so sometimes it can improve performance and other times it can harm it. It's generally used on hardware which doesn't support any link-aggregation protocol, or in a direct host-to-host connection.
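
To make that concrete, the hash policy on a Linux 802.3ad/xor bond is just a knob (a sketch; bond0 is assumed to already exist):

    # layer2   : src/dst MAC only   -> one link per MAC pair (failover-ish for 2 hosts)
    # layer2+3 : adds src/dst IP    -> spreads different IP endpoints (e.g. VMs) across links
    # layer3+4 : adds the L4 ports  -> can spread flows even between the same two hosts
    echo layer3+4 > /sys/class/net/bond0/bonding/xmit_hash_policy

    # Or set it at creation time with iproute2:
    ip link add bond0 type bond mode 802.3ad xmit_hash_policy layer3+4

(The bonding documentation notes that layer3+4 is not fully 802.3ad-compliant and fragmented traffic can arrive out of order, but it is widely used anyway.)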