Is 100 GbE networking really that cheap?

I’m trying to understand why refurbished 100 GbE and 40 GbE hardware appears to be so unbelievably cheap.
Am I missing something?

Could I actually connect two computers with a 100GbE link using:
2x this controller for 40 bucks
2x this transceiver for 30 bucks
1x this cable for 9 bucks

Or how about this managed 36-Port 40 GbE switch? How can this only be 99 bucks?

  1. That controller is Omni-Path. It doesn’t work for regular Ethernet; basically e-waste.
  2. Those Intel transceivers were cast off from datacentres a while back for pennies on the dollar. They run hot and allegedly have higher-than-expected failure rates. Take that as you will. Actually, those go for 5 bucks in the US. Relevant STH thread.
  3. Fibre-optic patch cables aren’t super expensive.
  4. From a cursory Google search, that switch only does InfiniBand. It might also do Ethernet, but I haven’t found anything supporting that, so probably InfiniBand only.

TL;DR: Doesn’t work for Ethernet.

2 Likes

Two of those, after flashing to Ethernet mode, will do.

For switches, calculate about 500 €/$ with a bit of sourcing.

Cables are affordable.

1 Like

I see. So what would it realistically cost to get a 100 or 40 GbE link between two Linux machines?
Also, what even is “Ethernet”? And do we actually need it? I mean, I can connect two devices using WiFi (so without Ethernet) and I can still do all the typical networking stuff like pinging, sending TCP packets, etc. If there are cards that use something else as the low-level protocol, it shouldn’t really matter to me, no?
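
For example, this is the kind of thing I mean; a plain TCP exchange like the sketch below (address and port are just placeholders) never mentions the link layer at all, so it runs the same over WiFi, Ethernet, or anything else that carries IP:

```python
# Minimal sketch (placeholder address/port): plain TCP sockets never touch
# the link layer, so the same code works over WiFi, Ethernet, or any other
# medium that carries IP.
import socket

HOST, PORT = "192.168.10.2", 5001   # hypothetical peer address

def server():
    # run on one machine: accept a connection and echo one message back
    with socket.create_server(("", PORT)) as srv:
        conn, _addr = srv.accept()
        with conn:
            conn.sendall(conn.recv(1024))

def client():
    # run on the other machine
    with socket.create_connection((HOST, PORT)) as s:
        s.sendall(b"ping")
        print(s.recv(1024))   # expect b'ping'
```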

About 350 €/$ for 2x 100GbE NICs and a 30 €/$ cable (direct connection).
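
“Direct connection” just means the two NICs are cabled straight to each other and each side gets a static address in its own small subnet. A rough sketch of one side (interface name and addresses are placeholders; it only wraps the usual iproute2 commands, which you could just as well type by hand):

```python
# Rough sketch: address one side of a switchless point-to-point link.
# Interface name and addresses are placeholders; run the mirror of this
# (e.g. 192.168.10.2/30) on the other machine. Needs root.
import subprocess

IFACE = "enp1s0f0"          # hypothetical NIC name
ADDR = "192.168.10.1/30"    # tiny subnet, just big enough for the two hosts

for cmd in (
    ["ip", "addr", "add", ADDR, "dev", IFACE],
    ["ip", "link", "set", IFACE, "up"],
):
    subprocess.run(cmd, check=True)
```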

WLAN is basically wireless Ethernet; it follows the typical networking “idea”. InfiniBand is a niche solution for fast interconnects between drive pools and is very specific.

I run 100GbE between my Unraid box and my main PC over a Mikrotik switch.

I bought:

Mellanox CX4 VPI EDR IB Single Port QSFP28 PCIe 3.0 100G NIC Adapter High & Low | eBay <— 2x
MikroTik CRS504-4XQ-IN 500€ used

DAC cables 30€ each

Now on its way is this one: Dell Networking S4112F-ON 12x 10GbE SFP+ + 3x 100GbE. I bought it for 500€ on eBay.

1 Like

I see. So I’d need two of these Mellanox cards and a cable, but don’t I also need a QSFP28 transceiver for each card?

And how about 40 GbE? Can you give recommendations on that as well?

Depends on the real distance between those two computers :smiley: If you need to run a long fibre cable, then yes, sure, you need two transceivers, but if they are more or less in the same “rack”, just buy a DAC cable.

I would not bother; 40GbE is more like 4x 10GbE bonded and a dead end. You would get more use out of 25GbE speeds, like these cards: Mellanox CX4121A IBM 01GR253 Dual-Port SFP28 PCIe x8 3.0 25GbE Adapter FP | eBay
They use SFP28 connectors, the same form factor modern cards use. I have one of those in my Unraid ZFS media server.

The MCX354A-FCBT is around 20-30 bucks each, and DAC cables shouldn’t be super expensive for an obsolete standard. Shop around.

I won’t recommend a ConnectX-3 card at this juncture though; you can run into things like SR-IOV not working on certain firmware.

What’s your use case for 40G/100G? You’d want to look at RDMA for those kinds of data rates, especially 100G.

25G is plenty and actually pretty affordable. MCX4121A-ACAT cards are under 100 bucks.
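
Whatever cards you end up with, it’s worth checking what the link actually negotiated once it’s up. A rough sketch for Linux; the interface name is just a placeholder for whatever yours is called:

```python
# Rough sketch: read the negotiated link speed from sysfs on Linux.
# "enp1s0f0" is a placeholder interface name; the file reports Mb/s and may
# read -1 (or error out) while the link is down.
from pathlib import Path

iface = "enp1s0f0"  # replace with your actual interface
speed_mbps = int(Path(f"/sys/class/net/{iface}/speed").read_text().strip())
print(f"{iface}: {speed_mbps} Mb/s (~{speed_mbps / 1000:.0f} GbE)")
```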

1 Like

40GbE stuff, and true.

True, don’t bother.

When direct-connecting it is usable; they will do RDMA. It’s only a problem with Mikrotik switches, as they don’t do RDMA.

Yes, true, it is quite hard to use 100GbE to its fullest :smiley:
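
A quick way to see why: even a crude single-stream test like the sketch below will usually sit well under line rate on 100GbE, because a userspace copy loop is CPU-bound long before 100 Gbit/s. That is where multi-stream tools like iperf3, or RDMA, come in. The peer address is a placeholder, and it assumes something on the other machine is accepting and draining the connection:

```python
# Crude single-stream send test (placeholder peer address/port; assumes the
# other machine is listening and draining data). A userspace loop like this
# is copy/CPU-bound well below 100 Gbit/s, which is the point made above.
import socket
import time

PEER, PORT = "192.168.10.2", 5201   # hypothetical direct-link peer
CHUNK = b"\x00" * (1 << 20)         # 1 MiB send buffer
SECONDS = 5

sent = 0
with socket.create_connection((PEER, PORT)) as s:
    deadline = time.monotonic() + SECONDS
    while time.monotonic() < deadline:
        s.sendall(CHUNK)
        sent += len(CHUNK)

print(f"~{sent * 8 / SECONDS / 1e9:.1f} Gbit/s over a single TCP stream")
```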

I’m trying to understand why refurbished 100 GbE and 40 GbE hardware appears to be so unbelievably cheap.

It is cheap because the industries that use the highest-bandwidth products have all moved on to 200Gb+ speeds. These generations have been moving fast; they are currently buying up 800Gb products, and in 2-3 years they will all move on to 1.6Tb products to keep pace with advancing AI and financial markets.

Most 100Gb Ethernet products are like 40Gb Ethernet: 4 channels bonded together. 40Gb is 4x 10Gb, and 100Gb is 4x 25Gb. Cheap 100Gb is like this, such as from Mikrotik. Nothing wrong with it, as it works fine. There is some “true 100Gb” that is a single channel of 100Gb Ethernet, and its main purpose is not to run as 100Gb, but to run as multiple channels bonded together. That’s actually how you get 400Gb Ethernet (though sometimes it’s 8x 50Gb).
You really have to do some research on these higher-bandwidth products, as what you get is all over the place. For instance, with 800Gb Ethernet you sometimes have 8x 100Gb links, sometimes 2x 400Gb links, sometimes 4x 200Gb links (extremely rare). The official spec is the 8x 100Gb one, but even the Ethernet Consortium acknowledges that they don’t care how the 800Gb is made up, and people could even do 16x 50Gb links, though no one has so far to my knowledge. Think of modern, high-bandwidth Ethernet more like a PCIe interface and how we use x16 lanes for GPUs.

Just look at all the official specs for ways to do high-bandwidth Ethernet :man_facepalming:
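
Just to make the lane arithmetic above concrete, here is the same math written out; the variants listed are only the examples mentioned in this post, not an exhaustive list of official PHY specs:

```python
# Lane arithmetic from the examples above (illustrative only, not an
# exhaustive list of official PHY variants): total = lanes x per-lane rate.
configs = {
    "40GbE":  (4, 10),    # 4x 10G
    "100GbE": (4, 25),    # 4x 25G is the common/cheap variant
    "400GbE": (8, 50),    # sometimes built as 4x 100G instead
    "800GbE": (8, 100),   # also seen as 2x 400G
}

for name, (lanes, per_lane) in configs.items():
    print(f"{name}: {lanes} x {per_lane}G = {lanes * per_lane}G")
```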

4 Likes

Yes

Curious to know what your Unraid setup is. I’m currently using TrueNAS, but it’s been annoying me lately, especially with UrBackup.

I use ZFS for backups and snapshots, a simple directory-sync Docker for everything else, and a 20TB Synology for an extra copy of home mission-critical data like family pictures and documents and Windows/macOS images. All that data lives on Unraid 1 too.

Unraid 1 (mainly Docker/Plex/Windows VMs/emulators/backup)
13th Gen Intel® Core™ i7-13700K @ 5346 MHz
MPG Z690 FORCE WIFI
128GB DDR5
2TB NVMe
64TB HDDs, 4x 16TB in a ZFS array
dual 25GbE Mellanox NIC
iGPU SR-IOV, 7 devices

Unraid 2 (mainly AI/Steam/games/iSCSI/fast storage)
AMD EPYC 7262 8-Core @ 3200 MHz
ASRockRack ROMED4ID-2T
64GB DDR4
24TB NVMe, 3x 8TB Kioxia CD6 in a ZFS array
1TB Samsung for Docker/VMs
RTX A4500 20GB VRAM
100GbE Mellanox NIC

Main PC (13900K)
ASRock Taichi Z790
48GB DDR5 @ 7000 MT/s
MSI 4090
1TB Samsung 990
100GbE Mellanox NIC
All connected with my new Dell Networking S4112F-ON (12x 10GbE SFP+ + 3x 100GbE).
RDMA is enabled on all devices, but the system is mainly limited to about 3 GB/s (≈24 Gbit/s).

1 Like