That controller is Omni-Path; it doesn't work with regular Ethernet. Basically e-waste.
Those Intel transceivers were cast off from datacentres a while back for pennies on the dollar. They run hot and allegedly have higher-than-expected failure rates; take that as you will. Actually, those go for 5 bucks in the US. Relevant STH thread.
Fibre-optic patch cables aren't super expensive.
From a cursory Google search, that switch only does InfiniBand. It might also do Ethernet, but I haven't found anything supporting that, so it's probably InfiniBand only.
I see. So what would it realistically cost to get a 100 or 40 GbE link between two Linux machines?
Also, what even is “Ethernet”? And do we actually need it? I mean, I can connect two devices over WiFi (so without Ethernet) and still do all the typical networking stuff like pinging, sending TCP packets, etc. If there are cards that use something else as the low-level protocol, it shouldn't really matter to me, no?
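That intuition is easy to sanity-check: the TCP socket API looks identical no matter what link layer (Ethernet, WiFi, or anything else that carries IP) sits underneath. A minimal sketch over loopback with Python's stdlib `socket` module:

```python
import socket

# A tiny TCP exchange over loopback. Nothing here mentions the link layer;
# the same code works unchanged over Ethernet, WiFi, or any IP-capable link.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))        # port 0: let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", port))
conn, _ = server.accept()

client.sendall(b"ping")
reply = conn.recv(4)
print(reply)                          # b'ping'

client.close(); conn.close(); server.close()
```

The catch is the caveat raised earlier in the thread: some of this cheap hardware speaks Omni-Path or InfiniBand rather than Ethernet, so it won't just show up as a normal network interface without extra drivers and (for IP traffic) something like IPoIB.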
It depends on the actual distance between those two computers. If you need to run fibre over a long distance, then yes, you need two transceivers; but if they're in the same rack, just buy a DAC cable.
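To make that rule of thumb concrete, here's a toy sketch of the decision. The 5 m cutoff is my assumption for illustration; real DAC reach varies by speed and vendor:

```python
def link_shopping_list(distance_m: float) -> list[str]:
    """Hypothetical helper encoding the rule of thumb above: a DAC cable for
    short in-rack runs, otherwise two transceivers plus a fibre patch cable.
    The 5 m cutoff is an assumed illustration, not a spec limit."""
    if distance_m <= 5:
        return ["1x DAC cable"]
    return ["2x optical transceiver", "1x fibre patch cable"]

print(link_shopping_list(2))    # same rack
print(link_shopping_list(30))   # different room
```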
I’m trying to understand why refurbished 100 GbE and 40 GbE hardware appears to be so unbelievably cheap.
It is cheap because the industries that use the highest-bandwidth products have all moved on to 200 Gb/s+ speeds. These generations have been moving fast: they are currently buying up 800 Gb/s products, and in 2-3 years they will move on to 1.6 Tb/s products to keep pace with advancing AI and financial markets.
Most 100 Gb Ethernet products are like 40 Gb Ethernet: four channels bonded together. 40 Gb is 4x 10 Gb, and 100 Gb is 4x 25 Gb. Cheap 100 Gb gear, such as Mikrotik's, is built this way, and there's nothing wrong with it; it works fine. There is also some “true 100gb” that is a single 100 Gb/s channel, and its main purpose is not to run at 100 Gb but to be bonded with other channels. That's actually how you get 400 Gb Ethernet (though sometimes it's 8x 50 Gb).
You really have to do some research with these higher-bandwidth products, as what you get is all over the place. For instance, 800 Gb Ethernet is sometimes 8x 100 Gb links, sometimes 2x 400 Gb links, and sometimes 4x 200 Gb links (extremely rare). The official spec is the 8x 100 Gb one, but even the Ethernet Technology Consortium acknowledges that they don't care how the 800 Gb is made up; people could even do 16x 50 Gb links, though no one has so far to my knowledge. Think of modern, high-bandwidth Ethernet more like a PCIe interface and how we use 16 lanes for GPUs.
Just look at all the official specs for the ways to do high-bandwidth Ethernet.
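The lane math from the posts above, as a quick sketch. These configs are just the examples mentioned in this thread, not an exhaustive list of official PHY variants:

```python
# Aggregate speed = lanes x per-lane rate, like bonded PCIe lanes.
lane_configs = {
    "40GbE":  (4, 10),    # 4 lanes x 10 Gb/s
    "100GbE": (4, 25),    # 4 lanes x 25 Gb/s (the common cheap variant)
    "400GbE": (8, 50),    # 8 lanes x 50 Gb/s
    "800GbE": (8, 100),   # 8 lanes x 100 Gb/s (the official ETC spec)
}

for name, (lanes, per_lane) in lane_configs.items():
    print(f"{name}: {lanes} x {per_lane} Gb/s = {lanes * per_lane} Gb/s")
```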
I use ZFS for backups and snapshots, a simple directory-sync Docker container for everything else, and a 20 TB Synology for an extra copy of home mission-critical data (family pictures and documents, Windows/macOS images). All that data lives on Unraid 1 too.
Unraid 1 (mainly Docker/Plex/Windows VMs/emulators/backup)
13th Gen Intel® Core™ i7-13700K @ 5346 MHz
MPG Z690 FORCE WIFI
128GB DDR5
2 TB NVMe
64 TB of HDDs (4x 16 TB) in a ZFS array
dual 25 GbE Mellanox NIC
iGPU SR-IOV (7 devices)
Main PC (13900K)
ASRock Taichi Z790
48 GB DDR5 @ 7000 MT/s
MSI 4090
1TB Samsung 990
100 GbE Mellanox NIC
All connected with my new Dell Networking S4112F-ON (12x 10 GbE SFP+ plus 3x 100 GbE).
RDMA is enabled on all devices, but the system is mainly limited to about 3 GB/s.
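For context, a quick back-of-the-envelope conversion of that 3 GB/s figure (these numbers come from the post above, not from any measurement of my own):

```python
# 3 GB/s of payload is roughly 24 Gbit/s on the wire, i.e. about a quarter
# of what a 100 GbE link can carry, so the bottleneck is likely storage or
# PCIe rather than the network itself.
observed_gbyte_per_s = 3.0
observed_gbit_per_s = observed_gbyte_per_s * 8        # bytes -> bits
link_gbit_per_s = 100.0
utilisation = observed_gbit_per_s / link_gbit_per_s

print(f"{observed_gbit_per_s:.0f} Gbit/s = {utilisation:.0%} of a 100 GbE link")
```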