I have a Windows PC and would like to up my game by going to a 25Gb NIC (single- or dual-port) or faster. 25Gb is the most my switch can support at the moment, but if I ever get something faster, I can always bypass the switch and direct-connect to my NAS.
I’m honestly not sure which to buy or what to look for since this is a very enterprise market.
I'd prefer a card that runs at PCIe 4.0 or newer if possible, but I can make do with an older generation if that's all that exists.
At this point, I don’t mind paying ~$500 or so for a card (even though that’s ridiculously expensive for a consumer NIC).
I have moved most of my data to my NAS on an all-SSD array, and I'm only connected over the motherboard's built-in 2.5Gb NIC.
Currently, I only have an open-ended PCIe 4.0 x1 slot free in a safe spot on my board.
There's an x16 (x8 electrical) slot available right below my RTX 3090, but a card there would cover some of its fans, and if I ever upgrade to a 4090 or higher, that slot will be gone. My other x16 (x4 electrical) slot currently has a graphics card in it for more display outs.
To get even 10Gb speeds at PCIe x1, I need at least a PCIe 4.0 card. That's why I'm thinking a 25Gb card would come with PCIe 4.0. I previously looked for PCIe 4.0 10Gb NICs, but the options didn't seem ideal.
It's not necessarily cheap, but it does exist, which means there've gotta be other cards like it. Even though it's a 4-port card, I only need one port for now, and that one port should have the full PCIe bandwidth available to it.
Might be easier/cheaper to get a mainboard with more PCIe lanes at that point; then any SFP28 NIC from the last 10 years would be suitable. I go for the cheaper PCIe 3.0 dual-port ConnectX-5s myself.
You only get about 16 gigabit from a PCIe 4.0 x1 slot, and minus overhead you're probably closer to 14Gbps of LAN speed. You have to use an x4 slot if you want to do 25Gb LAN.
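Quick back-of-the-envelope in Python, if anyone wants to check the math (the 88% protocol-efficiency factor is just my assumption for TLP-header/flow-control overhead, not a measured number):

```python
# Per-lane line rate after 128b/130b encoding:
# PCIe 3.0: 8 GT/s -> ~7.88 Gbps/lane; PCIe 4.0: 16 GT/s -> ~15.75 Gbps/lane.
PER_LANE_GBPS = {"PCIe 3.0": 8 * 128 / 130, "PCIe 4.0": 16 * 128 / 130}
PROTOCOL_EFFICIENCY = 0.88  # assumed factor for TLP headers / flow control

for gen, per_lane in PER_LANE_GBPS.items():
    for lanes in (1, 4):
        raw = per_lane * lanes
        usable = raw * PROTOCOL_EFFICIENCY
        print(f"{gen} x{lanes}: {raw:5.1f} Gbps raw, ~{usable:4.1f} Gbps usable")
```

That lines up: a 4.0 x1 link tops out around 14Gbps usable, a 3.0 x1 can't even feed 10Gb, and you need x4 (even at 3.0) before 25Gb fits.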
When Nvidia bought Mellanox, they didn't transition all the resources very well; most are still either unavailable or very hard to find for anything ConnectX-related. Nvidia only cares about its newest models, the stuff beyond the ConnectX series, and put a bare minimum of work into changing the websites over from Mellanox to themselves in the transition.

It's a real shame, because Mellanox had the best driver support of any company I knew of: you could often find drivers for newer models all the way back to Win XP, and drivers for old EOL models all the way up to Win 10. Mellanox did this because their NICs are heavily used in the financial industry for high-speed trading, and sometimes those companies had a custom application that was built for XP and wasn't being replaced any time soon, but they always wanted faster hardware.
I'm on 2.5Gb right now, so even only getting 4Gb out of a 10Gb link would be a 60% gain over my entire current connection speed.
My goal is the fastest connection available in this configuration until I can eventually remove that 2nd graphics card (which would require buying an 8K2K display or swapping my side monitors to DisplayLink USB).
I was thinking of moving my RTX 3090 down a slot and putting this NIC above it, but that causes other logistical issues :(. 3-slot graphics cards suck!
Thing is, I doubt you'll get a 25Gb switch in the next 10 years that you can put next to your workspace without swapping out the fans, so a direct connection without a switch will probably be the permanent state.
Beyond that, anything above a Mellanox CX-3 is wasted money on an X570 board if you just want to use the card plain in Windows, provided the board has 4 PCIe lanes free.
I get 2.5GB/s via iSCSI (TrueNAS) to my Windows VM with a CX3 Pro in IP mode at 40Gb/s, without SR-IOV, just a plain Linux bridge and the VirtIO driver with a bit of tuning.
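If anyone wants to verify that kind of number on their own setup, here's a crude Python sketch that just times sequential reads off the mounted iSCSI volume (the path is made up; use a file bigger than your RAM so the page cache doesn't inflate the result, and use iperf3 first if you want raw line rate):

```python
import time

PATH = r"I:\bigfile.bin"  # hypothetical file on the iSCSI-backed volume
CHUNK = 1 << 20           # 1 MiB unbuffered reads

total = 0
start = time.perf_counter()
with open(PATH, "rb", buffering=0) as f:
    while True:
        buf = f.read(CHUNK)
        if not buf:
            break
        total += len(buf)
elapsed = time.perf_counter() - start
print(f"read {total / 2**30:.1f} GiB in {elapsed:.1f}s -> {total / elapsed / 1e9:.2f} GB/s")
```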
I get it, if you want to use RDMA or SR-IOV, sure, get a CX-5/6, but then also get a system that can actually make use of it.
I thought about MikroTik before I bought my 10Gb switch, and every review I checked said even the 10Gb version isn't that quiet.
In the end I bought a Zyxel 1930 and replaced the fan.
PS: OK, I watched the video. I didn't know this model yet, but I still doubt I could work next to it.
Yep, but the 10Gb version has two PSUs plus the fans, and can't be powered via PoE… remove the PSU cooling from the chassis, remove the 8x 10GBASE-T power-conversion circuits… maybe there's hope…
Why work next to it? Why not stick it in a different room, rack, closet, or garage? It's SFP; you can run fiber up to 1km without even needing special optics.