Good way to get 10Gb Ethernet for Windows?

I have a 10Gb Ethernet adapter on my new SSD NAS, and my main rig, where I store most of my data, only has a 2.5Gb NIC.

I want to get a 10Gb NIC compatible with Windows 11.

Do you have any suggestions on what I should use? I really want backups to be faster as the limiter on my NAS is the 2.5Gb link from my PC. I don’t do any sort of editing or data loading from the NAS in Windows.

Options

  1. I have a free PCIe x16 slot, but it sits right under my RTX 3090. If I upgraded to an RTX 4090-sized card, it wouldn’t fit.
  2. Another idea is using the PCIe x1 slot, which shouldn’t block the graphics card’s fans that badly. Because it’s PCIe 4.0, that’s about 16Gb/s of bandwidth, provided a 10Gb NIC exists in that format.
  3. Lastly, I thought about getting a USB adapter. I have a 20Gb USB-C port as well as 10Gb USB-A ports. There’s no Thunderbolt, but I could do Thunderbolt via a PCIe slot if I really wanted.

PCIe 3.0 is 8 GT/s, meaning the bandwidth per lane is approximately 985 megabytes (not megabits) per second.

So, you really only need a PCIe 3.0 x4 slot to fully utilize a 10Gb NIC.

PCIe 4.0 is 16 GT/s, exactly double, so you would only need an x2 slot on PCIe 4.0.

PCI Express bandwidth is measured in gigatransfers, which don’t translate directly into throughput because of overhead. PCIe 3.0 and 4.0 both use a 128b/130b encoding scheme, which accounts for part of that overhead. All this to say, it’s not a clean 2:1 (PCIe 3.0) or 1:1 (PCIe 4.0) translation when it comes to a 10G NIC. Not to mention that the NIC will have control overhead of its own.
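If you want to sanity-check that math, here’s a quick back-of-the-envelope sketch in Python. It only accounts for the 128b/130b line code, not the TLP/flow-control framing or the NIC’s own control traffic, so real-world throughput lands somewhat lower:

```python
# Rough per-lane PCIe throughput vs. what a 10GbE NIC needs.
# Only the 128b/130b line-code overhead is modeled here; TLP headers,
# flow control, and NIC control traffic eat into this further.

GBE10_BYTES_PER_SEC = 10e9 / 8  # 10 Gbit/s of payload ~= 1.25 GB/s

def pcie_lane_bytes_per_sec(gtps: float) -> float:
    """Raw line rate (GT/s) minus 128b/130b encoding overhead, in bytes/s."""
    return gtps * 1e9 * (128 / 130) / 8  # 1 bit per transfer, 8 bits per byte

for gen, gtps in [("PCIe 3.0", 8.0), ("PCIe 4.0", 16.0)]:
    lane = pcie_lane_bytes_per_sec(gtps)
    print(f"{gen}: ~{lane / 1e6:.0f} MB/s per lane, "
          f"x1 covers {lane / GBE10_BYTES_PER_SEC:.0%} of a 10GbE link")
```

That prints roughly 985 MB/s per lane for 3.0 (so an x1 slot covers ~79% of a 10GbE link) and ~1969 MB/s for 4.0 (x1 covers it with headroom), before the extra protocol overhead.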

Do you have a supported motherboard?


I don’t currently have any 10Gb hardware, so I can’t speak to any individual consumer product; though I do know the Intel X540-T2 10G NIC was the bee’s knees back when I was managing datacenter hardware. (It’s old though, ~2012.)

Update: I stand corrected, there are PCIe 4.0 10GbE adapters now, apparently. Gigabyte seems to have an x4 adapter.

There aren’t really any PCIe 4.0 10GbE adapters available, AFAIK. 3.0 ones are expensive; 2.0 ones are cheap.
The X540-T2 can be had fairly cheap on eBay or AliExpress, and is probably your best bet for a problem-free experience.

PCIe 3.0 x4 should be sufficient for two 10GbE ports, and PCIe 4.0 x1 should be enough for one. There just aren’t any adapters for that, because the consumer market is a bit crap when it comes to practical add-in cards that aren’t graphics cards, and datacenters care more about higher-bandwidth cards, or cards with more ports, than about filling x1 slots that don’t even exist on server boards.
It’s sad, I know, I want one too.


My thinking is I can get a PCIe 4.0 card and put it in my x1 slot. Then I can run some speed tests to see if I still get the full 10Gb of bandwidth through a single connection.

Maybe someone else has already tried?

It should be plenty for a single 10G port.


I can’t seem to find any 10Gb PCIe 4.0 cards though. Do you know of any?

I saw a Gigabyte card listed as 4.0, but I think the listing was a lie; elsewhere it’s listed as 3.0.
So it looks like there aren’t any 4.0 10GbE cards, and certainly no x1 ones.
In theory, you should be able to get about 80% of the bandwidth with a 3.0 card in the x1 slot, though.

Looks like there is an option:

I made no attempt to check for Win11 support/drivers.

I’m using a TP-LINK TX401 10GbE NIC and PCIe 3.0 x4 is just fine. I’m getting the full 10Gbit in both directions without problems. It’s a retail card with the Marvell AQC107 chip. Runs on everything. Lists Windows 10 + Windows Server + Linux as supported on the box.
It was plug&play under Linux; I had the NIC plugged in and running iperf3 within 2 minutes. Full bandwidth over a 5m Cat6a cable, full duplex.

Can definitely recommend this consumer NIC.
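If anyone wants to script that kind of iperf3 check instead of eyeballing the console output, here’s a minimal sketch. It assumes iperf3 is on the PATH on the Windows box, the NAS is running `iperf3 -s`, and 192.168.1.50 is a placeholder for the NAS’s address:

```python
# Minimal iperf3 wrapper to sanity-check a 10GbE link in both directions.
# Assumes iperf3 is installed on both ends and the NAS runs `iperf3 -s`.
import json
import subprocess

NAS = "192.168.1.50"  # placeholder; substitute your NAS's IP

def run_iperf(reverse: bool = False) -> float:
    """Run a 10-second TCP test and return throughput in Gbit/s."""
    cmd = ["iperf3", "-c", NAS, "-t", "10", "-J"]  # -J = JSON output
    if reverse:
        cmd.append("-R")  # reverse the direction: NAS -> PC
    out = subprocess.run(cmd, capture_output=True, text=True, check=True).stdout
    result = json.loads(out)
    return result["end"]["sum_received"]["bits_per_second"] / 1e9

print(f"PC -> NAS: {run_iperf():.2f} Gbit/s")
print(f"NAS -> PC: {run_iperf(reverse=True):.2f} Gbit/s")
```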

I would advise against getting Aquantia-based NICs; they’re essentially dead, with little to no support from Marvell. If you want 10GbE, just go with anything based on Intel, Broadcom, or Chelsio.

I think the story is that if you’re getting an Aquantia card, you want to make sure you get at least an AQC107S – the plain AQC107 is obsolete/deprecated for support.
Is Intel still developing network gear, or is it all just legacy gear now? They have the 520/540/550/710 offerings – I’m not sure which is which, but I remember hearing to avoid one of those for some non-existent feature reason.
I have a Chelsio T6225-SO-CR that I bought for my MacBook Pro – maybe Chelsio’s Windows support is better, but the latest macOS drivers they offer are for Mojave.
Mellanox/Nvidia cards haven’t been mentioned. I think the ConnectX-3 were 10G. I have some of the ConnectX-5 25G cards (which are backwards compatible with 10G) and they’ve been well supported in Windows, Linux, and VMware.

If you can do Thunderbolt on your workstation AND NAS, you might be able to do a high-speed backup via Thunderbolt Networking, which needs only the Thunderbolt connection itself. I’ve made that work with Mac Time Machine backups. There may be some issues with Windows:
Thunderbolt Networking is Broken in Later Versions of Windows 10

Nice find. OWC always has unique stuff. That AQC113 card looks to be the right chipset to get 10GbE on a PCIe4 x1 link.

Regarding the end of life of Aquantia NICs, Marvell is supporting them fairly well and is coming out with new AQC chipsets; they have been rebranded to “mGig” and serve the prosumer part of Marvell’s networking product stack, but their chipset names all still start with AQ.

The Marvell AQC113, 114, and 115 chips are indeed capable of running over a PCIe 4.0 x1 interface, according to Marvell themselves:

Your x1 slot will need to be open-ended, though, to accept an x4 card even though only one lane will get used.

The current Marvell 10GbE “FastLinQ” Ethernet adapters are indeed PCIe Gen4 x2. An example from an ASUS motherboard:

You could also use the low-profile adapter and an x1-to-open riser cable or card, since 90% of x1 slots aren’t open-ended for some reason. You lose some stability, though, since the bracket won’t lock in.

There are also checksums and addresses and so on with each transfer op.

And not every transfer carries user data - there’s a bit of signaling on top of the data queues, interrupts when space opens up in the buffers so the driver can write more, some hardware maintenance traffic, and so on.

Increasing the PCIe transaction size (maximum payload size), all the way up to 4k (4 kilobytes per transfer), helps if you have the setting in your BIOS/UEFI/firmware/drivers. It’s like jumbo frames for PCIe (although still tiny, it helps a lot).


Rule of thumb for network cards and PCIe: 20-30% is overhead.
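If you’re curious what Max Payload Size your devices actually negotiated, on a Linux box (the NAS side, say) you can pull it out of `lspci -vv`. A rough sketch follows; run it as root or the capability lines may be hidden, and note that on Windows this setting usually only lives in the BIOS/UEFI, if the board exposes it at all:

```python
# Rough sketch: dump the PCIe Max Payload Size lines that lspci reports.
# The value under DevCap is what a device supports; under DevCtl it is
# what was actually negotiated. Needs pciutils; run as root.
import subprocess

out = subprocess.run(["lspci", "-vv"], capture_output=True, text=True).stdout

device = None
for line in out.splitlines():
    if line and not line[0].isspace():   # un-indented line = new device header
        device = line.strip()
    elif "MaxPayload" in line and device:
        print(f"{device}\n    {line.strip()}")
```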

My 1x slot is open-ended.


How do I set the PCIe payload size? Where would that typically be?

The AQC113C is Marvell, right? Any issues there?

Is the issue that consumer cards aren’t PCIe 4.0, or is it that most good 10Gb cards don’t have drivers because the server world has moved on to 25Gb+?

I’m trying to figure out why there’s this huge hole in the market for 10Gb NICs.

Yeah, the AQC113C is Marvell now, but it’s a continuation of Aquantia’s product IP. It has current drivers for the usual-suspect OSes; where it’s missing drivers is ESXi and some of the other type 1 hypervisors.

In case you missed it buried in that Reddit post, this is essentially what you’re looking for:
