10Gb NIC/Windows 10 buffering issue

Hey all! I recently set up a 10Gb NIC and switch between my main PC and my Unraid server and ran into a weird speed issue that's making me want to pull my hair out. Before getting into the nitty-gritty, here are the specs for both machines.

Main PC:
Windows 10 Pro
Asus Z790 Hero
i9-13900k
64GB G.Skill Trident Z5 DDR5-6400 (F5-6400J3239G16GX2-TZ5RK)
HPE 10GB 561T
C: drive - Seagate Firecuda 530 2TB
D: drive - SK Hynix P41 & Samsung 980 Pro (spanned dynamic volume, 4TB minus headroom)

Server:
Asrock Z690 Pro RS
i5-12400
32GB TeamGroup 3200MHz
HPE 10GB 561T
cache drive - Samsung 980 Pro 2TB

When uploading from the PC to Unraid I was getting approx. 220 MB/s, but when moving a single large file from Unraid to the PC it was the expected 1.05 GB/s.
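(For context on those numbers: 10 Gb/s works out to 1.25 GB/s of raw line rate, and roughly 1.1 GB/s is about the practical ceiling for a single SMB transfer after protocol overhead - so the 1.05 GB/s read direction looks healthy, while 220 MB/s is only about 1.8 Gb/s on the wire.)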

Based on recommendations from the Unraid forums and Reddit, I ran iperf and got these two results: Result 1 & Result 2.
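For anyone who wants to reproduce the test, this is roughly what that looks like with iperf3 (the server IP here is a placeholder):

```
# On the Unraid side (server):
iperf3 -s

# On the Windows side (client):
iperf3 -c 10.0.0.2         # PC -> server, single TCP stream
iperf3 -c 10.0.0.2 -R      # reverse direction: server -> PC
iperf3 -c 10.0.0.2 -P 8    # eight parallel streams
```

If eight parallel streams hit line rate but a single stream doesn't, the limit is per-connection (TCP windowing, interrupt handling) rather than the disks.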

Based on recs from the above, I also went into the network adapter's advanced settings and changed the following:

- Disabled Interrupt Moderation / interrupt moderation rate
- Disabled IPv4 checksum offload
- Disabled TCP and UDP checksum offload for IPv4 & IPv6
- Enabled Jumbo Packet
- Set Transmit Buffers to 16384 (max) and Receive Buffers to 4096 (max)

Doing this did increase the transfer speed to approx. 425 MB/s.
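If anyone wants to script those changes instead of clicking through Device Manager, the standard cmdlets look something like this - note that the adapter name and the exact DisplayName/DisplayValue strings vary by driver, so treat these as placeholders:

```
# List every tunable the driver exposes, with current values:
Get-NetAdapterAdvancedProperty -Name "Ethernet 2"

# The changes described above (strings depend on the driver):
Set-NetAdapterAdvancedProperty -Name "Ethernet 2" -DisplayName "Interrupt Moderation" -DisplayValue "Disabled"
Set-NetAdapterAdvancedProperty -Name "Ethernet 2" -DisplayName "Jumbo Packet" -DisplayValue "9014 Bytes"
Set-NetAdapterAdvancedProperty -Name "Ethernet 2" -DisplayName "Transmit Buffers" -DisplayValue "16384"
Set-NetAdapterAdvancedProperty -Name "Ethernet 2" -DisplayName "Receive Buffers" -DisplayValue "4096"

# Bounce the adapter so the settings take effect:
Restart-NetAdapter -Name "Ethernet 2"
```

One caveat on jumbo frames: they only help if every hop - both NICs and the switch - is set to a matching MTU, otherwise they can make things worse.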

I ruled out the switch as the bottleneck (a direct connection between the PC and server gave the same results) and pretty much every other hardware bottleneck I can think of (changed ports, transferred to an NVMe Thunderbolt drive) - same results every time.

From my observations it definitely looks like a buffering issue, but I've already maxed out the transmit buffers, and editing the value directly in the registry doesn't change the transfer speed either.
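For reference, those buffer values live under the network class key in the registry; a quick way to read back what's actually set - assuming the driver uses the standardized *TransmitBuffers/*ReceiveBuffers keyword names - is something like:

```
# Network adapter class key; each adapter gets a numbered subkey (0000, 0001, ...)
$class = "HKLM:\SYSTEM\CurrentControlSet\Control\Class\{4d36e972-e325-11ce-bfc1-08002be10318}"

Get-ChildItem $class -ErrorAction SilentlyContinue | ForEach-Object {
    $p = Get-ItemProperty $_.PSPath -ErrorAction SilentlyContinue
    if ($p.'*TransmitBuffers' -or $p.'*ReceiveBuffers') {
        "{0}: TX buffers = {1}, RX buffers = {2}" -f $p.DriverDesc, $p.'*TransmitBuffers', $p.'*ReceiveBuffers'
    }
}
```

If the values read back correctly but throughput doesn't move, the buffers probably aren't the actual bottleneck.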

If anybody has any recommendations my wife would thank you for saving what little hair I have left.


pastebin link
someone has been round the interwebs for a while…

Have you confirmed both NICs have auto-negotiate disabled?
We just had a 10gig setup that would only negotiate to 100/10 unless manually locked to 10GBASE-T.

Have you confirmed no other bottlenecks exist (able to transfer 1+ GB/s locally)?

How’s your hardware utilization during transfers?
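A quick way to check the first two from the Windows side (adapter names will differ):

```
# What did the link actually negotiate to?
Get-NetAdapter | Format-Table Name, InterfaceDescription, Status, LinkSpeed

# TCP global settings -- receive window autotuning matters a lot at 10Gb:
netsh interface tcp show global
```

On the Unraid side, running `ethtool` against the 10Gb interface will report the negotiated Speed as a sanity check.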

Lol yeah, haven’t seen that mentioned in like 10 years and then someone requested the iperf results posted in there. Didn’t even know the site was still up.

Re: auto-negotiate - yes, I did. I also tried moving a file onto a Thunderbolt 4 NVMe drive and still got the same result.

As best I can figure, all bottlenecks have been accounted for and hardware utilization is average. No CPU bottleneck and no NVMe bottleneck.


It's probably the network adapter. Yours is based on the Intel X540, and I just spent a good month messing with my network and adapters trying to get 1 GB/s transfers to/from my NAS. I finally ended up going with old Mellanox ConnectX-3 cards, but imitation cards from Amazon with Intel X520 SFP+ ports also seem to work, as does the AQC107-based card from TP-Link. I had two X540s, one by Dell and another by Supermicro, and both were flaky. My systems are mostly AM4-based, though; interestingly, the least problematic was an old Intel-based X99 ASRock board that most cards worked well with. I think the X540 is very sensitive to heat, because performance would degrade over time and I was able to get it back after reboots, etc.