10Gb read speeds, not so great write speeds

I have two systems, each with an Intel X540-T2 10Gb NIC:

System 1:

  • AMD Threadripper 1900X
  • 32GB RAM
  • RAMDisk (for testing purposes)
  • Windows 10

System 2:

  • Intel Core i9-7900X
  • 32GB RAM
  • RAMDisk (for testing purposes)
  • Windows 10

I have set Jumbo frames along with the rest of the usual things to improve speed.
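For reference, a quick way to double-check those settings on both machines is PowerShell's built-in NetAdapter cmdlets (a sketch; the adapter name "Ethernet" and the Intel "Jumbo Packet" display name/value are assumptions, so adjust for your system):

# List every advanced property the driver exposes (jumbo packet, buffers, offloads, etc.)
Get-NetAdapterAdvancedProperty -Name "Ethernet" | Format-Table DisplayName, DisplayValue

# Intel drivers usually expose jumbo frames as "Jumbo Packet" with values like "9014 Bytes"
Set-NetAdapterAdvancedProperty -Name "Ethernet" -DisplayName "Jumbo Packet" -DisplayValue "9014 Bytes"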

Threadripper Machine:
Read/write speeds on the Threadripper machine using CrystalDiskMark, tested by mapping a Windows share of a RAMDisk set up on the i9-7900X. Read/write speeds are perfect.
(screenshot: Threadripper CrystalDiskMark results)

Intel Machine:
Read/write speeds on the Intel machine using CrystalDiskMark, tested by mapping a Windows share of a RAMDisk set up on the Threadripper 1900X. Read speeds are great but write speeds are not.
(screenshot: Intel machine CrystalDiskMark results)

Read/write speeds should be maxing out here. Before getting these Intel NICs I had two Mellanox ConnectX-2 SFP+ cards and experienced the same slow write speed. I really do not know what the problem is. The issue is definitely not the cards, since it has happened with two separate sets of hardware.

There is no switch involved; the two machines are directly connected.

(screenshot: Threadripper machine's Intel X540 NIC settings)

(screenshot: Intel machine's X540 NIC settings)

Both cards seem to be running at PCIe x8.
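(For the record, the negotiated link can also be confirmed from Windows itself; a quick PowerShell sketch, adapter name assumed:)

# The PCI Express link speed/width fields show what the card actually negotiated
Get-NetAdapterHardwareInfo -Name "Ethernet" | Format-List *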

Am I right in thinking the problem is on the AMD machine, since that is the machine being written to in the Intel machine's test, the one that shows the slower write speed?

Any suggestions would be great, thanks.

Have you tried iperf? SMB could be a bottleneck.
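A minimal run would look something like this, assuming iperf3 and the 10.10.10.x direct-link addresses used later in the thread (both assumptions):

# On one machine, start the server:
iperf3 -s

# On the other machine, test in both directions (-R makes the server send to the client):
iperf3 -c 10.10.10.1 -t 30
iperf3 -c 10.10.10.1 -t 30 -R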

I will try that now. However, if it was SMB, would it not show slow write speeds in both directions?

What presets am I best using for iperf?

iperf test

Threadripper machine as client, Intel machine as server: (screenshot of results)

Intel machine as client, Threadripper machine as server: (screenshot of results)

Well, that is irregular. iperf also has arguments to set the number of simultaneous streams for higher bandwidths (see the sketch a few posts below).
Is there a switch in the network path, or is this NIC to NIC?
Are there many other PCIe devices in each system (all lanes in use)?

It’s NIC to NIC. Any idea what those arguments are?

Threadripper system: max of 64 lanes (I think)
GPU: x16 lanes
NIC: x8 lanes (placed in an x16 PCIe slot)
1 NVMe M.2: x4 (I think)

Intel system: max of 44 lanes
GPU: x16
2 NVMe M.2: x8 total (I think)
NIC: x8 lanes (placed in an x16 PCIe slot)

Does that look OK? Am I missing any lanes that are automatically taken?
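For reference, the simultaneous-stream arguments look roughly like this (a sketch assuming iperf3; the IP is a placeholder for the other machine's direct-link address):

# -P sets the number of parallel streams, -t the duration in seconds, -R reverses direction
iperf3 -c 10.10.10.1 -P 4 -t 30
iperf3 -c 10.10.10.1 -P 4 -t 30 -R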

Simultaneous stream / higher bandwidth test: (screenshot of results)

Not sure what to make of it :confused:

Why am I only getting 7 Gbit/s? I'm absolutely stumped.

You would need to read your motherboard manual to see which M.2 and PCIe slots go to the chipset and which go to the CPU.
The next place I would look is the TCP offload settings, given that the results change depending on which system is the server even though it's a bi-directional test.
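One way to inspect and temporarily toggle the offloads from PowerShell (a sketch; the adapter name is an assumption, and toggling will briefly reset the link):

# Show the current checksum and large-send offload (LSO) state
Get-NetAdapterChecksumOffload -Name "Ethernet"
Get-NetAdapterLso -Name "Ethernet"

# Temporarily disable LSO to see whether the asymmetric result changes, then re-enable it
Disable-NetAdapterLso -Name "Ethernet"
Enable-NetAdapterLso -Name "Ethernet"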

Try a fresh install of Windows 10 2004 on the Threadripper system (delete all existing partitions) if you have a spare drive to boot from, or try a Linux live USB distro, since iperf is cross-platform, and hope that a compatible NIC driver is in that distro. The NICs are native RJ-45 (8P8C) and not SFP?

RJ45, yes, and the M.2 drives go to the chipset.
I have already tried to install the 2004 update on the Threadripper build; it gets to the end, the system restarts, and the update doesn't install :confused:. It's already installed on the Intel machine, as of last night.

RJ45 10-gig NICs can get pretty hot, especially the Intel server ones. Just to make sure, maybe slap a fan at full tilt on each NIC and rerun?

In-place Windows upgrades are not the best approach, as you can have problems like that; you're better off booting from a Windows install USB, deleting the partitions, and starting a new install. Of course, back everything up to an image first with something like Acronis True Image so you can go back if this does not work.

Something is not right with that threadripper system.

Is it definitely the Threadripper system that has the slow write issue?

Act on my suggestions as you will; the fact that the results change based on which system is the server, even though it's a bi-directional test, does not make any sense.

So I suspect it’s a problem with the driver, perhaps the TCP offload settings.
The fact that an in-place upgrade fails on that system means there is something wrong with that Windows install.

Apologies, I was not saying you were wrong; I just wanted to clarify, from the results I have shown so far, that it is indeed the Threadripper machine at fault.

Unnecessary, no offence taken.

Most likely.

I will install a fresh Win 10 and test again. FYI, All ‘TCP Offload’ settings are set to ‘Rx & Tx Enabled’ in the advanced settings of the NIC.

Best of luck to you; if it does not resolve the issue, it will at least cross off some possibilities. The only problem will be where to go from there: swap the NICs over and retest, or get a third PC and test between it and the Intel system, then between the third system and the Threadripper, and compare results.

Threadripper: (screenshot of results)

Intel: (screenshot of results)

Why are the buffer sizes different when they have been set to exactly the same values in the advanced settings?
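(If that is the socket buffer size iperf prints at the start of a run, it comes from the OS defaults rather than the NIC's advanced settings; pinning it explicitly with -w would rule it out. A sketch, assuming iperf3:)

# Force the same 1 MB socket buffer on both ends of the test
iperf3 -c 10.10.10.1 -w 1M -t 30
iperf3 -c 10.10.10.1 -w 1M -t 30 -R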

FYI, it's definitely not the NICs; the same thing happened when I had two Mellanox ConnectX-2 cards directly connected. Exact same problem on the same machine.

How about at least throwing a fan on the card in that system, then? It is unlikely to be the cause, but it's a super quick test to do.

Could be two different SKUs with different hardware or firmware; check the cards physically to see if they are identical. I know the X520 has upgradable firmware.

That reminds me: you mentioned configuring jumbo frames, so it might pay to check your MTU. Run cmd, then run:

ping -f -l 1472 10.10.10.1
ping -f -l 1473 10.10.10.1
ping -f -l 1500 10.10.10.1
ping -f -l 1501 10.10.10.1
ping -f -l 8972 10.10.10.1
ping -f -l 8973 10.10.10.1
ping -f -l 9000 10.10.10.1
ping -f -l 9001 10.10.10.1

If a payload of 1473 or 1501 fails (with -f set, so the packet cannot fragment), jumbo frames aren't working.
If a payload of 8972 or 9000 returns a ping, jumbo frames are working (assuming a 9000-byte MTU; -l sets the ICMP payload, and the IP/ICMP headers add 28 bytes, so 8972 + 28 = 9000).
You will also need to try both ways, with 10.10.10.2 or whatever the other machine's address is.
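You can also check the MTU Windows is actually using on each interface; the 10GbE link should report roughly 9000 if jumbo frames are active:

netsh interface ipv4 show subinterfaces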