PCIe Lane Issues

I think I am out of PCIe lanes on my main machine, causing my NVMe drives to run slower and, most noticeably, my 10GbE SFP+ networking to run at only around 200-300 Mbps.

After months of diagnosing my home Unraid server with 10-gig networking, something just popped into my head late last night: on my main machine, I think I am out of PCIe lanes and sharing them, which would explain why my 10GbE Mellanox ConnectX-2 card runs like shit, and why my NVMe drives also operate slower than they should (I was basing my testing around these NVMe drives). Now I'm not so sure any of my testing was accurate at all.

The setup of my desktop is as follows:
Ryzen 9 3900X
ASUS TUF X570-Plus (Wi-Fi)
NVMe 1: Samsung 970 256 GB
NVMe 2: Intel 660p 1 TB (newer, replaced another 256 GB Samsung 970 Pro)
GTX 1080 Ti
Mellanox ConnectX-2

According to what I have read, the 3900X has 24 PCIe lanes, 4 of which are used as the interconnect to the X570 chipset, so you have 20 lanes you can actually use. I have confirmed the 1080 Ti is using x16, so I only have 4 left at this point. The two NVMe drives use x4 each, and the Mellanox card is an x8 PCIe 2.0 card, but I have read that these cards function poorly when not in x8 mode, even though x4 bandwidth at PCIe 2.0 should be enough to cover 10 gig.
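A quick sanity check on that last claim, as a rough sketch in Python (the per-lane figures are the commonly quoted post-encoding numbers, not anything measured on this board):

```python
# Approximate usable bandwidth per PCIe lane, in Gbit/s, after
# encoding overhead (8b/10b for Gen2, 128b/130b for Gen3/Gen4).
PER_LANE_GBPS = {"gen2": 4.0, "gen3": 7.88, "gen4": 15.75}

def link_gbps(gen: str, lanes: int) -> float:
    """Rough usable bandwidth of a PCIe link."""
    return PER_LANE_GBPS[gen] * lanes

for gen, lanes in [("gen2", 4), ("gen2", 8), ("gen3", 4)]:
    bw = link_gbps(gen, lanes)
    verdict = "covers" if bw >= 10 else "does NOT cover"
    print(f"PCIe {gen} x{lanes}: ~{bw:.1f} Gbit/s ({verdict} 10GbE line rate)")
```

So even at Gen2 x4 the link itself has roughly 16 Gbit/s available, well above what 10GbE needs; the question is what width the card actually negotiates.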

Is my board doing some weird bifurcation between the two NVMe drives and the Mellanox card that has been causing my issues all this time? I can never get over 200-300 Mbps on my LAN on my 10-gig card. Confirmed with iperf and NVMe flash on both sides, between my server with NVMe cache drives and the machine I am referring to.

1080 Ti = 16
NVMe 1 = 4
NVMe 2 = 4
Mellanox ConnectX-2 = 8
It seems I need 32 PCIe lanes and only have 20 to work with? Anyone have experience with this?
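Just to put the tally in one place (the widths here are what each device wants per its spec sheet, which may not be what the board actually grants):

```python
# Hypothetical lane budget for this build; numbers are requested
# widths from spec sheets, not what the slots actually negotiate.
CPU_LANES_USABLE = 20  # 24 on the 3900X minus the 4-lane chipset link

requested = {
    "GTX 1080 Ti":          16,
    "NVMe 1 (Samsung 970)":  4,
    "NVMe 2 (Intel 660p)":   4,
    "Mellanox ConnectX-2":   8,
}

wanted = sum(requested.values())
print(f"Requested {wanted} lanes vs {CPU_LANES_USABLE} usable from the CPU "
      f"-> short by {wanted - CPU_LANES_USABLE}, so something has to run behind the chipset")
```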

You have 16 lanes from the PCH as well, so my guess is you're trying to squeeze too much data through that x4 link from the PCH to the CPU all at once. If you can find a block diagram of your board, in the manual or elsewhere, you should be able to see how the lanes are allocated between the two and balance things out better.

I'd probably start by trying to run the 1080 Ti at x8 so your Mellanox card can use the other 8 lanes. Unless you're hammering both SSDs at once, they'll probably be fine sharing the x4 link from the PCH.

Thank you for the reply, I will look up my board and see if there are any schematics. That being said, how can one determine how many PCIe lanes a particular device is using? I know how to find what the GPU is using, but what about other devices like my network card or the NVMe drives?

On Linux, it's part of the output from lspci -vv.
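If it helps, here's a minimal sketch (assuming Linux with pciutils installed, run with sudo so the capability blocks are visible) that pulls just the supported vs. negotiated width and speed out of lspci -vv:

```python
# Print each PCIe device's supported (LnkCap) vs negotiated (LnkSta)
# link speed and width, parsed from `lspci -vv` output.
import re
import subprocess

out = subprocess.run(["lspci", "-vv"], capture_output=True, text=True).stdout

header = ""
for line in out.splitlines():
    if line and not line[0].isspace():
        header = line  # device line, e.g. "0a:00.0 Ethernet controller: Mellanox ..."
    elif "LnkCap:" in line or "LnkSta:" in line:
        m = re.search(r"Speed ([^,]+), Width (x\d+)", line)
        if m:
            kind = "supports" if "LnkCap:" in line else "running at"
            slot = header.split(" ", 1)[0]
            print(f"{slot:>10}  {kind:<10}  {m.group(1)} {m.group(2)}")
```

If the Mellanox card reports LnkCap x8 but LnkSta x4, that tells you which slot (and whose lanes) it actually landed on.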

On windows, I’m not sure what the equivalent would be. Some quick googling points to Windows Management Instrumentation commands, so maybe have a look there.

It seems my BIOS will not allow me to adjust the lane width, just the PCIe generation it runs at, i.e. Auto, 4, 3, 2, 1.

This is really the only documentation about lanes in the manual.

Also, there is this setting in the BIOS, which I am not sure is related, but it was the only other PCI-related setting in there:

Yes, you're "running out" of PCIe lanes.
Your GPU will always be x16 (from the CPU) because there is no PLX switch.
Your second x16 slot is only electrically x4 from the PCH, so your 10GbE card only runs at x4 anyway.
One of your NVMe SSDs is x4 from the CPU and the other is x4 from the PCH.
There should, however, be enough bandwidth left as long as you don't also hit other devices connected to the PCH, since the interconnect between the CPU and the PCH is Gen4, while your devices behind it use Gen3 or Gen2, both at only x4.
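A rough sketch of that headroom argument, assuming the second NVMe and the Mellanox card both sit behind the X570 as described (per-lane figures are the usual approximate post-encoding numbers):

```python
# Rough headroom check for the CPU <-> X570 uplink, assuming the second
# NVMe and the Mellanox card both sit behind the chipset.
PER_LANE_GBPS = {"gen2": 4.0, "gen3": 7.88, "gen4": 15.75}  # post-encoding, approximate

uplink = PER_LANE_GBPS["gen4"] * 4   # chipset uplink: PCIe 4.0 x4
nvme2  = PER_LANE_GBPS["gen3"] * 4   # Intel 660p link ceiling (the drive itself is slower)
nic    = PER_LANE_GBPS["gen2"] * 4   # ConnectX-2 in the electrically x4 slot

print(f"Uplink ~{uplink:.0f} Gbit/s vs worst-case downstream demand ~{nvme2 + nic:.0f} Gbit/s")
print("10GbE alone only needs 10 Gbit/s, so the uplink has headroom "
      "unless other PCH devices are busy at the same time")
```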
