Motherboard PCIe lane allocation

So I was thinking about motherboard chipset layouts and PCIe lane allocation. It seems that since SLI and CrossFireX went away, CPU vendors have been reducing the number of PCIe lanes their desktop CPUs provide (cough, Intel cough). Current mainstream CPUs offer ~20 lanes, while older CPUs could have 40 (dual x16 slots plus a third x8 or x4 slot).

Even using a secondary PCIe slot seemingly cuts the primary GPU down to x8 mode. Why don't CPU vendors provide more lanes from the CPU so that, for example, a PCIe M.2 adapter card could run four (x4) M.2 SSDs, instead of two (x2) secondary M.2 slots through the chipset :man_shrugging:t2:

Is there a reason for reduced device bandwidth?

Lane counts got greatly reduced when we went from the northbridge providing the lanes to the CPU providing most of them. The chipset still provides lanes, but very few, and everything hanging off the chipset is now constrained to sharing 4 lanes of actual bandwidth back to the CPU.
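As a rough sketch of what that sharing means in practice (the device mix below is hypothetical, and the per-lane figures are the usual spec numbers):

```python
# Back-of-the-envelope: everything hanging off the chipset shares one x4 uplink.
# Per-lane throughput uses the usual PCIe spec figures (GB/s, one direction,
# after encoding overhead); the downstream device mix is purely hypothetical.
PER_LANE_GBPS = {3: 0.985, 4: 1.969, 5: 3.938}  # PCIe gen -> GB/s per lane

def link_bandwidth(gen: int, lanes: int) -> float:
    """Approximate one-direction bandwidth of a PCIe link in GB/s."""
    return PER_LANE_GBPS[gen] * lanes

downstream = {            # hypothetical chipset-attached devices: (gen, lanes)
    "M.2 SSD #1": (4, 4),
    "M.2 SSD #2": (4, 4),
    "10GbE NIC":  (3, 4),
}

uplink = link_bandwidth(4, 4)  # a typical x4 Gen4 chipset uplink
total = sum(link_bandwidth(g, l) for g, l in downstream.values())

print(f"chipset uplink:          {uplink:5.1f} GB/s")
print(f"sum of downstream links: {total:5.1f} GB/s")
print(f"oversubscription:        {total / uplink:.1f}x")
```

Each device still runs at full speed on its own, but if several are busy at once they contend for that single x4 uplink.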

PCIe lanes take a good number of transistors and a lot of pins. Check out this die shot to see how much space the PCIe takes up:


If we added another 16 lanes, then all the PCIe would take up more space than the memory controllers do. Anything that uses IO takes a lot of CPU socket pins too. 16 lanes use something around 80-90 pins, last I heard, and routing 16 more lanes around the motherboard is also costly.

edit:

Looks like 82 pins per side, but the slot has contacts on both sides of the card and the pins on either side go to different things, so it is actually 164 pins required for 16 PCIe lanes.
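For anyone who wants to sanity-check the pin math, here's a rough sketch. The four signal pins per lane come straight from how a lane is defined (a TX pair plus an RX pair); the ground and sideband counts are ballpark assumptions on my part:

```python
# Rough pin-count sketch for one x16 link -- ballpark only, not a pinout spec.
LANES = 16
SIGNAL_PINS_PER_LANE = 4   # TX+/TX- plus RX+/RX- differential pairs

signal = LANES * SIGNAL_PINS_PER_LANE  # 64 high-speed signal pins
grounds = LANES                        # assumption: roughly one ground return per lane
sideband = 6                           # REFCLK pair, PERST#, WAKE#, SMBus (approximate)

print(f"signal pins:            {signal}")
print(f"rough per-socket total: {signal + grounds + sideband}")
# That lands in the same 80-90 range mentioned above. The x16 slot itself has
# 164 contacts (82 per side) once 12V/3.3V power, extra grounds, and reserved
# pins are counted as well.
```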


I just posted a topic that's similar in a way to this one, but maybe you can help me understand what I'm trying to do. I want to add a second GPU to run PhysX and Lossless Scaling, but I was wondering exactly how the PCIe bandwidth gets distributed.

I want to have my main card (5090) in slot 1, which is x16, and my second card (2060) in slot 3, which is x4. Will the 5090 be gimped of its bandwidth? And if so, will that reduce its performance?

@EniGmA1987 mentions the main reason that lane counts appear to have been reduced on consumer platforms.

In truth, modern consumer platforms aren't that bad on PCIe lane counts right now; it's just that motherboard manufacturers make weird choices and omit useful PCIe configurations.
Intel’s current consumer platform can do 48 PCIe lanes and AMD’s can sort of almost do 36-ish.

Depends on your motherboard, but I would guess the PCIE3 slot is likely coming off the chipset, as CPU-attached lanes are generally in an x16/x0 configuration, or x8/x8 when you put a second card in. Often you can now also split these x8/x4/x4 using bifurcation for PCIe SSD cards. So if your PCIE3 slot is x4 already, I expect it probably isn't attached to the same set of lanes the GPU is on.
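If you'd rather measure than guess, on Linux the trained width and speed of every PCIe link is exposed in sysfs (the same information `lspci -vv` reports as LnkSta/LnkCap). A minimal sketch:

```python
#!/usr/bin/env python3
"""Print the trained PCIe link width/speed of every device via Linux sysfs."""
from pathlib import Path

def read(attr: Path) -> str:
    try:
        return attr.read_text().strip()
    except OSError:
        return "?"

for dev in sorted(Path("/sys/bus/pci/devices").iterdir()):
    width = dev / "current_link_width"
    if not width.exists():
        continue  # device doesn't expose PCIe link attributes
    print(f"{dev.name}: x{read(width)} @ {read(dev / 'current_link_speed')} "
          f"(max x{read(dev / 'max_link_width')} @ {read(dev / 'max_link_speed')})")
```

Run it with and without the second card installed and compare the GPU's line; on Windows, GPU-Z's Bus Interface field shows the same thing.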

I think the problem is more with motherboard manufacturers than with the CPU. They seem to want to jam every available lane into M.2, WiFi, and USB4; essentially marketing bullet points rather than flexible but boring PCIe slots. I don't know the electrical/computer engineering behind this, so hopefully someone who does will chime in, but I can imagine pushing the bandwidth generations also makes it more challenging to keep as many lanes? I wouldn't think you can just wholesale bump the lane generation and "poof" you have twice the bandwidth, but I really hope it isn't as simple in the other direction either, where getting one Gen 5 lane literally means giving up two Gen 4 lanes, because that would be an insane waste of potential I/O bandwidth.

It does. PCIe 5.0 doesn't really reach past the PEG slot without signal regeneration on the board, and a 5.0 x4 M.2 directly under the dGPU's exhaust is thermally pretty useless. MCIO could be used to get farther, but cost, CPU-socket-adjacent space, and cable routing aren't necessarily better than on-board.

While 5.0 x1 and 4.0 x2 (and multiples thereof) are bandwidth interchangeable, they're not electromagnetically interchangeable. Maintaining signal integrity is easier with fewer lanes to synchronize, which I suspect is one of the motivations for the decline in slots' electrical lane counts.
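For the bandwidth half of that, the per-lane numbers are easy to tabulate from the spec transfer rates and encoding (one direction, ignoring protocol overhead):

```python
# Per-lane PCIe throughput by generation: the raw rate doubles each generation,
# so one 5.0 lane carries roughly what two 4.0 lanes do -- in bandwidth terms only.
GENS = {
    # generation: (transfer rate in GT/s, line-code efficiency)
    1: (2.5, 8 / 10),
    2: (5.0, 8 / 10),
    3: (8.0, 128 / 130),
    4: (16.0, 128 / 130),
    5: (32.0, 128 / 130),
}

for gen, (gts, eff) in GENS.items():
    per_lane = gts * eff / 8  # GT/s -> GB/s, one direction
    print(f"Gen{gen}: {per_lane:5.2f} GB/s per lane, {per_lane * 16:6.1f} GB/s at x16")
```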

I expect this’ll all intensify with PCIe 6.0.

Well, the ASM4242 (USB4) mandate on X870(E) comes from AMD, so that one's not on the mobo manufacturers. And not putting down WiFi or Ethernet only frees two lanes (three if it's a dual Ethernet board), so it doesn't really matter.

Since ATX is a seven-slot form factor, thermally viable options for accommodating one or two 2.5-3.5 slot dGPUs and three M.2s are pretty limited. What I'd consider the best card layouts available are the ones that limit dGPU+second card support and avoid under-dGPU M.2s.

| position | B650 Steel Legend | B650 Live Mixer |
|----------|-------------------|-----------------|
| 1 | CPU 5x4 M.2 | CPU 5x4 M.2 |
| 2 | PEG | PEG |
| 3 | ½ wifi M.2 | |
| 4 | ½ wifi M.2 | |
| 5 | chipset 4x4(16) | CPU 4x4(16) |
| 6 | CPU+chipset 4x4 M.2s | 2 chipset 4x4 M.2 |
| 7 | chipset 4x1 | chipset 4x4(16) |

The B850 versions of the boards push the 4x4 slot down to position six or seven, presumably to accommodate larger dGPUs. I’m not a fan, particularly with the Steel’s CPU 4x4 M.2 moving up to position five.

So, overall, I suspect the M.2 thing is more about current dGPU sizes meaning there's nothing else to do with the lanes than about mobo manufacturers being in love with M.2s. And if the NVMe drives are really going to run to their potential, the additional M.2s are pretty useless with most current controller and NAND packages, as sub-dGPU thermals are inadequate. So some of the lanes of a dual Promontory 21 or Z890 are kinda useless.

It seems to me there's a market niche open for a board laid out on the assumption of at most a two-slot dGPU, maybe two-slot x8. But it doesn't look like any manufacturer cares enough to try it.

There are also the two unassigned x4 CPU PHYs on AM5 and LGA1851 IO dies. There are still a few boards that bring one of those down to a 3.0 x4 slot, though it seems like 5.0 support pretty much means board dielectrics that would permit a 4.0 x4 slot far enough below PEG to be useful.
