My ideal AM5 PCIe layout

Perhaps someone here can tell me why AM5, and desktop CPUs in general, are so limited when it comes to PCIe lanes. Is there a hardware reason or do CPU manufacturers not think we need more?

Working with what I have, and using my X670E Pro Art as an example: I would happily give up the two PCIe 4.0 M.2 slots to get one PCIe 4.0 x8 slot instead of the PCIe 4.0 x2 slot that is there now, which shares lanes with one of the PCIe 4.0 M.2s. Additionally, I would like just one PCIe 5.0 x16 slot, unless they’re feeling generous and want to make both of them full PCIe 5.0 x16 without the need to split them. Making the bottom slot a PCIe 4.0 x8 would make it useful for all kinds of things, including an HBA card.
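To put numbers on that trade (a rough tally in Python; the slot widths are the ones described above, not a verified board spec):

```python
# Lane tally for the trade proposed above (slot widths as described
# in the post; not a verified board spec).
gave_up = {
    "M.2 slot A (PCIe 4.0 x4)": 4,
    "M.2 slot B (PCIe 4.0 x4)": 4,
}
wanted = {"PCIe 4.0 x8 slot (replacing the shared x2 slot)": 8}

# Giving up two x4 M.2s frees exactly the 8 lanes the x8 slot needs.
assert sum(gave_up.values()) == sum(wanted.values())
print(sum(wanted.values()))  # 8
```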

I am not sure what a person needs 4 M.2 slots for anyway, and if a person wanted fast storage, and lots of it, what better way than an HBA card and a backplane or three from Icy Dock? Or an M.2 carrier card in a second full-speed 5.0 x16 slot.

In conclusion, the current PCIe layout of desktop AM5 motherboards is disappointing, nearly useless, and could be so much better. Maybe I should just build my own motherboards…


Basically: a motherboard with all its PCIe expansion exposed.

Computers did look cooler with all those expansion cards in the 486 days.


On that topic, I want the BIOS in a socket again, too.


My guess is that AMD forces certain features. There is one chipset that goes by different names depending on how they are chained (or not) and which features are part of the set. For the high end at least, they are mandated to have 4 PCIe 5.0 lanes go to an M.2 slot. For the 800 series at the high end, an additional 4 lanes go to feed USB4 ports.

As for why the remaining lanes don’t get split up, my guesses…

  • They think fewer lanes per slot will hamper lower-generation PCIe devices: a PCIe 3.0 card in an x4 slot is bottlenecked at PCIe 3.0 x4, even though PCIe 5.0 x4 is equivalent in bandwidth to PCIe 3.0 x16.
  • PCIe switches are expensive, so ain’t nobody building one into a motherboard. The economics favor just getting a workstation or server motherboard.

I really wish more add-in cards followed the Samsung approach with their 990 EVO Plus SSDs: fewer lanes of a newer generation, or more lanes of an older generation. The bandwidth remains the same, but on newer hardware that can bifurcate to narrower links, a newer PCIe generation effectively yields more device connections. That would cater to the needs of more users, on old and new hardware alike.
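A quick back-of-the-envelope check of that equivalence (a minimal sketch; the per-lane GB/s figures are the usual post-encoding approximations, and the 990 EVO Plus’s 5.0 x2 / 4.0 x4 pairing is the trade being described):

```python
# Per-lane throughput roughly doubles each PCIe generation
# (approximate GB/s per lane after encoding overhead).
PER_LANE_GBPS = {3: 0.985, 4: 1.969, 5: 3.938}

def bandwidth(gen: int, lanes: int) -> float:
    """Raw link bandwidth in GB/s for a given generation and width."""
    return PER_LANE_GBPS[gen] * lanes

print(bandwidth(5, 4), bandwidth(3, 16))  # ~15.8 both: 5.0 x4 == 3.0 x16
print(bandwidth(5, 2), bandwidth(4, 4))   # ~7.9 both: the 990 EVO Plus trade
```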


EPYC on AM5 is the worst offender

IDEALLY we’d have 1x M.2 through the PCH, PCIe Gen 4 at most,
2x MCIO Gen 5.0 x4 (because redundancy is real in the enterprise),
and ALL the rest spent on an x16 (or split x8/x8) slot, anything leftover on additional PCIe slots, no onboard audio or video, and a single x4 through the PCH for networking, IPMI, etc…

My Pro Art mobo has 2 PCIe 5.0 M.2 slots from the CPU and 2 PCIe 4.0 M.2 slots from the chipset. It simply makes no sense, considering a PCIe 4.0 M.2 works just fine in the PCIe 5.0 slots. What the motherboard manufacturers have done is make the rest of the PCIe lanes essentially useless unless you want a bunch of M.2 SSDs.

edit: And I haven’t even touched the 4 SATA ports.


Market segmentation and manufacturing optimization. All client CPUs have the same number of usable PCIe lanes (24). Whether it’s a $200 7600 or a $700 9950X, they all get the same amount. Why? Producing a single layout/spec is cheaper than producing many (which would probably need different flavors of IO dies, PCIe IPs, etc.). Thus, to make it profitable: here is your segment, a client CPU with 24 lanes. Want more? Here is Threadripper, 88 lanes, but the cheapest one is $1,500.
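For illustration, here is a rough tally of how that budget tends to be carved up (a sketch; the 28-total/4-to-chipset split is the commonly cited figure for AM5 client CPUs, so treat the exact numbers as an assumption):

```python
# Rough AM5 client CPU lane budget (assumed figures, see note above).
TOTAL_LANES = 28
allocations = {
    "chipset uplink (reserved)": 4,
    "PEG, x16 or x8/x8": 16,
    "M.2, PCIe 5.0 x4": 4,
    "general-purpose x4 (extra M.2 or USB4)": 4,
}
assert sum(allocations.values()) == TOTAL_LANES
usable = TOTAL_LANES - allocations["chipset uplink (reserved)"]
print(usable)  # 24, matching the "24, usable" figure above
```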


Basically this ^^

How is it that none of the manufacturers offer a board with just a couple of USB ports for mouse and keyboard and a load of bifurcated PCIe slots, and then let you bundle in some nice, fully supported expansion cards? You would think someone would differentiate themselves with a BK “have it your way” option. PCIe 5.0 x16 seems like such a waste of lanes.

It’s what AMD decided to require for B650E, B850, and up. So not the mobo manufacturers’ fault. Also the

  • IO die has 4x 10 Gbps + 480 Mbps USB capability (often not all used)
  • Promontory 21 has { 20 Gbps | 2x 10 Gbps } + 4x 10 Gbps + 6x 480 Mbps USB capability (also often not all used)

some of which might as well be brought out to ports, because the hardware’s dedicated that way and offering fewer USB ports doesn’t mean more PCIe lanes.

Generally there’s a chipset lane assigned to Ethernet, another for WiFi, and 2-6 for SATA. Everything else off the chipset is brought out to slots or M.2s. With X870 and X870E, AMD mandates a 5.0 x4 M.2 and USB4 from the remaining CPU x4. With B850, a 5.0 x4 M.2 is again mandated, plus apparently also a 4.0 x4 M.2, as there don’t seem to be any boards bringing the 4.0 x4 out to a slot.
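Put as arithmetic (a sketch of the mandates just described, not an official allocation table), the X870(E) requirements consume every usable CPU lane:

```python
# X870/X870E CPU lane math per the mandates above (illustrative only).
USABLE_CPU_LANES = 24
mandated = {
    "PEG slot(s), x16 or x8/x8": 16,
    "PCIe 5.0 x4 M.2 (AMD-mandated)": 4,
    "USB4 from the remaining CPU x4 (AMD-mandated)": 4,
}
leftover = USABLE_CPU_LANES - sum(mandated.values())
print(leftover)  # 0 - anything else has to hang off the chipset
```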

Standard Level1 forum gripes about this are

  • There’s the x16 PHY and x4 PHYs, but AMD didn’t put in any x8 PHY, and PCIe 5.0 switches to break up PEG are expensive.
  • Too many M.2s and not enough slots.

ASRock LiveMixer’s one of the better, albeit limited, mitigations for the latter.

Not available for purchase (yet) though. Let’s hope it’s not vaporware.

Is it that the mobo manufacturers have to meet minimum PCIe/USB/etc. requirements if they want to use the chipset, or if they want to market it as an X870E etc.? Can’t they offer a “lower end” board, add the second bridge, and use the lanes for whatever they want?

Nice find! Although something was lost in the exchange for better PCIe options: not much external connectivity on offer (e.g., USB4, USB 3.2 Gen 2x2, 10 GbE).

Yes, the chipset.


Supermicro has an AM5 motherboard with an interesting PCIe layout. The cost is quite prohibitive, but I was eyeing it for a VFIO server.
https://www.supermicro.com/en/products/motherboard/h13sae-mf
It has two x16 slots (which divide into x8/x8 if you use them both), an x4 slot, and an M.2 slot.
You can connect a whole 3 PCIe cards to this board! /s
(No idea why the block diagram mentions 2 M.2 slots; they are not in the photos.)
EDIT: it has 2 M.2 slots. I just didn’t see them in the photo.

I wish they gave us a second x4 slot and routed one or two x1 lanes to the M.2s from the chipset (like the Gigabyte MC12-LE0 did).

It’s got the M.2 slots, and they are both 110 mm, so you can put the good M.2 drives in them.

M.2-C1, M.2-C2

Here they are

Mentioned somewhere else, another interesting board is the B850 AI TOP: three x16 (mechanical) slots, x8/x8 from the CPU, and dual 10 GbE.

Yeah, I didn’t see it.

Also, the x4 slot on the board comes from the chipset.

I bought the X670E Pro Art because I wanted all the NVMe slots. I don’t mind the x8/x8 lane configuration on the top two x16 slots. I have a 7900 XTX and I may get a second GPU later.

I also like the onboard 10 GbE.

The top two NVMe slots are great for games and fast storage. The two PCIe 4.0 NVMe slots are fine for my PCIe 3.0 P1600X Optane boot/OS/VM/DB drives. I don’t intend to use the bottom PCIe slot.

I do wish it had more than 4 SATA ports. I’ve got those filled with slower SSDs for media storage.

The DFI RAP310-B650 is a pretty interesting board too, which appears to be able to do x8/x4/x8/x4.

I like the MSI D3051 (D3051GB4N-10G). It has a pair of 10 GbE ports and a pair of M.2 slots running at x4, while also having a full x16 slot.

Shame we can’t get any X670E/X870E boards with IPMI.