Modern consumer motherboards: Where did all the PCIe slots go?

Hmm, my most recent builds have been

| slot 1 | slot 2 | slot 3 | slot 4 | slot 5 |
| --- | --- | --- | --- | --- |
| PEG | x1(16) | x1 | | |
| PEG | x1 | x1 | | |
| PEG | x4(16) | x1 | x1 | x1 |
| PEG | x1(16) | x1 | | |
| PEG | x4(16) | x1 | x1 | |
| PEG | x4(16) | x1 | x1 | x1 |
| PEG | x4(16) | x1 | x1 | |
| PEG | x1(16) | x1 | | |

Technically I see one of the four slots every day I’m in the office. None of the x1s has ever been used, though.

I mean actual PCI slots:


and it’s a new production board too


I have to get by with 28-lane CPUs. :upside_down_face: x2(4) would be more useful than two x1s anyways.

Too bad we never really got a good implementation of PCIe TDM; it could have let PCIe lanes be shared among slots without needing an expensive packet switch.

The wider socket spacing and taller contact geometry make signal integrity on a two-slot PCIe 2.0 TDM link harder than DDR5-5000 2DPC. PCIe 3.0 TDM would probably require an equivalent of stacked CAMM2, and I suspect that'd get pretty iffy at PCIe 4.0's 16 GT/s.

Broadcom pricing makes switches more expensive than they need to be but, if higher-speed TDM's even possible, switching's probably cheaper. With PCIe 5.0's socket-to-socket range being something like 10 cm, signal regeneration's likely to be needed anyways.

If AMD or Intel felt funny, they certainly could hand Broadcom a big L in this department.

Given where AMD is with X870E, cost reduction moves would make sense if the boards are selling well enough to continue offering PEG switching. ASMedia should have most of the pieces if AMD wants to put them together for Zen 6.

With Intel, my sense from Z890 is that there's less interest. I'm not really seeing any reason the chipset couldn't have had an x8 downlink as well as an x8 uplink, which would have enabled x16 + x8 for dual GPU, GPU + HBA, or similar.

I didn’t actually read the thread (sorry), but wanted to add some recent observations:

I’ve come to appreciate that M.2 and x4 slots are very interchangeable, either via cable adapters or via adapters that already come in an M.2 form factor.

SATA ports have also gone missing (or are too few), so there are cheap €50 M.2 adapters with six SATA ports that eat 2 PCIe lanes and fill that gap rather nicely for those who still want dozens of TB, e.g. in RAIDZ1 or 2.
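For a sense of the "dozens of TB" figure, here's a minimal capacity sketch; the drive count and sizes are made up for illustration, and real ZFS usable space comes in a bit lower due to padding, metadata and the default slop reservation (~3%):

```python
# Rough usable-capacity estimate for a small RAIDZ pool hanging off one of
# those 6-port M.2 SATA adapters (illustrative numbers only).

def raidz_usable_tb(disks: int, disk_tb: float, parity: int) -> float:
    """Approximate usable capacity: total capacity minus parity drives."""
    return (disks - parity) * disk_tb

# Six 8 TB drives on a 6-port adapter:
print(raidz_usable_tb(6, 8, parity=1))  # RAIDZ1 -> ~40 TB
print(raidz_usable_tb(6, 8, parity=2))  # RAIDZ2 -> ~32 TB
```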

Likewise, 10Gbit NICs based on Aquantia/Marvell AQC chips come in an M.2 form factor, can be used via an M.2 to PCIe x4 cable adapter, or can sit on top of a bifurcation adapter in a low-profile configuration: I use all three variants a lot.

The latter, via a bifurcation combo board, splits the single x16 slot you find on a lot of Mini-ITX boards into two M.2 slots and one x8 slot on top, which can then hold low-profile NICs or hardware RAID controllers, or even connect a big dGPU via a riser cable. On my Minisforum BD790i, a mobile-on-desktop Mini-ITX board with 16 Zen 4 CPU cores (Ryzen 9 7945HX) that lacks any SATA ports, I then use that ASMedia M.2 adapter to get 6 SATA ports on the outward-facing side (plenty of room) of that bifurcation adapter, while the inward-facing (tight) one gets another NVMe drive.

I’m still somewhat angry that nobody sells an AQC113 10Gbit NIC as PCIe 4.0 x1 (it’s always at least x2), but then those x1 slots have completely gone missing… At least you can get 5Gbit USB 3.2 NICs now that actually deliver >500 MByte/s, thanks to USB 10Gbit speeds.
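A quick back-of-the-envelope on why the 10Gbit USB link matters for those 5GbE adapters; the encoding and framing overhead figures below are my own numbers, not from the post:

```python
# Why a 5GbE USB NIC needs a 10 Gbit/s USB link to actually hit >500 MByte/s.

MB = 1e6  # decimal megabytes

# 5GBASE-T: 5 Gbit/s line rate, minus Ethernet framing overhead (~1500/1538)
eth_payload = 5e9 / 8 * (1500 / 1538)   # ~609 MB/s best case

# USB 3.2 Gen 1: 5 Gbit/s with 8b/10b encoding -> 500 MB/s raw, less after
# USB protocol overhead, so it would bottleneck a 5GbE adapter.
usb_gen1 = 5e9 * (8 / 10) / 8           # = 500 MB/s raw

# USB 3.2 Gen 2: 10 Gbit/s with 128b/132b encoding -> plenty of headroom.
usb_gen2 = 10e9 * (128 / 132) / 8       # ~1212 MB/s raw

print(f"5GbE payload ceiling : {eth_payload / MB:.0f} MB/s")
print(f"USB 5 Gbit raw       : {usb_gen1 / MB:.0f} MB/s (would be the bottleneck)")
print(f"USB 10 Gbit raw      : {usb_gen2 / MB:.0f} MB/s")
```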

I guess traces are getting much more difficult to do at PCIe 5.0 and above than cables, because keeping length and capacitance matched is tougher in 2D (traces) than in 3D (cables), yet cables and connectors have their own overhead and pitfalls.

We are simply hitting physical limits here, it isn’t all market segmentation or rip-off.

But there are also a lot of creative small vendors, especially from China, who find novel ways to make things work.

They don’t have to offer extra PCIe 5.0 slots. PCIe 5.0 is just for storage and/or the latest graphics cards, both of which are well served by the existing slots. Given that a lot of PCIe add-in cards (in CEM or M.2 form factor) are some generations behind, a solution where the motherboard splits PCIe 5.0 lanes out into additional lower-generation lanes (e.g., PCIe 3.0) is the path of least resistance; it’s at least a lot easier than waiting for every possible device somebody could want to be updated to PCIe 4.0/5.0. Those devices need some minimum bandwidth, but won’t bump up the PCIe generation just to reduce the number of (limited) lanes they occupy.
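To put numbers on that, here's a rough per-lane throughput sketch (line rate times encoding efficiency, with protocol and packet overhead ignored) showing how a Gen 5 x4 link carries about the same payload bandwidth as a Gen 3 x16:

```python
# Approximate usable PCIe bandwidth per lane, by generation.
GENS = {
    # gen: (transfer rate in GT/s, encoding efficiency)
    3: (8.0, 128 / 130),
    4: (16.0, 128 / 130),
    5: (32.0, 128 / 130),
}

def lane_gbps(gen: int) -> float:
    """Usable bandwidth of a single lane in GB/s (1 GB = 1e9 bytes)."""
    rate, eff = GENS[gen]
    return rate * eff / 8  # one bit per transfer, 8 bits per byte

for gen, width in [(5, 4), (4, 8), (3, 16)]:
    print(f"PCIe {gen}.0 x{width}: {lane_gbps(gen) * width:.1f} GB/s")
# All three print ~15.8 GB/s: the same downstream bandwidth can be fanned out
# as many slower lanes instead of a few fast ones.
```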

Switches like the Broadcom PCIe 5.0 switch chips don’t have to add perceptible latency either, unlike AMD’s Promontory series of chipsets. I’ve benchmarked Optane SSDs connected to a HighPoint Rocket 1628A and found nothing that looked like a latency or bandwidth drop.

The problem is the price of the switches, though. The cheapest is about $325 on Mouser (and they are overpriced), which probably means a motherboard using one would see a significant bump in price. This probably has to be solved by a better chipset instead, one which is native PCIe 5.0 and isn’t so stingy with additional PCIe lanes.


I wonder how you could have only the GPU?
I have at least a sound card and a NIC to add, and they are a must for me.
We all know integrated sound cards can’t compete with a dedicated sound card even nowadays, and the NICs they put on motherboards are mostly 1 GbE, while I need 100 GbE for faster bandwidth to my NAS.


If you were to guess, what percentage of consumer motherboard purchasers consider a dedicated sound card and a 100G NIC a must?

This forum is a peculiar user base, and for us there are workstation- and server-oriented product lines we can jump on. Though we do resent the price.


Look at the ASRock B850 LiveMixer: it has two x4 (x16-length) Gen 4 slots, admittedly fed by a PLX splitter.

Since the switching is between M2_3 and PCIE3, both the PCIE2 and PCIE3 x4 slots can be used simultaneously, albeit within the uplink’s x4 constraint.
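As a rough sanity check on what that x4 constraint means in bandwidth terms (standard Gen 4 figures, nothing measured on this particular board):

```python
# Ceiling math for a shared Gen 4 x4 uplink.
GEN4_LANE_GBPS = 16.0 * (128 / 130) / 8   # ~1.97 GB/s per lane

uplink = 4 * GEN4_LANE_GBPS               # ~7.9 GB/s total for everything behind it
print(f"Shared uplink ceiling: {uplink:.1f} GB/s")
# Both x4 slots can be populated and active at once, but when they're busy at
# the same time they split that ~7.9 GB/s between them.
```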

100% agree.

A dedicated sound card isn’t necessary anymore today. Everything from sound over BT up to networking is covered by the board itself, and that stuff is plenty for 99% of all buyers.

We, among other (small) communities, just remember the good old days with all the slots… but we often forget the fact that we paid as much for a full-featured desktop back then as we do now for a server.
If I account for inflation, systems with 6x PCIe slots and all the lanes cost as much as desktops did 20 years ago. Back then, 2-socket boards were for workstations (or servers), and 4-socket boards were the big-ass server boards.

Today, we have a new budget tier: the consumer boards. €250 with built-in SATA, sound, WiFi/BT, NIC(s) and RAID? I wish I’d had that 20 years ago, when I paid 6000 for a fully kitted-out gaming machine with SCSI controller, drives and a Voodoo card. That’s probably 10k+ by today’s standards. You really have to push it to get past 10k on a single-GPU Threadripper/Xeon system.
A server board with 7x PCIe slots, full CPU lanes, MCIO, IPMI and shit is like 700, more or less. That’s cheaper than desktop boards “back in the day” with inflation in the mix.

Don’t mistake the modern “budget tier” for the desktop/HEDT platforms of the past. The old desktop/HEDT is gone; it was merged into workstation/server to account for the remnants of the “old desktop class”, because 95%+ of the people using desktops 20 years ago are now using laptops, phones, consoles or mini PCs.

That’s why everything is a “Gaming PC”…it’s the largest share of the remaining 5%.

You don’t make or sell hardware for a couple of nerds; you go where the most people are. And most people don’t need or want a dedicated sound card or 100GbE. They don’t need PCIe slots, and they don’t know about U.2/U.3 NVMe either.

The acrobatic move in this community is to still get away with the budget tier through creative thinking, knowledge and planning. But you can’t always get away with it, because we’re not the target audience. And that’s why our servers have Bluetooth and HDMI/DP, yet not enough slots for 25/100GbE.

I get 9.8-10.5k checking a few different inflation indices.
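For what it's worth, that range falls straight out of simple compounding; here's a sketch with assumed average annual rates (the exact index values are my assumption, not pulled from a specific index):

```python
# Compound-inflation sketch behind the "6000 then ~ 10k now" figure.
def inflate(amount: float, annual_rate: float, years: int) -> float:
    """Grow an amount by a fixed annual rate, compounded yearly."""
    return amount * (1 + annual_rate) ** years

for rate in (0.025, 0.026, 0.028):
    print(f"{rate:.1%}: {inflate(6000, rate, 20):,.0f}")
# Prints roughly 9,800 to 10,400, which lines up with the 9.8-10.5k range above.
```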