Why are PCI Express backplanes so expensive/unpopular?

Looking at the sometimes dumb selection/arrangement of PCI Express connectivity on motherboards, I’m wondering why we don’t have mainstream PCIe backplanes.

It would be cool to purchase any simple ITX motherboard plus an additional backplane that hosts as many slots as you need, with whatever selection of M.2 or other connectivity you care about. That would be a modular design in the sense that you could migrate the backplane over to the next motherboard, just like you typically can with a PSU.

However, the server backplanes that are available cost $1500+ USD even for Gen 3 speeds, which is too much for consumers (not to mention the custom case that would be required).

For instance, I’d really like to have 1 PCIe x8 Gen 5 slot + 2 PCIe x8 Gen 4 slots + 3 PCIe x4 Gen 3 slots on such a backplane at the same time.
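To put rough numbers on that wishlist, here's a quick sketch; the per-lane throughput values are the standard PCIe 3.0/4.0/5.0 figures, and the slot mix is just the example above:

```python
# Rough tally of the example backplane wishlist above.
# Per-lane throughput in GB/s (x1, after 128b/130b encoding) for PCIe 3.0/4.0/5.0.
GBPS_PER_LANE = {3: 0.985, 4: 1.969, 5: 3.938}

# (generation, lanes per slot, number of slots) -- the wishlist from this post
slots = [(5, 8, 1), (4, 8, 2), (3, 4, 3)]

total_lanes = sum(lanes * count for _, lanes, count in slots)
total_gbps = sum(GBPS_PER_LANE[gen] * lanes * count for gen, lanes, count in slots)

print(f"slot lanes: {total_lanes}")                    # 8 + 16 + 12 = 36
print(f"aggregate bandwidth: {total_gbps:.0f} GB/s")   # ~75 GB/s if everything were saturated
```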


Well, there you go: you want Gen 4 PCIe for cheap. You can’t.

The problem is signal integrity. With each PCIe generation, the signal integrity requirements get dramatically tighter, to the point that Gen 4 risers are so much harder to make than Gen 3 ones that they are either rare or very expensive.

I understand that, but even motherboards with slot counts approaching those backplanes are cheaper, and they have way more components and features on them.

So I don’t think it is just the raw components that are so expensive; my guess is they are aimed at enterprise customers or made in very low volume. But I don’t get why more people wouldn’t want something like this.

It basically allows you to reduce waste: you purchase, let’s say, a 10G NIC once and use it with any motherboard, instead of throwing a 10G-capable motherboard in the trash because it is already 10 years old and doesn’t support modern processors (I know working motherboards hold their demand better over time than CPUs do; it’s just an example).

In other words, a 44-lane PCIe bridge capable of operating at Gen5 speeds — AKA 35% of the I/O fabric of a Genoa EPYC, or 55% of a Sapphire Rapids Xeon. Plus the complete board and power routing to support it.
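As a rough sanity check on those percentages (assuming the 44 lanes mean the 36 requested slot lanes plus an x8 upstream link, which is my guess, and using the commonly quoted per-socket lane counts):

```python
# Sanity check on the "44-lane bridge" figure.
# Assumption: 36 downstream slot lanes (8 + 2*8 + 3*4) plus a hypothetical x8 upstream link.
bridge_lanes = 8 + 2 * 8 + 3 * 4 + 8   # = 44

# Commonly quoted per-socket PCIe 5.0 lane counts.
genoa_epyc_lanes = 128
sapphire_rapids_lanes = 80

print(f"{bridge_lanes / genoa_epyc_lanes:.0%} of a Genoa EPYC")                 # ~34%, roughly the 35% above
print(f"{bridge_lanes / sapphire_rapids_lanes:.0%} of a Sapphire Rapids Xeon")  # 55%
```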

That aside, there’s a simpler physical incentive: backplanes only make engineering sense with form factors that require them in order to avoid other significant costs. For typical PCs, you’d just get an ATX board instead of ITX, because there aren’t any meaningful physical constraints.


That assumes no switches or PLX chips are used and that all slots operate at Gen 5 speed. I’m totally fine with switches or PLX chips and lower speeds on some slots.
If chipsets on mainstream motherboards pack all that and more for cheap, then surely it is possible to design a standalone backplane with comparable features and price.

If you look at mainstream boards lately, they often provide a maximum of 2 slots wider than x1, and even then manufacturers like Gigabyte use a ridiculous 4-slot spacing between the physical x16 slots (I use water cooling, so for me that is wasted space), and only ASRock regularly offers open-ended x1 slots.

So in every generation there are only 1-2 boards that fit my use case in terms of the devices I intend to connect (2 or sometimes 3 GPUs, a 10G NIC if the motherboard doesn’t have one, a separate USB controller for VMs, etc.).

So going ATX or E-ATX doesn’t solve the problem for me.

Design work is expensive and it’s not a big market - or at the very least, the market is untested. It makes no economic sense to gamble $10-50 million over 2-3 years (my estimate) on development if you can’t earn $100-500 million over the course of the next 2-5 years.

Simple example: I was looking for AM4 motherboards that can do 3*x4 PCIe 3.0 (at least) and 3*x1 PCIe 3.0.
There are just a few non-X570 boards that can. The CPU alone is capable of that, but very few boards route it that way.

Same situation with s1200 motherboards too. And I don’t think I’m asking for a whole lot.

I know I can extract PCIe x4 from NGFF slots, but I’d need a case with 15 slots to fit all the adapters.

You need an X-series board to do it for sure without PLX chips; it’s just how the lanes come off the CPU.

Not really, chipset + CPU provide plenty of lanes without PLX for what I’m looking for.
The MSI Z170A SLI PLUS supports x8+x8+x4+3*x1, and it is a budget Z170 board from years ago.

And the chipset already behaves kind of like a PLX chip, and it doesn’t cost a whole lot judging by the prices of budget boards.

Where are you getting the extra lanes?

Boards have a minimum of 2 NVMe slots pretty much (which will mostly be used).
Then you can split the GPU lanes, and that’s it. There aren’t lanes to spare without extra PLX chips.

20 PCIe 4.0 lanes from the CPU + 16 more PCIe 4.0 lanes from the chipset (in the case of X570). Plenty, IMHO.
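To spell that out, a rough fit check under the usual X570 topology; the device list is hypothetical, based on what I said I’d want to connect:

```python
# Rough fit check for the X570 lane budget mentioned above.
# Assumption: 20 usable CPU lanes (x16 + x4 NVMe) plus up to 16 general-purpose
# lanes behind the chipset, which itself hangs off a PCIe 4.0 x4 uplink.
cpu_lanes = 20
chipset_lanes = 16

# Hypothetical device list based on my use case (lanes each device wants).
devices = {
    "GPU 1": 8,
    "GPU 2": 8,
    "10G NIC": 4,
    "USB controller for VMs": 4,
    "NVMe SSD 1": 4,
    "NVMe SSD 2": 4,
}

wanted = sum(devices.values())
available = cpu_lanes + chipset_lanes
print(f"wanted: {wanted} lanes, available: {available}")
# 32 wanted vs 36 available -- fits without any PLX switch,
# as long as the board actually routes the lanes to slots.
```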

Also, I had an ASRock Z370 Fatal1ty Professional Gaming i7 (stupid name, I know) in the past; it was unique in that era due to its ability to split the CPU lanes into x8+x4+x4, which I benefited from.

You need PLX to split into more than x8/x8, like the x8/x4/x4 you mention. And all the other lanes are accounted for, unless you do x4/x4/x4/x4 all in one slot (easy, since there’s no extra circuit design).

No, a PLX chip is only necessary if you want to run x16+x16 from a single x16. Splitting x16 into x8+x8 doesn’t require PLX chips; see MSI motherboard manuals, they are the only ones providing block diagrams that show when there is a PLX chip on board and when there isn’t (none of the recent mainstream boards have one).

I didn’t say x8/x8 requires PLX; I’m saying if you want more than x8/x8 you need PLX. And if you are relying on the x4 link of the chipset, you’re generally not going to have all that bandwidth available due to USB (and SATA if you use it, etc.).

(Excuse my bad quick wording.)

I wasn’t saying I need all the bandwidth at the same time. Running a few PCIe 3.0 x4 devices off the chipset over a PCIe 4.0 x4 uplink should be totally fine.
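To illustrate, a back-of-the-envelope check with a hypothetical device mix behind an X570-style PCIe 4.0 x4 uplink:

```python
# Back-of-the-envelope oversubscription check for devices behind the chipset.
GBPS_PER_LANE = {3: 0.985, 4: 1.969}   # per-lane throughput in GB/s for PCIe 3.0 / 4.0

uplink_gbps = 4 * GBPS_PER_LANE[4]     # X570-style PCIe 4.0 x4 uplink, ~7.9 GB/s

# Hypothetical devices hanging off the chipset:
# (generation, lanes, rough sustained load in GB/s)
downstream = {
    "10G NIC": (3, 4, 1.25),           # 10 Gb/s line rate ~= 1.25 GB/s
    "USB controller": (3, 4, 1.0),
    "SATA SSDs": (3, 4, 0.55),
}

link_peak = sum(GBPS_PER_LANE[gen] * lanes for gen, lanes, _ in downstream.values())
typical = sum(load for _, _, load in downstream.values())

print(f"uplink: {uplink_gbps:.1f} GB/s")
print(f"sum of downstream link peaks: {link_peak:.1f} GB/s (oversubscribed on paper)")
print(f"typical concurrent load: {typical:.1f} GB/s (well under the uplink)")
```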

My point is, the choices motherboard manufacturers are making are weird (for my use case specifically). Having more slots with general-purpose lanes, or some kind of PCI Express backplane attached instead, would provide more flexibility and wouldn’t force you to throw the backplane away every time you upgrade the CPU along with the most basic motherboard.

I think we tend to forget we are the rare users. Most people buy RGB, have 1 GPU, and just SSDs.


IIRC there are actually two types of PCIe switch chips: multiplexers and switches. The former can produce more downstream lanes than upstream (like a chipset does); the latter simply routes lanes like a railroad switch. A PCIe switch chip is always needed if you have an optional x16 slot or two x8 slots. I believe PLX makes both kinds of chips, hence some of the confusion above.

So speaking of boards, the Aorus X670E Extreme can give you four PCIe slots (two x8, one x4, one 3.0 x2) along with four M.2 slots (directly off the CPU). With a heap of M.2-to-PCIe risers, you’d probably have more PCIe slots than you care for.

Or you could just buy Threadripper (or Fishhawk Falls in 6 months?) to give you the PCIe you need.


TR = slower to get the latest-gen cores, plus a major upcharge (unless you need the memory bandwidth).


Risers would require an exotic case to install the cards properly (in case they have external connectors, like NICs and GPUs do).

Ironically, I was looking into PCIe expansion chassis; they exist, but at those prices you might as well buy TR.