Why are PCI Express backplanes so expensive/unpopular?

Sadly, this.

I wanted an X570 motherboard that could handle a GPU (4.0 x16), SFP+ NIC (3.0 x8), and a USB card (3.0 x4).

Most motherboards only had 2 x16 slots and 1-2 closed-end x1 slots unless I wanted to spend $300+.

There’s a reason ITX has become so popular - most people only need one PCIe slot these days, especially since more motherboards now have integrated wifi.

2 Likes

The only X570 board I know that offers 3x x16 slots and can operate all of them at x8 Gen4 electrically is the ASUS Pro WS X570-ACE. Demand was so high that ASUS stopped selling them a while ago (no doubt in favor of the more expensive Threadripper boards).
I just picked up a third one used (for <$200) because IMHO they’re the price/performance leader with reasonably high I/O options.
I operate them either headless with 5x Gen4 NVMe + 1x Gen3 U.2, or with one x8 Gen4 GPU, 3x Gen4 NVMe + 1x Gen3 U.2. I use the chipset PCIe slot for Mellanox 56Gb fibre NICs.

WS boards from ASUS are really good and very expensive. I look at them every time, but I’ve never bought one.

At this point the “industry” has decided to fleece enthusiasts and businesses that need “workstation”-type computers.
After Intel abandoned their HEDT line, AMD stepped in with their enticing but expensive Threadrippers.

Maybe Intel will revive the HEDT in an attempt to recapture a market lost to AMD.

Just to be clear - at the end of 2022, HEDT should be defined as >40 PCIe lanes (PCIe Gen4 is fine) from the CPU and 4+ channel memory. Cores are already available galore and can barely be fed with the I/O available on consumer boards.

Heck - I’d be perfectly happy if a board manufacturer took one of their $200 PCIe 4.0 / DDR4 boards and added a PLX chip, giving a board with 4+ x16 slots that can be bifurcated, at a $500-$600 price point (which is well below the current AM5 X670E boards).

lol AMD doesn’t even really have it, it has a pro workstation segment, not HEDT

Yep - I tried to imply that this is where enthusiasts have to go to move up from the consumer platform. It seems to be working, judging from the number of threads on this forum related to Threadripper boards.

1 Like

I think you missed my point: the chipsets do not. For PCIe Gen5, at present the only source is the Intel and AMD CPU SoCs. The chipsets exist only on the desktop platforms, and currently neither company supports Gen5 on them at all, even as an uplink. For the platforms that have a number of PCIe lanes comparable to the proposed backplane, there is no chipset; the boards simply route lanes directly from the SoCs.

The wishlist for this backplane includes a Gen5 x16 uplink and a Gen5 x8 device slot, which means the multiplexing switch has to be clocked at and capable of handling buffering for lanes at that rate. So with the total number of lanes involved, and the internal fabric necessary to handle packet switching at that rate, it’s basically the same level of engineering as a large portion of the I/O switch included on the not-yet-released server SoCs from both Intel and AMD.

My comment does assume the backplane has a PCIe switch, because that’s the only way it can practically work. It’s just that PCIe switches are not less complex or cheaper devices — the SoCs have them on package because that’s the most cost effective way to get them at all, in the current tech maturity cycle. It’s actually almost inverted: the SoCs are massive I/O engines with compute units attached. Even Broadcom’s (formerly PLX) PCIe switch options have an ARM CPU on package; the Gen5 packages are request-a-quote, the 50-lane Gen4 package is ~$400 in bulk. The chipsets from both vendors use prior generation I/O designs from their own SoCs, and have a fraction of the lane count.

You may be underestimating what’s presently required to transport 1 terabit per second across 64 tiny traces, all in perfect synchronization, and package the data for some other tiny traces, many of which may not be in sync at all. The current chipsets only aim for ~128 gigabit/s.
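To put rough numbers on that, here's a minimal sketch; the 128b/130b encoding factor and the Gen4 x4 chipset uplink width are my assumptions, not figures from the post above:

```python
# Rough PCIe throughput arithmetic (Gen3+ uses 128b/130b line coding).
GT_PER_LANE = {3: 8.0, 4: 16.0, 5: 32.0}  # gigatransfers per second, per lane

def pcie_gbit_per_s(gen: int, lanes: int, bidirectional: bool = True) -> float:
    """Approximate usable throughput in gigabits per second."""
    per_direction = GT_PER_LANE[gen] * lanes * (128 / 130)
    return per_direction * (2 if bidirectional else 1)

# Gen5 x16: 16 lanes * 2 differential pairs * 2 wires = 64 traces,
# carrying roughly a terabit per second in aggregate.
print(f"Gen5 x16: {pcie_gbit_per_s(5, 16):.0f} Gbit/s")  # ~1008

# Assumed desktop chipset uplink today (Gen4 x4): ~128 Gbit/s aggregate.
print(f"Gen4 x4:  {pcie_gbit_per_s(4, 4):.0f} Gbit/s")   # ~126
```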

Going ATX solves the physical problem, eliminating the need for a backplane entirely. It just also requires a platform designed for more PCIe connectivity, which today is generally HEDT, workstation or server (and doesn’t exist yet for PCIe5).

1 Like

ITX + backplane can be a good alternative.

You can pick which PCIe generation of backplane to use. Gen3 is fine for most people.
If you ever upgrade the PC (change the motherboard), the backplane can be reused, so I would buy one even if it is expensive. Also, since the backplane can be hosted in another case, it also solves some CPU+GPU joint cooling issues.

I have a bifurcation backplane deployed (x16 split to x4/x4/x4/x4).

But currently PCIe switch backplanes are hard to find, and you’d have to design one from scratch with community support.

Agreed. I think, to improve reusability, the motherboard should actually only have PCIe slots, no chipset. Take the 28 PCIe lanes currently available from Zen 4:

x8: the user can pick which chipset to connect to these lanes; it would be a module.

x4 + x16: the user can also pick what kind of PCIe layout they want by choosing a backplane.

With the advent of silicon photonics, those backplanes should communicate with the CPU (the SoC, actually) over fibre, reducing power; there would also be no signal integrity concerns.
Before that, a Gen4 signal can already run over 1 meter of SFF-8654 in some PCIe extenders. A PLX chip with 8+16 lanes is about $100 retail, so the end product could be around $150. If there is enough interest, we could pull together a community effort to make an affordable (but still expensive) backplane.

I think $150 per unit is an underestimate; the BOM alone would probably cost more than that.

Let’s say 16 lanes of PCIe 4.0 is what needs to be broken out (~32 GB/s per direction).

What you described is an ATX board with a small ARM core for management, a PLX chip, some kind of “transceiver plug” (SFF-8654 or perhaps even a 400Gbps QSFP-DD plug), and some kind of power delivery system for each card, plus a similar PCIe card to sit in the host system and stuff the lanes into a cable so data can travel into the other box.

Assuming there’s a single-chip 14nm solution available off the shelf for the large ATX daughterboard (are there 28nm PCIe 4.0 libraries available? Should we go with 10nm or smaller?), and assuming you have folks who do motherboard design currently sitting around bored (spoiler: no such thing, the chip shortage has everyone scrambling for alternative revisions on top of their usual work), you’d end up with:

  1. A probably very large, probably 6-layer PCB
  2. A small, fancy PCB for the host
  3. Connectors for PCIe, power, “network”
  4. Some kind of VRM solution
  5. Magical controller chips
  6. A cable to link the two boxes

With 2020 pricing and free labor, maybe $250 on the lower end.
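To make that arithmetic concrete, here's a minimal BOM sketch; the switch and cable prices are the figures quoted elsewhere in this thread, while everything marked “guess” is a placeholder of mine, not a real quote:

```python
# Illustrative BOM sketch for the two-box backplane described above.
# "guess" items are placeholders for illustration, not quotes.
def bom_total(switch_usd: float) -> float:
    items = {
        "PCIe switch chip": switch_usd,
        "1 m SFF-8654 / QSFP-DD cable": 75,                 # thread: $50-$100
        "Large 6-layer (or more) backplane PCB": 60,        # guess
        "Host-side adapter card": 30,                       # guess
        "VRM, slot connectors, management MCU, misc": 50,   # guess
    }
    return sum(items.values())

# Switch prices mentioned in the thread: ~$100 retail for a 24-lane PLX,
# ~$400 in bulk for a 50-lane Gen4 Broadcom part.
print(f"With a ~$100 switch: ~${bom_total(100):.0f}")  # ~$315
print(f"With a ~$400 switch: ~${bom_total(400):.0f}")  # ~$615
```

Either way, a $150 finished product looks optimistic before assembly, testing, and margin are even counted.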

1 Like

I don’t think I even care whether PCIe 5.0 is supported right now; even the latest GPUs from Nvidia are PCIe 4.0. That was just an example: one would be able to buy whichever backplane they need. Most of my cards are PCIe 3.0 or 2.0 anyway. Actually, from my point of view PCIe 5.0 made things worse: because it is harder to design for and more expensive, motherboard manufacturers went with simpler boards compared to previous generations.

I disagree; something like X570 has plenty of lanes for many use cases and would generally satisfy my needs. AMD already uses pairs of chipsets for X670E, so why not just deploy them on a separate backplane instead?

That platform doesn’t exist for consumers anymore. Current-gen mainstream CPUs overtake HEDT platforms that are 1-2 generations old (and more so on the Intel side). So HEDT is dead, and has been for a few years in fact.
ATX theoretically has enough slots, but none of the boards you find at retail support something as basic as five PCIe x4 slots. There are certainly enough CPU + chipset lanes to do that, but it is not worth designing the whole motherboard around such a niche use case. With a backplane containing less logic, concentrated in a few chips, it must be much more cost effective.
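For what it's worth, here's a quick sanity check on the lane math, assuming the usual X570 allocation (x16 + x4 NVMe usable from the CPU, a x4 chipset uplink, 16 general-purpose Gen4 lanes from the chipset); the exact split varies by board, so treat it as a sketch:

```python
# Rough X570 platform lane budget (all PCIe Gen4), assuming the usual split.
cpu_slot_lanes = 16      # the x16 graphics slot(s)
cpu_nvme_lanes = 4       # CPU-attached M.2
chipset_lanes = 16       # general-purpose lanes hanging off the X570 chipset
# (plus a x4 CPU-to-chipset uplink that the chipset lanes share)

available_for_slots = cpu_slot_lanes + cpu_nvme_lanes + chipset_lanes
needed = 5 * 4           # five x4 slots

print(f"Lanes available for slots/devices: {available_for_slots}")  # 36
print(f"Lanes needed for five x4 slots:    {needed}")                # 20
# Electrically the lanes exist; the chipset-fed slots would just share
# the x4 uplink's bandwidth rather than each getting a dedicated path.
```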

1 Like

I think something like that for $400-500 USD that you don’t need to throw away for years would be reasonable, and at scale it could probably come down to an actual $250 USD.

A fiber optic connection is nice, but arguably not necessary in a more traditional case design, so one could probably get away with a good-quality riser instead.

I wasn’t suggesting optics, but rather 400Gbps-optics-compatible connectors, for flexibility of deployment and to expand the potential market. … A 1m QSFP-DD cable is $50-$100… 3m is around $200. If you wanted to put the thing in another room and/or maybe daisy-chain these into a fabric of some kind (e.g. for ML stuff), you could get off-the-shelf 400Gbps fiber transceivers and some fiber/glass cable and you’re done.

So you think there’d be a market for these, e.g. you’d pay (or would find someone to pay) for 100,000 units a year in advance, with potential to go up to a million units within a year?

(Lower unit counts are feasible, but you need to spend millions of $ in tooling regardless of the volume… mainly because of the magic chip, which would probably need to go through a couple of iterations with some testing in between)

I for sure wasn’t talking about moving hardware to another room; I’m interested in prosumer hardware that enthusiasts can buy for a reasonable price.
A 400Gbps fiber optic connection to another room is nice, but it’s something very few people will need; you’re more likely to use fiber optic Thunderbolt to move the whole machine away if you want to make your room quiet/cool.

Personally, I just don’t think the market is large enough…

… a million units means roughly 1 in 10k people on the planet would have this, maybe 1 in 5k if you assume half of the world’s population is struggling with basic infrastructure.

I just don’t see 1 in 5000 people being interested in a relatively niche product such as that, where I live.


The other point is that there are already products on the market that do what you want, kind of… You can buy single-socket EPYC machines that have a few slots or that have a bajillion slots. The motherboard manufacturer is basically selling a PCIe slot extension when going from a mini-ITX EPYC board to an E-ATX board.

1 Like

EPYC processors are not comparable to their desktop counterparts: insane prices and half the per-core clock frequency. Threadripper chips were closer, but lagged behind in terms of architecture before being effectively discontinued.

It is not like no one makes what I need; it is just bundled into one package for some reason instead of being two independent products.

You asked about the cost of multiplexing 36 lanes to an x16 uplink. X570 multiplexes 16 to x4, which by the numbers is a fraction of the lane count. What is it you disagree with?

My point was simply that it’s not the same scale, and therefore not a direct guide to expected cost.
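As a rough illustration of the scale difference (assuming the 36 lanes are all Gen5, and counting X570's 16 downstream lanes plus its x4 uplink as all Gen4, which is my reading of the posts above):

```python
# Aggregate raw bandwidth each switch fabric has to move per direction,
# ignoring encoding overhead; lane counts follow the discussion above.
GT_PER_LANE = {4: 16, 5: 32}  # GT/s per lane

backplane_gbit = 36 * GT_PER_LANE[5]     # proposed backplane: 36 Gen5 lanes
x570_gbit = (16 + 4) * GT_PER_LANE[4]    # X570: 16 downstream + x4 uplink, Gen4

print(f"Proposed backplane fabric: ~{backplane_gbit} Gbit/s per direction")  # 1152
print(f"X570 chipset fabric:       ~{x570_gbit} Gbit/s per direction")       # 320
print(f"Ratio: ~{backplane_gbit / x570_gbit:.1f}x")                          # ~3.6x
```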

Must it? EPYC boards are basically just big I/O adapters. No intermediate connectors or cables, so the trace lengths are known and the signal maintenance is straightforward. All the PCIe logic is covered by the SoC, so no components need to be added to the board. This is about as simple as it gets, especially at scale; why would a backplane with its additional engineering and component requirements be more cost effective?

Some of your recent comments seem to be modifying your own use cases and lamenting that they aren’t directly met, so I recognize that these aren’t the answers you want, but I’m responding to what you asked.

1 Like

Stuff like this does exist…
Linkreal GPU Expansion Solution PCIe 3.0 4x U.2 NVMe and Expansion Slot Card Motherboard require PCIe Bifurcation Support - AliExpress

One x16 to two x8-electrical/x16-physical slots using bifurcation, for relatively reasonable pricing…

PCIe 4.0 costs a bit more:
Linkreal GPU Expansion Solution PCIe 4.0 Retimer and PCIe Slot adapter with SFF-8654 Cables, PCIe Bifurcation required - AliExpress

2 Likes

Yeah, something like that, optionally stepping some PCIe 4.0 lanes down to more PCIe 3.0 lanes.
And then the same for PCIe 5.0 in the future.
And, ideally, a standardized form factor.

It’s not so much the design work as it is the prototyping and offsite circuit board manufacturing.
Those costs are highly prohibitive.

If your firm is large enough to mass-produce circuit boards it’s a different story, as you already have the overhead to do it.

In the radio hobby, for instance, many good designs for low-cost gear fall by the wayside because getting the circuit boards made costs more than most people are willing to spend.
For example, although many boards are individually not that expensive, it’s the minimum order requirements that are the deal breaker.
Example: this radio kit costs $45.00 (not much, is it?).
But if a seller or kitter has to purchase a minimum of $5,000.00 or $10,000.00 worth of boards, that’s quite a roadblock to get past.

1 Like