Convert 16x PCIe 5.0 lanes to more PCIe 4.0 lanes? Is it possible?

I’ve been a software engineer for over 20 years and I’m quite knowledgeable on the hardware side, but I’ve never needed a complete understanding of PCIe lanes until I got into AI and its hunger for GPU power. (LOL) So please forgive me if this question sounds crazy.

Is there any way to convert a total of 16 PCIe 5.0 lanes into 32 PCIe 4.0 lanes using PCIe switches or something of that nature? What I’m trying to do is build an AI system with 4 GPUs on AMD’s X670 platform.

I would think it’s possible in some way, since the maximum per-lane speed of PCIe 5.0 is double that of PCIe 4.0. That way each of the 4 GPUs could run with 8 lanes. I’m trying to avoid the expense of Threadripper if I can, since what I’m really after is PCIe lanes, not more memory or more cores. Even a $1,000 add-in card would be much cheaper than a whole new Threadripper system.
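As a quick sanity check of that doubling argument, here's the raw napkin math in a few lines of Python (per-lane transfer rates from the PCIe specs: roughly 16 GT/s for Gen 4, 32 GT/s for Gen 5; the 4-GPU/x8 split is the layout described above):

```python
# Per-lane raw transfer rates (GT/s) from the PCIe 4.0 and 5.0 specs.
GEN4_GTS = 16
GEN5_GTS = 32

# Upstream: 16 PCIe 5.0 lanes from the CPU.
host_bandwidth = 16 * GEN5_GTS

# Downstream: 4 GPUs, each on a PCIe 4.0 x8 link behind a hypothetical switch.
gpu_bandwidth = 4 * 8 * GEN4_GTS

print(host_bandwidth, gpu_bandwidth)  # 512 512 -- the raw bandwidth matches
```

So purely on bandwidth the idea checks out; the catch, as the replies below this get into, is whether hardware that does the generation translation exists at a sane price.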

1 Like

It’s possible, but it will almost certainly cost more than just buying a WRX90/W790 system, probably more than buying the system twice over.
Also, adding PCIe switching will likely drive the need for software and driver changes, because switches aren’t as transparent as marketing would lead people to believe.

1 Like

You’d have to crawl datasheets for Broadcom PLX chips, but my gut feeling is that while the bandwidth is there, the generation translation from Gen 5 to Gen 4 is infeasible for economic reasons.
If you “just” want PCIe switching, the Broadcom PEX88000 series exists; then you just need to either sink thousands into making your own PCB or find a server part that has one on it.
(just the chip is 300 €/$-ish)

This.
I had interesting issues with an audio-interface card because of a PCIe-to-PCI bridge chip. That was fun :upside_down_face: to diagnose!

1 Like

Questions like these have been around since the first PCIe 5.0-capable consumer motherboards came out. This pretty much sums it up:

To illustrate the economic disincentive, take PCIe 4.0 as an example, with finished products like the Adaptec HBA Ultra 1200p-32i ($820) or the Broadcom P411W-32P ($657). I take the lower of the two HBAs’ prices as representative of a basic PCIe 4.0 switch AIC, which makes sense as the latter is also less embellished with non-switching functionality. Then factor in the cable costs: the guaranteed-good stuff hovers around $100 each. With four cables, your switching solution alone comes out to $1,057 before you’ve bought anything else.

That amount nearly buys you an ASRock C621A WS ($700) with an Intel Xeon 4309Y ($600). Admittedly, it’s not a fair comparison on CPU benchmarks between a run-of-the-mill consumer CPU and this thing, but an AI system relies on the GPUs and the PCIe bandwidth to them, and this platform has plenty more of that (64 PCIe 4.0 lanes in total, spread over 7 slots, all of which can physically accommodate x16 cards). For a PCIe 5.0 consumer system with a PCIe switch and cable set-up to be worth it, the savings on the system components would have to exceed the cost of the switch and cables. The $1,057 estimated above is a theoretical price floor for a PCIe 5.0 switch with cables, since 4.0 < 5.0 and nobody is going to price their latest and greatest below their previous-generation tech.
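For anyone who wants to redo the arithmetic, here's the same comparison as a tiny Python sketch (prices as quoted above; the four-cable count is my assumption, one per GPU):

```python
# PCIe 4.0 switch AICs quoted in the post; the cheaper one stands in for
# a basic switch card.
hba_prices = {"Adaptec HBA Ultra 1200p-32i": 820, "Broadcom P411W-32P": 657}

cable_cost = 100  # per cable, "the guaranteed good stuff"
num_cables = 4    # assumed: one cable per GPU

switch_solution = min(hba_prices.values()) + num_cables * cable_cost

# Server-platform alternative: ASRock C621A WS board + Intel Xeon 4309Y.
server_solution = 700 + 600

print(switch_solution, server_solution)  # 1057 1300
```

The switch-plus-cables route is already within ~$250 of a whole 64-lane server platform, and that's with Gen 4 switch pricing as the floor.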

tl;dr:

I just had to do the napkin math. :slightly_smiling_face:

1 Like

One of those OSS PCIe 5.0 switch backplanes would do it:

3 Likes

Thanks, everyone. I had no idea whether there was a way, and I’ve certainly learned a lot in this thread. I didn’t know some of these solutions existed, not to mention what they cost.

For $4,190 :face_holding_back_tears:

…and that’s one of the cheaper options

1 Like