First-time server parts owner and poster here. I scored a sweet deal on an Epyc 9004 series CPU and 4 x U.2 drives (PCIe 4.0) that was too good to pass up.
What options, if any, do I have to minimise the number of PCIe 5.0 lanes wasted whilst still giving these PCIe 4.0 drives the full bandwidth they need?
In my head (fantasy land) I could somehow adapt the MCIO PCIe 5.0 x8 into 4 x PCIe 4.0 MCIO ports. I also have a spare x16 5.0 slot.
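Rough arithmetic behind that fantasy, as a sketch (assuming each U.2 drive runs a x4 Gen 4 link, which I haven't stated above): the raw bandwidth does line up, the problem is purely the lane count.

```python
# Rough check of whether a PCIe 5.0 x8 link could, in principle, feed four
# PCIe 4.0 x4 drives. Assumes each U.2 drive uses a x4 link; real throughput
# is a bit lower once protocol overhead is included.
def lane_gbs(gt_per_s: float) -> float:
    """Approximate one-direction GB/s per lane with 128b/130b encoding."""
    return gt_per_s * (128 / 130) / 8

gen5_x8 = 8 * lane_gbs(32)          # ~31.5 GB/s of upstream bandwidth
one_gen4_drive = 4 * lane_gbs(16)   # ~7.9 GB/s per x4 Gen 4 drive
print(f"Gen5 x8: ~{gen5_x8:.1f} GB/s vs four Gen4 x4 drives: "
      f"~{4 * one_gen4_drive:.1f} GB/s")
# Bandwidth-wise it lines up (~31.5 GB/s both ways), but I'd still need
# something to turn 8 physical lanes into 16.
```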
No, no such thing is possible without an active element between the PCIe slot and the device itself (or a dedicated device integrated on the motherboard itself, usually present on very expensive workstation boards).
Mixing, switching and PCIe lane oversubscription can be handled by a PCIe switch, but they are very expensive.
I don't know if there are switches that can handle a different PCIe generation on the upstream vs downstream side, but PCIe 5.0 switches are even more expensive.
E.g. there are M.2 carrier boards that do not require bifurcation; they do so by multiplexing the drives' connections through a PCIe switch.
30-50 USD for a 4x4 PCIe 3.0 card that requires bifurcation, at least 150 USD for one with a PCIe 3.0 capable switch. Prices go up from there depending on PCIe generation support.
I don’t think there is any easy way to trade generation for lanes. You can expand on the current 5.0 lanes, but it is very, very expensive if it is “just for fun” - much cheaper to buy a system with more lanes to start with.
What you would need:
Can’t post links as my account is too new, but look for C-payne
PCI-E switch 1500€: products/pcie-gen5-mcio-switch-52-lane-mircochip-switchtec-pm50052
Host adapter 300€: /products/mcio-pcie-gen5-host-adapter-x16-retimer
Cables 6x 60€: /products/mcio-sff-ta-1016-8i-cable-pcie-gen5
Device adapters 2x 75€: /products/mcio-pcie-gen5-device-adapter-x8-x16
So we are talking 2k€++ (1500 + 300 + 6 × 60 + 2 × 75 ≈ 2300€).
Yep, the mobo (since it’s 9005 Epyc) has x4x4x4x4 bifurcation. I meant: do you know who actually sells a PCIe 5.0 compatible card? I can only find 4.0 ones like this:
Not for fun, but yeah, too much just for four drives right now. The board I got was intentionally chosen for MCIO to keep the possibility of using his stuff in the future (a year or so away, not right now).
If it actually supports PCIe 4.0 it will work fine with your 4.0 drives - PCIe is backwards/forwards compatible. Plugging a 4.0 device into a 5.0 port will just run at 4.0 speed.
Only 4x drive card I know that’s updated to 5.0 so far is the Asus Hyper M.2 x16 gen 5. Which is, well, Asus. And M.2.
That’s my understanding for LGA1700 processors, yes. Haven’t seen anything about that changing or not with Arrow Lake.
The AM5 (and AM4) boards I’ve built all offer x8/x8, x8/x4/x4, and x4/x4/x4/x4 on PEG. It’s an AGESA thing so all the mobo manufacturer has to do is not cripple the option out of the BIOS dropdown. It’d be easiest for AMD to use the same x16 PHY block on EPYC IO dies and same AGESA config code, so not surprised @Sprout found x4/x4/x4/x4 in the BIOS.
Absolutely, but he is after efficiency, and this scenario uses x16g4 worth of bandwidth while sacrificing another x16g4 worth of Gen 5 headroom at the same time.
In an ideal world all new AIC cards would be released as PCIe Gen 5 native with sanely sized lane widths.
x2g5 or x4g5 is sufficient bandwidth for a mainstream to higher-end GPU. Only top-tier monsters would need x8g5 or more.
The current top-of-the-line 5090’s perf loss going from x16g5 → x8g5 is near nonexistent, and from x16g5 → x4g5 it is minimal.
Ref:
Tech Jesus’ review of the subject also mentions lots of general design specifics for a newbie:
Gen 5 theoretically allows us to use more devices without bandwidth compromises, but neither motherboard designers nor AIC makers care at all. It’s probably cheaper to use x16g4 than x4g5 or x8g5.
Yeah. PCIe 4.0’s 16 GT/s is definitely in the “fast enough that things start to get fussy” zone, meaning more expensive board materials and manufacturing steps. I’m completely unsurprised by the issues with Blackwell not being able to hit 5.0’s 32 GT/s and downgrading to 4x16 links.
Also, people have been conditioned to expect first 3x16 and then 4x16 GPUs, so a current gen not going to 5x16 would likely provoke negative customer reactions and probably some amount of adverse review content. Even though the x16 hardly matters to performance, it’s mainly perceptions, not so much engineering, that make sales.
Full 5x16 utilization’s 126 GB/s (63 GB/s each way). DDR5 gear 4 overclocks top out around ~60 GB/s per channel and realistic 9004 bandwidth’s more like 25 GB/s per channel. So populating 9004’s 12 channels is theoretically enough for, oh, about 40 busy PCIe 5.0 lanes. With a dual channel desktop, a GPU that was actually running 5x16 would probably make the rest of the system suffer.
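A minimal sketch of that ~40-lane estimate, using the ~25 GB/s per channel figure above (a realistic estimate, not a spec number):

```python
# Sketch of the memory-bandwidth-vs-PCIe-lanes estimate above. The 25 GB/s
# per channel figure is the realistic estimate quoted, not a datasheet value.
channels = 12
per_channel_gbs = 25                      # realistic sustained GB/s per DDR5 channel
lane_gbs = 32 * (128 / 130) / 8           # ~3.94 GB/s per PCIe 5.0 lane, one direction

mem_bw = channels * per_channel_gbs       # ~300 GB/s total memory bandwidth
# A fully busy lane moves ~3.94 GB/s each way, ~7.9 GB/s combined.
busy_lanes = mem_bw / (2 * lane_gbs)
print(f"~{busy_lanes:.0f} fully busy PCIe 5.0 lanes")   # ~38, i.e. about 40
```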
I’d like to see this as well. IMO most of X870 and X870E’s disadvantages would be well addressed if the ASM4242 used a 5x2 link and the second CPU NVMe was a 5x2.
Cost would likely be an issue in good implementations. There’s not enough space to fit two 2280 NVMes in the first PCIe slot position, and signal regeneration’s needed to get PCIe 5.0 far enough past PEG for an NGFF heatsink to clear most dGPUs mechanically. But for a basic drive that’s not going to get loaded enough to run up thermally, putting it under the dGPU exhaust just below PEG would be ok.
You can go from MCIO x8 to two U.2 drives, but it won’t split out further just because it has the bandwidth, as others have said. You need an active device for that.
For an active device, this one takes PCIe 4.0 x8 lanes and converts them into either 16 or 24 SlimSAS lanes, which you can use to connect 4 or 6 NVMe drives directly:
Then cables like these to go from those SlimSAS ports to U.2 ports:
Or you can use those cards with an expander to further expand the available lanes and connect more drives, but they all share the max bandwidth, so you will cap things out by connecting more. Basically you can expand further to get more capacity, but not more speed, at that point.
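To put rough numbers on that oversubscription, a sketch assuming four x4 Gen 4 drives behind one of those PCIe 4.0 x8 cards (actual card and drive mixes will vary):

```python
# Rough oversubscription math for a switch card with a PCIe 4.0 x8 uplink
# feeding four Gen 4 x4 NVMe drives. Approximate numbers only.
gen4_lane_gbs = 16 * (128 / 130) / 8      # ~1.97 GB/s per lane, one direction

uplink = 8 * gen4_lane_gbs                # ~15.8 GB/s host <-> switch
downstream = 4 * 4 * gen4_lane_gbs        # ~31.5 GB/s of drive-side links

print(f"uplink ~{uplink:.1f} GB/s, drives ~{downstream:.1f} GB/s, "
      f"oversubscription {downstream / uplink:.1f}:1")
# Each drive still gets its full x4 link, but the aggregate through the card
# is capped at the uplink's ~15.8 GB/s.
```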
Question though. Those CPUs all have 128 PCIe 5.0 lanes, don’t they? So do you really need to be frugal with them? I can’t imagine running out of lanes on the MB. lol
I’m using the HighPoint PCIe 5.0 switch, which I’ve successfully used to break out PCIe 5.0 x8 to 32 lanes. Most of the SSDs attached are a mix of PCIe 3.0 and 4.0.
32 lanes, but it’s in groups of 4 lanes as you said.
ICYDOCK may be releasing their own cables which can allow for either direct-attach SSDs or their enclosure to take advantage of dual or single-lane operation. (Take this rumor with a grain of salt.)