Wait, where are the 48 GB DDR5 ECC UDIMMs?

Probably because an x8 from the X670 would still be bottlenecked by the x4 uplink from the first chipset to the CPU. The CPU has 28 physical lanes, and that's your hard, concurrent bottleneck.
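For reference, those 28 CPU lanes break down roughly as below (a sketch of the common desktop allocation; boards can wire the x4 blocks differently):

```python
# Typical AM5 CPU lane budget: 28 usable PCIe lanes total.
# The split below is the common desktop allocation; exact wiring varies by board.
am5_cpu_lanes = {
    "PEG slot (x16, often bifurcatable to x8/x8)": 16,
    "M.2 slot #1 (x4)": 4,
    "M.2 slot #2 / general purpose (x4)": 4,
    "chipset uplink (x4, runs at Gen 4)": 4,
}

total = sum(am5_cpu_lanes.values())
print(total)  # 28 -- everything downstream of the chipset shares that one x4 uplink
```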


Also, I’m not sure it’s even technically possible to merge lanes coming from different parts of the X670; afaik it’s a discrete switch in the PCIe topology, unlike the Intel chipsets.


This is my best understanding of how Promontory 21 works, though I don’t have anything rigorous to back it up.

Also, since Promontory 21 is PCIe 4.0, if a merged x8 was possible, it’d be limited to 3.0 x8 bandwidth by the 4.0 x4 uplink. 3.0 x8’s mainly useful for older server hardware, so unlikely to be something desktop mobo manufacturers would prioritize. Closest approximation (which isn’t very close) I know of is the Taichis, where ASRock puts down a PCIe switch to support Intel-style x8/x8 splitting of the CPU’s x16.
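The equivalence is easy to check from per-lane rates (Gen 3 and later all use 128b/130b encoding; the numbers below ignore protocol overhead):

```python
def pcie_bw_gbps(gen: int, lanes: int) -> float:
    """Peak payload bandwidth in GB/s, ignoring TLP/DLLP protocol overhead."""
    gt_per_lane = {3: 8, 4: 16, 5: 32}[gen]    # GT/s per lane
    encoding = 128 / 130                       # 128b/130b line code (Gen 3+)
    return gt_per_lane * encoding / 8 * lanes  # bits -> bytes

# A Gen 4 x4 uplink carries exactly as much as a Gen 3 x8 link:
print(round(pcie_bw_gbps(3, 8), 2))  # 7.88 GB/s
print(round(pcie_bw_gbps(4, 4), 2))  # 7.88 GB/s
```

So a merged 4.0 x8 behind the chipset could never move more data than a 3.0 x8 slot, exactly as described above.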

  • It is possible to get eight chipset-provided PCIe lanes to a single slot; this was done on exactly one motherboard model before (the ASUS Pro WS X570-ACE).

  • It’s correct that this slot is obviously limited by the CPU-to-chipset interface (currently PCIe Gen4 x4), but it is a great way to operate older PCIe Gen3 x8 AICs without halving their available bandwidth by giving them only 4 electrical PCIe lanes.

  • The two main M.2 slots on AM5 motherboards get their PCIe lanes directly from the CPU, not the chipset.


I’d imagine this was only possible because it didn’t use the fragmented Promontory chipset that all the newer boards use.

Wouldn’t be an issue; of course, you would have to reduce the number of other devices (additional M.2 slots, SATA, or USB 3.2 ports) coming from the secondary chipset unit accordingly.

X670E is basically the same as X570, but with two chipsets connected in series (that was one of my major disappointments with AM5, dampening my enthusiasm for the platform compared to AM4). The secondary chipset has the same PCIe Gen4 x4 chipset interface as an uplink, but only to the primary X670 chipset unit, not to the CPU directly.
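One way to see why the daisy chain matters: everything behind the secondary chipset shares the secondary’s Gen4 x4 link into the primary, and then competes with the primary’s own devices for the single Gen4 x4 link to the CPU. A rough oversubscription sketch (the device lists here are hypothetical, just to show the arithmetic):

```python
GEN4_X4 = 16 * (128 / 130) / 8 * 4  # ~7.88 GB/s: capacity of one Gen4 x4 chipset link

# Hypothetical worst-case demand (GB/s) behind the *secondary* X670 chipset:
secondary_demand = {
    "M.2 Gen4 x4 SSD": 7.88,
    "2x SATA SSD": 1.1,
    "USB 3.2 Gen 2x2": 2.4,
}
# The primary chipset adds its own devices on top before the CPU uplink:
primary_demand = {
    "M.2 Gen4 x4 SSD": 7.88,
    "10 GbE NIC": 1.25,
}

behind_secondary = sum(secondary_demand.values())
at_cpu_uplink = behind_secondary + sum(primary_demand.values())
print(f"secondary uplink oversubscription: {behind_secondary / GEN4_X4:.1f}x")
print(f"CPU uplink oversubscription:       {at_cpu_uplink / GEN4_X4:.1f}x")
```

Oversubscription is fine as long as the devices don’t all burst at once, which is the bet every chipset design makes; the series topology just stacks two such bets on one x4 link.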

And the X800 chipsets again being the same as the X600 ones, but with a mandatory additional USB4 controller chip, is just spitting in the customers’ faces.

But well, that’s a market without proper competition for you; maybe when Intel turns things around again, AMD will be forced to at least try a bit.

I’m pretty sure all the PCIe PHYs on the chipsets are broken up into x4 chunks, and that X570 had a way to share PCIe clock domains to combine them into an x8 port.

But I’m not so sure the subsequent Promontory chipsets can do that, since they were cost-reduced so much relative to Bixby.
Then again, perhaps we just haven’t seen an x8 chipset implementation because it’d only be useful for PCIe 3.0 devices, like @lemma mentioned, and that’s becoming a less likely scenario as hardware advances.


This is true, though the Hygon variants of AM4 also had an extra 4 PCIe lanes. There seemed to be 32 (!!!?) lanes total from the AM4 CPU on the Hygon AM4 CPUs.

At Computex I did see an MSI AM5 board that was x8/x8/x8 to the CPU (+ x4 to the chipset).
You could theoretically feed 8 lanes at Gen3 into 4 lanes at Gen4 for the chipset. MSI “might” have been doing that for the onboard NIC and M.2 slots; they said they’d get back to me.


I wonder if the reason for the extra CPU PCIe lanes was to help with CPU to CPU communication for the dual socket AM4 configurations of the seemingly client-class CPU.

Yeah, everything I’ve seen suggests 4.0 x4 chunking for Promontory 21. It’s also a max of 3.0 x4 per chip.

The 21 at the end of the chain has 4.0 x8 potential but, with a 4.0 x4 uplink, it’s unclear to me why x8 support would get die area.

I’m running two of those DIMMs on an ASRock B650 ITX board at 6000 MT/s, 36-38-38, at 1.2 V. I tried to tighten the timings but got ECC corrections. If I find some time, I’ll try upping the voltage some more and testing.

Voltage helps those DIMMs, and 1.3 V seems to be the sweet spot: not running too hot, not leaving performance on the table, and not chasing diminishing returns.
The following settings might be one example that works; note also the modified tRFC and tRC:
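When moving tRFC or tRC between transfer rates, it helps to convert between cycles and nanoseconds, since the absolute (ns) requirement is what the die actually needs. A small sketch (the 500 ns target below is just an illustrative number, not a JEDEC minimum):

```python
import math

def cycles_to_ns(cycles: int, mts: int) -> float:
    """Convert a timing in clock cycles to nanoseconds at a given transfer rate."""
    clock_mhz = mts / 2  # DDR: two transfers per clock, so DDR5-6000 runs a 3000 MHz clock
    return cycles * 1000 / clock_mhz

def ns_to_cycles(ns: float, mts: int) -> int:
    """Smallest cycle count that still meets a nanosecond target (round up)."""
    return math.ceil(ns * (mts / 2) / 1000)

# Example: a hypothetical 500 ns tRFC target needs 1500 cycles at DDR5-6000;
# the same 1500 cycles at DDR5-4800 would be a much looser 625 ns.
print(ns_to_cycles(500, 6000))  # 1500
print(cycles_to_ns(1500, 4800))  # 625.0
```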

Has anyone encountered 64 GB DDR5-5600 ECC UDIMMs in the wild yet?


Still quite a way away. I haven’t seen any 64 GB UDIMMs at all, including non-ECC.

And it took quite a while after non-ECC 48 GB UDIMMs were available until the ECC ones were. I think the Kingston ones only became available a couple of months ago…

Only at Computex, and maybe in tray-sized orders.


Wanna get a tray and share it with us?

Might also sell well via the Level1Techs Store since you would (still) have an absolute monopoly offering them to DIY retail customers.


I’d be willing to buy even non-ECC UDIMMs; 64 GB UDIMMs (ECC or not) seem to have made the news last year and then simply disappeared :frowning:


Would 4 × 64 GB ECC UDIMMs all work at 5600/4800 MT/s, or would they be downgraded to a lower speed?

Also, would that be the case with EPYC 4004 CPUs and motherboards too?

They would probably run at similar performance to 4×32 and 4×48. AMD’s spec says 3600 MT/s for two dual-rank DIMMs per channel. Some have been able to reach 6000 with a lot of tuning effort; some only around 4000.

Expect EPYC 4004 to perform the same, since the chips are pretty much identical.

Even Zen 5 still shows 3600 MT/s as the spec for two DIMMs per channel…
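The bandwidth cost of falling back to the 2DPC spec is easy to quantify. Peak numbers for a dual-channel desktop platform (each DDR5 DIMM is 64 bits wide, split into two 32-bit subchannels; efficiency losses ignored):

```python
def ddr5_peak_gbs(mts: int, channels: int = 2) -> float:
    """Peak DDR5 bandwidth in GB/s: transfers/s x 8 bytes per 64-bit DIMM channel."""
    return mts * 8 * channels / 1000

print(ddr5_peak_gbs(6000))  # 96.0 GB/s with a well-tuned kit
print(ddr5_peak_gbs(3600))  # 57.6 GB/s at the 2DPC spec, i.e. 40% less bandwidth
print(4 * 64)               # 256 GB total capacity with 4x 64 GB UDIMMs
```

So the trade at 2DPC is 256 GB of capacity against giving up roughly 40% of peak bandwidth, unless the tuning lottery goes your way.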

Damn that sucks

Hopefully Intel doesn’t have the same problem with the upcoming Arrow Lake and the W880 chipset.

Two DIMMs per channel will never run as fast as one with DDR5; it’s simply physics at these signaling speeds. See also the potential move to CAMM modules in the future, etc.

I’m sure newer platforms will improve, but 1 DIMM per channel will improve too, perhaps even more, so your expectations will go up as well. You should lower your expectations or look at a platform with more channels and registered-memory support, since those support higher capacity per DIMM.
