Why are there 4 DIMM AM5 boards at all?

After a lot of study of the motherboard manufacturers’ memory QVLs and the internet, it seems to me that the chance of 4 DIMMs working is not zero, but small. Particularly for the 4x48 GB combo (if you do not want to run it at 3600 MHz, which is the AMD spec for 4 DIMMs). So why are these boards produced at all?

Could I suggest that the motherboard manufacturers restrict themselves to producing 4-DIMM boards that really work, and otherwise stick to boards with 2 DIMMs?

Since the memory controller is on the CPU, AMD has to be included in the discussion. This should also include the memory makers…

I have to apologize, but the whole thing is completely frustrating.

They will work; the question is at what speed.

I guess a major reason is upgradeability. If you get 2x16GB you can later add another 2x16 without too many issues. 4x single-rank DIMMs work way better than 4x dual-rank.

On the flip side, there is little advantage to two dimm (1 DPC) boards, unless you want to clock memory very high (8000+).

There is also the possibility of later CPUs on the platform performing better (or firmwares), which we are seeing now. It’s just that people won’t be happy with 4x48GB at 5200-5600 when they see 2x16/24 can run at 7600-8000 or 2x32/48 can run at 6000-6400.

There is a limited number of 1DPC boards, but 2DPC boards don’t have much of a disadvantage if you only populate one DIMM per channel, so I’m not sure what you’re looking for? Save $10 on the board? Save space?

We might see CAMM modules for the next generation, but it seems likely that we’d only get one slot, with basically a single 128 bit wide connection (equivalent to dual channel ddr5). So then everyone will be limited to 96GB (or 128 when 32Gb dies arrive)
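
If anyone wants to sanity-check those ceilings, here’s the rough arithmetic - a minimal sketch assuming a 128-bit module built from x8 devices with two ranks and no extra ECC devices (illustrative assumptions, not a spec):

```python
# Rough capacity arithmetic for a single 128-bit (dual-channel DDR5) module.
# Assumptions: x8 DRAM devices, two ranks, no extra ECC devices.
def module_capacity_gb(die_gbit, bus_width_bits=128, device_width=8, ranks=2):
    devices_per_rank = bus_width_bits // device_width  # 16 devices across both channels
    return devices_per_rank * ranks * die_gbit / 8     # Gbit -> GB

print(module_capacity_gb(24))  # 24 Gbit dies -> 96.0 GB
print(module_capacity_gb(32))  # 32 Gbit dies -> 128.0 GB
```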

While I appreciate @quilt’s explanation of the current state, I think the OP argues not only that it’s frustrating, but that, with knowledge of the current state, quite a few decisions, especially by motherboard manufacturers, don’t look great.

The introduction of AM5 came with a significant rise in motherboard costs, explained to the consumer as a result of the required higher complexity, etc.

Offering simpler boards with only two DIMM slots (because this is the only memory configuration that achieves commonly advertised memory speeds) should allow for cheaper boards - there is clearly a drop in complexity.

Another frustration with AM5 boards is the layout and routing of PCIe lanes. There is a plethora of AM4 boards (same number of PCIe lanes from the CPU) that are better arranged for uses other than “stick one GPU onto the board”.

I agree with the arguments.

It’s only that Zen 5 on AM5 - including on this channel (for instance Wendell’s video today on the watercooled ASRock rack server) - is nowadays considered a workstation/server replacement and is compared to Threadripper systems. 96 GB @ 5600 MHz on two channels is OK-ish, but 192 GB @ 3600 MHz is not, considering the awesome, memory-hungry AVX-512 unit, which is estimated to need dual-channel DDR5-“20000” to be fully fed (see Zen5's AVX512 Teardown + More...) - or, alternatively, a Threadripper system, which has either a 4- or 8-channel memory system but so far only Zen 4 and “AVX512/2”.
I reckon that 192 GB is not really too much for a server/workstation these days. My Threadripper machine has 512 GB of RAM…
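
To put rough numbers on that, here’s a back-of-the-envelope sketch (theoretical peak = transfer rate × 8 bytes per 64-bit channel × channel count; the quad-channel DDR5-5200 line is just an illustrative Threadripper-style configuration, and these are peaks, not sustained figures):

```python
# Back-of-the-envelope peak DRAM bandwidth for the configurations above.
def peak_gb_per_s(mt_per_s, channels=2, bytes_per_transfer=8):
    return mt_per_s * bytes_per_transfer * channels / 1000

print(peak_gb_per_s(5600))              # 2 DIMMs at DDR5-5600      -> ~89.6 GB/s
print(peak_gb_per_s(3600))              # 4 DIMMs at DDR5-3600      -> ~57.6 GB/s
print(peak_gb_per_s(20000))             # hypothetical "DDR5-20000" -> ~320.0 GB/s
print(peak_gb_per_s(5200, channels=4))  # quad-channel example      -> ~166.4 GB/s
```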

If one is just looking at consumer level boards, and from a regular consumer perspective, more RAM > faster RAM nearly always. Being able to just drop in another pair of RAM modules to double your RAM capacity has always been pretty neat and useful to be able to prolong the life of an aging system.

At higher levels (“pro-sumers”, special use servers, high end workstations, etc) then the faster RAM becomes more relevant, in which case I agree that you need more special and capable boards.

Even at 3600 MHz, heavily used DIMMs will warm up quite a bit. So if you have an application that requires a lot of memory and uses it heavily, maybe four DIMMs is not so useless after all - you might have had to throttle down anyway. That said, yeah, in general we would definitely benefit from speedier 2DPC modes.

Or better yet, give us two more memory channels to begin with. But that would encroach on more expensive CPU lines…

It would have been nice if the non-Pro Threadripper had, for instance, 48 PCIe lanes instead of 48 gen 5 + 40 gen 4. I’d be interested to know what the price point would be. Sub-$1000 CPUs and 500-800 euro/dollar boards?

That said, it’s still 1 channel per CCD. The same bandwidth limitations exist proportionally. Plus the infinity fabric limitations. Capacity of RDIMMs is higher, but the big ones are astronomically priced.

I don’t know if it is the AI craze, or if we are just hitting a wall with signalling and manufacturing.

Apple has a good workaround but it’s expensive too and has zero expandability. Intel is working on on-package memory too for mobile. I hope camm modules can be the solution on consumer desktop but I fear for extensibility limitations there too.

For enterprise HBM + CXL might be the future, but again at significant cost.

Developments will be interesting but I’m worried they might not pan out for enthusiasts. The middle class might disappear or look like the Mac Studio in a decade.

I can only agree but I fear it’s just the limitations of DDR5. Memory bandwidth not scaling with compute has been an issue in HPC for a lot longer but it’s trickling down.

I will not be surprised to see even consumer platforms switch to registered ECC RAM in a generation or two as a way to keep memory speeds and capacity up.

This has been the bane of my existence for the last ~10 years; HBM seems to be the only possible solution for the foreseeable future.

The problem is HBM is super expensive to package:


Each stack of HBM2E has 6,303 pins! And with 4-8 stacks for most processors, that has to be a nightmare to successfully bond.
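
Just to spell out what that adds up to per package (simple multiplication of the figures above):

```python
# Total microbump count implied by 6,303 pins per HBM2E stack.
pins_per_stack = 6_303
for stacks in (4, 6, 8):
    print(f"{stacks} stacks -> {stacks * pins_per_stack:,} bumps")
# 4 -> 25,212   6 -> 37,818   8 -> 50,424
```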

Usually 4800-5600 in what I follow. 6000’s been hard to keep stable; some builds get stuck lower. But a 1.2-2.0 GT/s overclock’s not exactly slight.

I’ve done desktop 192 GB builds and a lot of 128s. 2x48 takes some of the pressure off (my workloads pretty much always fit within 96 GB if I can fix the code) and the 128s can move to 2x64 when that’s available. But it seems a fair bit of the 4x48 on upper-end desktops would expand to 4x64 too.
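
For reference, the capacity ladder being discussed here (the 64 GB sticks are the hypothetical future modules mentioned above, not something on shelves today):

```python
# Total capacity per kit: stick count x GB per stick.
for sticks, gb in [(2, 48), (4, 48), (2, 64), (4, 64)]:
    print(f"{sticks} x {gb} GB = {sticks * gb} GB")
# 96, 192, 128, 256 GB respectively
```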

In fairness, PCIe 5.0 dielectrics (and, at the high end, 5.0 switches) aren’t cheap. I’m not quite comfortable saying designing for 32 GT/s lanes helps DDR5 hit 8 GT/s, but I’d be pretty surprised if it didn’t.

My ATX experience is 4800’s about the upper bound for quads in common ambients (20-30 °C) before getting into DDR-specific cooling. Top-exhausting AIOs might be a bit better, but I haven’t tested that specific case.

Yup. I’m not sure I want to know what the 7900X and 7950X would cost if AMD’d merged them into the 7960-7970-7980-7990X line. But if AM6 was a quad channel socket for a 10900X or 10950X or whatever without much of a price increase I’d be fine with that. Really really fine.

I’ve had this thought too, even to 32 lanes.

I wouldn’t really say it’s a wall with signaling and manufacturing, but it seems to me there’s a Moore’s Law-like slowdown starting across the rest of the platform as well.

Yeah, that’s disappointing. I’ve gotten 2 x 32GB 6000/CL30 G.Skill Neo to work, but 4 x 16GB of the same type is a no-go on an ASRock B650E Taichi / 7700X setup. Four sticks will run at 4800, but I can’t comment on stability, as I immediately switched back to the 2 x 32s. I understand Team Blue has the same problem, so the suggestion by someone here that the issue is with DDR5 itself may be onto something.

I think a big part of the reason is that consumers are conditioned to expect 4 slots now. Right or wrong, a 2-slot board is going to be seen by most as “cheaping out”. Even people who know better will still often think “yeah, I won’t use them, but having them is nice just in case”.

Right.
I am not sure whether RDIMMs perform any better than UDIMMs at 2DPC.
For me, JEDEC should have dropped 2DPC support with DDR5 in the first place.

Yeah, 4 DIMM AM5 can be finicky, especially with high capacity kits. But it’s good for futureproofing! You can start with 2x16GB and add more later. Plus, some folks aim for crazy memory overclocks, which benefit from 2 DIMM setups.

That’s what really needs to happen, but margins are so slim on consumer DRAM that manufacturers will continue to cheap out with half measures such as on-die ECC and CUDIMMs.

The unintended positive consequence of four slots is that the DIMMs are farther apart when using only two, which improves airflow.

Systems that use RDIMMs are better than systems that use UDIMMs: because of ECC, and because nearly every system that uses RDIMMs gives you more than 2 channels. My motherboard (EPYC Genoa, 16 core) has 12 DIMM slots and 12 channels - none of this 2 DIMMs per channel stuff, just more channels.

1 DIMM per channel alone solves a lot of problems.
