DDR5 48GB modules: will they work on current Ryzen AM5 motherboards and with the 7950X?

Hi, does anyone know whether there will be a BIOS update to support 48GB modules on current Ryzen 7000?
And will the 7950X CPUs work with this amount of RAM at all? I need 4x48GB for 192GB.
Thank you

Most probably this will be taken care of by the next chipset upgrade e.g. the X770 boards coming out with the Ryzen 8000 products. Reason being, I just do not see the market rushing out to support this right now. As for your use case, wait for Ryzen 8950X or get a Xeon/Threadripper build.


If you’re chasing 192GB of RAM, you should probably be targeting EPYC or Threadripper to get both the I/O and memory bandwidth, unless you know you have a very niche workload.

192GB of RAM is all well and good, but that RAM won’t fill itself - the data needs to come from/go to somewhere.

That RAM will work on SP5; I don’t know about AM5.

So it seems only Intel 13th gen will work with this RAM this year.

Does anyone know whether we will only see 48GB DDR5 modules for desktop and laptop this year, or whether we will get 64GB modules like was talked about from the start, when DDR5 was supposed to bring us 2x the capacity and faster speeds? We got some speed increase, but then it’s 48GB instead of 64GB capacity. Why can’t we get laptops capable of 128GB with 2 sticks if the client needs it? For servers you can buy 64GB sticks, but for desktop only 48GB? Thank you for any response.


Yes, I saw that. It seems we will get 192GB support, but will we get 4x64GB DDR5 sticks like was presented three years ago, when Crucial was showing off the advantages of DDR5?

Probably at some point… DRAM node scaling is slower lately than compute scaling. We’re still early in the DDR5 cycle.

Well, DDR5 didn’t change much in that regard. DRAM has been a problem child for about a decade now, and I don’t see any changes any time soon. DDR5 only really keeps the underperforming DRAM curve going. There is a reason why chip manufacturers use more channels, on-die DRAM, 3D-stacked cache, HBM or whatever. Everyone knows DRAM isn’t keeping up, compared to pretty much everything else.

About time we got triple/quad channel on consumer boards for bandwidth, and hopefully someone invents some lower-latency main memory so we can phase out DRAM at some point.

I’m pretty sure we will. But with the usual 4x dual-rank DIMM penalty resulting in 3800-4000 MT/s because of 2DPC on non-RDIMMs. At least you can get 128GB with good clock speeds. I personally treat my board as if it only had 2 DIMM slots, because 2DPC just sucks.
48GB DIMMs are released or soon will be, and 64GB is only a matter of time, as the technology is there (I believe it’s the 32Gbit dies that were missing previously; “normal” DDR5 uses 16Gbit dies). 128-512GB DDR5 DIMMs will probably remain exclusive to server platforms (RDIMMs).
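For what it’s worth, the 48GB parts come from 24Gbit dies, which is why the capacity looks “odd”. A quick back-of-the-envelope sketch of the math, assuming the usual x8 devices and 8 devices per rank on a 64-bit non-ECC UDIMM:

```python
# Back-of-the-envelope DDR5 UDIMM capacity: ranks * devices_per_rank * die_density.
# Assumes x8 DRAM devices and 8 devices per rank on a 64-bit non-ECC UDIMM.

def udimm_capacity_gb(ranks: int, die_density_gbit: int, devices_per_rank: int = 8) -> float:
    """Module capacity in GB (8 Gbit = 1 GB)."""
    return ranks * devices_per_rank * die_density_gbit / 8

print(udimm_capacity_gb(2, 16))  # 32.0 GB -> the familiar 32GB dual-rank DIMM (16Gbit dies)
print(udimm_capacity_gb(2, 24))  # 48.0 GB -> today's 48GB dual-rank DIMMs (24Gbit dies)
print(udimm_capacity_gb(2, 32))  # 64.0 GB -> the awaited 64GB dual-rank DIMMs (32Gbit dies)
```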

Three months ago I would have said yes, but meanwhile I have my doubts.
I’m waiting for 2x48GB with good timings; that’s just right for me. I think 64GB DIMMs with good timings means waiting for Zen 5.
Four high-capacity DIMMs on a DDR5 consumer platform is a recipe for headaches, at least for the time being.

Thank you. I only hope that for laptops 2x64GB = 128GB can become a standard, with up to 192-256GB for desktop.

Actually, given the size increase of L3 caches, this could theoretically get pretty good pretty soon. Imagine having something like 256 MB of L3 cache and then a PCIe 6.0 interface (remember PCIe 5.0 is fast enough to rival DDR2 speeds) with a 4x m.2 delivering two DRAM channels and two data channels. The OS core (drivers, scheduler, resource allocation, virtualization) could definitely fit inside that L3 cache with no problem, and the m.2 DRAM sticks would be fast enough to solve most computing problems in a reasonable fashion.
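To put rough numbers on the “PCIe 5.0 rivals DDR2” part, here’s a quick sketch with nominal per-direction figures and protocol overhead ignored (the DDR configurations are just examples I picked):

```python
# Rough per-direction bandwidth comparison (nominal figures, protocol overhead ignored).

def pcie5_x4_gb_s() -> float:
    # PCIe 5.0: 32 GT/s per lane with 128b/130b encoding -> ~3.94 GB/s per lane
    return 32 * (128 / 130) / 8 * 4

def ddr_gb_s(mt_per_s: int, channels: int = 1, bus_bytes: int = 8) -> float:
    # Peak DDR bandwidth: transfer rate * 8-byte bus width * channel count
    return mt_per_s * bus_bytes * channels / 1000

print(f"PCIe 5.0 x4         : {pcie5_x4_gb_s():.1f} GB/s")            # ~15.8
print(f"DDR2-800, 2 channels: {ddr_gb_s(800, channels=2):.1f} GB/s")  # 12.8
print(f"DDR5-4800, 1 channel: {ddr_gb_s(4800):.1f} GB/s")             # 38.4
```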

Not sure if we will ever see that, but given the 128 MB L3 Cache of the 7950X3D, I don’t think it’s an unreasonable path to take and we will probably be there in two or three generations of hardware.

Which increased latency by quite a bit, I must add. There is a reason why SRAM is kept small. Those huge SRAM stacks are only feasible because DRAM sucks so badly, and because AMD didn’t have to increase die space to do it.

96MB is available for one CCD and 32MB for the other one. No core can access 128MB. And this is important. Marketing likes to add all caches together to get a bigger number. I think for AMD it’s 144MB or so. It’s like calling my CPU a 60GHz CPU because 12x5GHz. These are numbers for dumb people so they buy stuff.

Why M.2 form factor? M.2 in non-mobile platforms is cancer. And good luck running your memory off four PCIe lanes located at the outer edges of the board. Won’t work.

With CXL we can now attach memory over PCIe thanks to improved bus specifications. But latency still makes this a subpar choice. It’s better than I expected, but right now latencies are on par with grabbing data from the worst NUMA domain you can imagine. Which is fast and amazing by itself, but certainly not a replacement for your standard DRAM setup.


Yes, but latency is not the be-all and end-all, and neither is throughput. Frequency determines throughput, and if that is high enough you can offset a lot of latency - especially if you have two or three ports working in tandem. Of course random reads will be a bitch, but most accesses aren’t random, especially not with memory paging. With 8k pages available, latency is not as much of a problem as most people think.

Sure, for some specialized workloads you need low-latency memory. For everything else, some extra latency can be easily mitigated - think bulk loads like textures from RAM to GPU or from disk to RAM.
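As a rough illustration of how a big enough transfer hides a fixed access latency, here’s a toy calculation (the 1 µs latency and 16 GB/s link are made-up ballpark numbers, not measurements of any real device):

```python
# Toy model: effective throughput of a transfer that pays a fixed access latency up front.
# The 1 microsecond latency and 16 GB/s link are made-up ballpark numbers, not measurements.

LATENCY_S = 1e-6        # assumed fixed per-access latency
BANDWIDTH_B_S = 16e9    # assumed link bandwidth in bytes/s

def effective_gb_s(transfer_bytes: int) -> float:
    total_time = LATENCY_S + transfer_bytes / BANDWIDTH_B_S
    return transfer_bytes / total_time / 1e9

for size in (4 * 1024, 64 * 1024, 2 * 1024 * 1024, 64 * 1024 * 1024):
    print(f"{size // 1024:>6} KiB transfer: {effective_gb_s(size):5.1f} GB/s effective")
# Small transfers are dominated by latency (~3 GB/s at 4 KiB);
# bulk transfers approach the raw link speed (~16 GB/s at 64 MiB).
```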

Simply put, because a PCIe 6.0 x4 port offers the same bandwidth as a PCIe 4.0 x16 port.
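Quick sanity check on that equivalence, using nominal per-lane rates and ignoring encoding/FLIT overhead:

```python
# Nominal per-direction link bandwidth by PCIe generation (encoding/FLIT overhead ignored).
# The per-lane rate roughly doubles each generation, so two generations buy you a 4x lane reduction.

PER_LANE_GT_S = {"3.0": 8, "4.0": 16, "5.0": 32, "6.0": 64}

def link_gb_s(gen: str, lanes: int) -> float:
    return PER_LANE_GT_S[gen] / 8 * lanes  # GT/s -> GB/s per lane, times lane count

print(f"PCIe 4.0 x16: {link_gb_s('4.0', 16):.0f} GB/s")  # ~32 GB/s
print(f"PCIe 6.0 x4 : {link_gb_s('6.0', 4):.0f} GB/s")   # ~32 GB/s
```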

There is merit in moving everything to the m.2 form factor; full x16 ports just don’t make much sense anymore, since we don’t have anything that can saturate that throughput. It is likely that we will soon start seeing 2x m.2 to PCIe x16 adapters on motherboards, especially with the rise of riser cables.

Fact of the matter is that in consumer space, PCIe x16 only has one real driver left, and that is the GPU - and even there a dual m.2 to PCIe x8 riser cable should be good enough for a vertical mount GPU. Everything else can be done by m.2 slots just as well as PCIe, and it takes up less space to boot. In the professional space I bet things are more complicated though.

I think if you need TONS upon TONS of RAM, you don’t need low latency RAM. And if you need low latency RAM, then you’re not working with very large datasets. In the general case, of course.

At least one respectable UK outlet has 48GB DDR5 DIMMs listed in stock: https://www.scan.co.uk/shop/computer-hardware/memory-ram/3490/3676/3675

Actually, all the L3 is accessible by all the cores. There is just a higher latency penalty for reaching into the opposite CCD; it’s why there are references to inter-CCD latency. The hit really only affects especially latency-sensitive workloads like gaming, though. The type of datacenter work they were built for in EPYC-X probably doesn’t even care.


Here we go, the waiting was worthwhile!

Gigabyte X670E AORUS Master 192 GB DDR5-6000 Memory:

AM5 4x 48GB at 6000MT/s

Right now you’re lucky to get 128GB running even at 4800 MT/s. This looks too good to be true.


We will see. It’s with a new AGESA and the newest ICs from SK Hynix.