Months continue to roll by with an absolute dearth of information about putting 4x32 into a consumer system and knowing exactly what to expect. Weeks after the new 24/48GB modules arrived, and months after motherboards gained 4x48 support, not a single mainstream outlet has demonstrated or benchmarked it.
I have a developing hardware emergency on my end, necessitating a new build, and I'm completely stuck on what to do. My current 64GB machine often hits swap (aggressively). 96GB might work, but I'd probably still have to close a few things before loading some large files. 128GB would be nice if it means I can keep my developer tools and environment open while debugging problems in said large files.
If I need to purchase a 13700 with 128GB of memory today, what can I reasonably expect? I'm not interested in any form of overclocking or gaming, but I do like to keep my machines for as long as possible, meaning some amount of performance would be appreciated. Will this forever be living at 3600 MT/s, or is something reasonable like 4400 MT/s plug-and-play on Intel nowadays?
Yes and no. Yes for their articles and to peek at what they're comfortable selling, but a no on price for someone like me.
Their most recent article looking at DDR5 compatibility was over a year ago and covered only 12th gen[1]. I've been waiting for a follow-up for a while now, but none seems to be coming. And of course nothing on the newer modules yet either. It does look like there's nothing to worry about, except I do a lot of traditional compiling (not shader compiles) and was hoping to see a proper compile benchmark from another party to confirm.
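In the absence of third-party numbers, a DIY compile benchmark across memory settings isn't hard to run. A minimal sketch in Python, assuming you substitute your own real build command for the placeholder (the command shown is just a stand-in workload):

```python
# Hypothetical harness for a DIY compile benchmark: run the same build
# command at each BIOS memory setting and compare median wall times.
# The command below is a placeholder -- substitute your actual build.
import statistics
import subprocess
import sys
import time

def bench(cmd, runs=3):
    """Run cmd `runs` times and return the median wall-clock seconds."""
    times = []
    for _ in range(runs):
        t0 = time.perf_counter()
        subprocess.run(cmd, check=True, capture_output=True)
        times.append(time.perf_counter() - t0)
    return statistics.median(times)

if __name__ == "__main__":
    # Placeholder workload; replace with e.g. ["make", "-j16"] in a clean tree.
    cmd = [sys.executable, "-c", "sum(i * i for i in range(10**6))"]
    print(f"median: {bench(cmd):.3f}s for {cmd!r}")
```

Rebooting into each memory profile and re-running the same script gives a crude but directly relevant comparison for a compile-heavy workload.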
However, it is great that they're even shipping 128GB configs, so it must work in some capacity. I just have no idea what to expect, no idea what they had to do to get it working, and no idea if they're just leaving it at 3600, etc.[2]
Since a similarly equipped Puget system is literally twice the price of DIY, they're well out of budget for an actual purchase.
Aye, thanks for reminding me to check that thread. I was following it about a month ago but indeed there’s been some progress since.
Unfortunately I'm looking for Intel here actually - I should have made that clearer, sorry. And it seems like AM5 is still a quagmire of playing games with timings, voltages, and memtest, which is far away from just working. And super confusing, really. It almost seems like 4x48 was easier to get running than 4x32 in that thread… what a mess.
I switched to an MSI MPG Z690 and upped to a 13700K for my server… Right now I have 2x32GB Crucial 4800 MT/s in it. I'm buying another 2x32GB at the end of the week and hoping for the best.
Kingston has a few 4x32 kits with XMP, which should have a very high probability of working out of the box. Corsair has a 4x48 kit. At 5200-5600 they are not the fastest, but they're probably the safest bet for a hassle-free setup.
I just saw that the ASUS ProArt Z790 even has the Corsair kit on its QVL for 13th-gen processors.
128GB is no issue with current consumer hardware; as long as you have four DIMM slots, all you need is 4x32GB DIMMs, which have been around for years.
I’d question the need for DDR5, though, because you’d really need a rather specific workload to benefit from the bandwidth difference at these RAM sizes. If you have poor locality in your data but lots of it, chances are you’ll be mostly latency bound and the difference won’t do much.
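The latency-bound point is easy to demonstrate even from high-level code. A rough sketch, assuming NumPy is available, comparing a streaming sum against a random gather over the same buffer; the random pattern defeats the hardware prefetchers, so each element costs roughly a full memory round-trip and extra bandwidth buys little:

```python
# Rough sketch of sequential (bandwidth-friendly) vs random (latency-bound)
# access over the same data. Only suggestive of the hardware effect, since
# NumPy adds its own overhead, but on buffers much larger than L3 cache
# the access-pattern difference shows up clearly.
import time
import numpy as np

N = 1 << 24                      # 16M int64s = 128 MiB, well past L3
a = np.arange(N, dtype=np.int64)
idx = np.random.permutation(N)   # random gather order

t0 = time.perf_counter()
seq = a.sum()                    # streaming loads, prefetch-friendly
t_seq = time.perf_counter() - t0

t0 = time.perf_counter()
rnd = a[idx].sum()               # random gather: ~one cache miss per element
t_rnd = time.perf_counter() - t0

print(f"sequential: {t_seq:.3f}s  random: {t_rnd:.3f}s "
      f"(random is {t_rnd / t_seq:.1f}x slower)")
```

Both sums are identical; only the access pattern differs, which is exactly the situation where faster transfer rates stop paying off.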
I like RAM because I run lots of VMs simulating the operation of physical systems. With DDR5 being crazy expensive last year, especially with ECC, I just stuck with DDR4-3200 ECC, which was cheaper than DDR5 without ECC, and used a Ryzen 5800X3D for easy ECC support and to compensate a bit for the DRAM latency and bandwidth (also because it was so cheap, I could not resist).
It's beyond 128GB where terra incognita starts on dual-channel consumer hardware today. But if you're really memory bound, you can hunt for a second-hand Xeon bargain instead, where registered ECC DRAM can be ridiculously cheap in comparison, but where scalar performance may easily be half of what even brand-new notebooks peak at.
The combination of peak scalar CPU power and RAM capacity remains expensive; if you only need one of them and are ready to go with last gen (e.g. Ryzen 5000 and DDR4), you can save a lot of money.
Thanks for the update there. Glad to know it worked for you, but sadly… I lost 2 days this past weekend trying to get mine to work.
Once I was reasonably sure my base 4800 was OK, I tried enabling XMP to get my rated 5200. Wow, it did not like that at all. I'd get into Windows and then just freeze randomly sometime later. I lost two days because even after turning XMP back off, things were still not quite right and I'd freeze several hours later. It seemingly needed a full CMOS reset to get things back to normal.
So at least for my 13700 + ASUS Prime Z790 + Kingston kit, it was not usable at 5200 after just turning on XMP. Maybe I'll try again some months from now, but currently I value stability over the 1-2% extra perf I might get for what I'm doing here. Still sucks, though, that things are not plug-and-play in many instances.
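For anyone retrying XMP after an episode like this, a quick user-space soak check can flag gross instability before you trust the machine again. A crude stdlib-only sketch; it is no substitute for memtest86+ or an overnight run, since it only exercises memory the OS hands this one process:

```python
# Crude user-space memory soak check: fill a large buffer with a pattern,
# then re-read and verify it for a few passes. Catches gross XMP
# instability quickly after a BIOS change, but is NOT a substitute for
# memtest86+ -- it cannot touch memory outside this process.
import os

CHUNK = 4096  # pattern/verify granularity in bytes

def soak(size_mib=1024, passes=3):
    """Write and verify pseudo-random patterns over `size_mib` of RAM."""
    buf = bytearray(size_mib * 1024 * 1024)   # MiB-aligned, so CHUNK divides it
    view = memoryview(buf)
    for _ in range(passes):
        pattern = os.urandom(CHUNK)           # fresh pattern each pass
        for off in range(0, len(buf), CHUNK):
            view[off:off + CHUNK] = pattern   # write phase
        for off in range(0, len(buf), CHUNK):
            if view[off:off + CHUNK] != pattern:
                return False                  # mismatch: likely unstable
    return True

if __name__ == "__main__":
    print("PASS" if soak() else "FAIL - recheck memory settings")
```

Running it with `size_mib` set near your free RAM for an hour or so after each settings change is a cheap first filter before the multi-hour dedicated tools.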
You can try setting the voltage manually first, 1.35V, then reboot and then try XMP.
Or set the voltage first, then manually set 5200 MT/s and let it train by itself.
Maybe for another weekend sometime when I can stomach the downtime. Yes, I see that you went to 1.35V, so that would probably be my first tweak. My kit defaulted to 1.25V per its spec, but maybe that's too low.
The Hynix M-die 24Gb modules seem to be much easier on the memory controller than their 16Gb M- and A-die modules. I wouldn't recommend getting the Corsair 192GB kits, as those are not using Hynix 24Gb modules; I think the best option would be to go for two kits of 2x48GB TeamGroup or G.Skill. The G.Skill 2x48GB 5600MHz CL40 kit at 1.1V should be M-die 24Gb modules, but I can't guarantee the higher-binned kits by both G.Skill and Team are