Which RAM generation is best?

DDR1 was the best:
It doubled the effective speed of the memory computers had access to at the time.
It had capacities 4-8x that of the previous generation. 512MB-1GB sticks were common, even some 2GB sticks. The previous gen usually topped out at 64-128MB sticks by the end.
By the end it had extremely high clocks compared to previous memory. The previous standard topped out at 133MHz (SDR), while DDR1 reached up to 660MT/s by the end, with 400 being the most common.
It had extremely tight timings. The best kits were 2-2-2-5-1T and could sometimes OC lower.
Oh, and it introduced dual channel, further doubling bandwidth over the previous gen.

I don't think there has ever been a generation with larger improvements over the previous gen, but DDR5 will come close, if not surpass it, by the end. It brings far larger capacity increases than typical, quad channel on 2 DIMMs, and will end with more than double the bandwidth of the previous gen.
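Those bandwidth claims are easy to sanity-check with back-of-the-envelope math: peak bandwidth is just transfers per second times bus width times channel count. A quick sketch (function name and the decimal-GB convention are my own):

```python
def ddr_bandwidth_gbs(mt_per_s: float, bus_width_bits: int = 64, channels: int = 1) -> float:
    """Theoretical peak bandwidth in GB/s (decimal) for a DRAM bus."""
    return mt_per_s * 1e6 * (bus_width_bits / 8) * channels / 1e9

# SDR PC133: 133MT/s, single channel -> ~1.06 GB/s
print(ddr_bandwidth_gbs(133))
# DDR-400 in dual channel -> 6.4 GB/s, roughly 6x the PC133 figure
print(ddr_bandwidth_gbs(400, channels=2))
```

That 6x jump from PC133 single channel to DDR-400 dual channel is what the post above is getting at.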


Hell yeah, I “downgraded” once from a P4 with “fast” DDR2 to an Opteron dual core with a Corsair XMS 3200CL2 DDR1 kit and it killed the P4 at everything.


Might have to go and look around for some old systems and have my way with them :grin:

Not sure if it was the best, but SDRAM was my favorite; getting away from those jank SIMM sockets was nice.

It was kind of cool how they just started stacking chips on top of each other for more density too.

Totally forgot about that stacking haha

I put my vote on DDR3.

  • With DDR3 we finally had some semblance of standardization between manufacturers and vendors.
  • The price fixing era of DDR2 had been squashed, oversight and regulation on manufacturers was more strict, and quality improved dramatically.
  • Almost out of the gate DDR3 was pushing 2000MT/s (with the applicable nForce chipset), and prices on standard 1333MT/s 4GB kits stabilized below $100. By the end of its run the fastest DDR3 was still roughly equal in bandwidth and latency to the average consumer DDR4 kit in 2017, 10 years after its introduction. (Provided, of course, a golden IMC and a motherboard that could support such speeds.)
  • Compatibility was insanely good: while DDR2 almost entirely cleared up the need to run matching DIMMs to guarantee proper performance, DDR3 could effectively be grab-bag mix-and-match and still run at rated speeds. Intel finally adopting an IMC for their processors certainly helped with this (AMD had proved this kind of reliability was possible for years), but more importantly their DDR3 IMCs continued to support their MCH feature of asymmetric multi-channel, “Flex Mode”, which allows any set of DIMMs of any speed or capacity to be paired up into both a symmetric multi-channel region and an asymmetric single-channel region. This is why you start seeing more laptops from around 2010-2011 suddenly featuring strange 3/6/12GB RAM configurations using mismatched SODIMMs.
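The Flex Mode capacity split described above is simple to work out: the overlapping capacity of the two channels is interleaved dual-channel, and whatever is left over runs single-channel. A minimal sketch (function name is my own):

```python
def flex_mode_split(dimm_a_gb: int, dimm_b_gb: int) -> tuple[int, int]:
    """Model Intel Flex Mode: overlapping capacity interleaves as
    dual-channel; the remainder of the larger DIMM runs single-channel."""
    dual = 2 * min(dimm_a_gb, dimm_b_gb)     # interleaved region across both DIMMs
    single = abs(dimm_a_gb - dimm_b_gb)      # leftover on the larger DIMM
    return dual, single

# A 4GB + 2GB mismatched pair: 4GB runs dual-channel, 2GB single-channel,
# giving one of those odd 6GB laptop configurations.
print(flex_mode_split(4, 2))
```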

Emphasis on “at JEDEC speeds”, but more importantly “at IMC rated speeds”, because JEDEC validated 3200MT/s in 2017, and yet Zen 1/Zen+ struggled tremendously with anything over 2666MT/s (and still do if you run 2Rx8/2DPC). Haswell-E also cannot run the highest JEDEC standard speed in quad-channel without overclocking. Many systems until 2019 were limited to lower speeds by default, with Intel’s 6th-9th gen Skylake derivatives sharing the same 2666MT/s-limited IMC.

512MB PC133 SDRAM was absolutely a thing by 2001 when most machines were moving to DDR. I still have some on a few boards.

With about two systems ever that supported those speeds, and none that supported them in dual channel without winning the silicon lottery. Unlike later DDR standards, DDR1 did not have any sort of SPD performance profiling. The manufacturer may have validated a maximum of 330MHz on their test equipment, but nobody made a memory controller that could handle those transfer speeds 1:1, and you were often left running 4:5 or lower offsets, effectively kneecapping the RAM by constraining the bus it’s attached to to a lower transfer rate than the RAM itself (i.e. MCH at 400 (3.2GB/s) while the DRAM is rated for 660 (5.3GB/s)). Besides that, any DDR kits over 600MT/s were exclusive limited-edition DIMMs made for bragging rights, similar to DDR2-1250 and DDR3-3200. The fastest commercially viable DDR1 was 500-550, with 500 already being a decent challenge for a 1:1 MCH:DRAM ratio below 3V.
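The kneecapping in that MCH example falls out of the numbers: throughput is capped by the slower of the two links, so a DDR-660 kit behind a 400MT/s MCH delivers DDR-400 bandwidth. A rough sketch under that assumption (function name is my own):

```python
def effective_bandwidth_gbs(mch_mt: float, dram_mt: float, bus_bytes: int = 8) -> float:
    """Effective bandwidth is capped by the slower link: the MCH-side bus
    or the DRAM itself (64-bit bus = 8 bytes per transfer)."""
    return min(mch_mt, dram_mt) * 1e6 * bus_bytes / 1e9

# DDR-660 DIMMs (5.3 GB/s rated) behind an MCH running 400MT/s:
# you only ever see the 3.2 GB/s the bus can carry.
print(effective_bandwidth_gbs(400, 660))
```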

As far as I know only the CAS latency could go lower, to “1.5”, and that was a marketing and SPD trick from GeIL. Only a few NF4U boards supported it with Athlon 64, and no Intel boards officially worked with anything lower than CL2. Otherwise the rest of the timings had to effectively remain at 2 cycles for DDR to function properly.

Dual-channel DRAM was introduced with RDRAM in 1999 on Intel i840, then again at the end of 1999 with SDRAM on ALi’s Aladdin 7 chipset. nForce 420-D didn’t introduce dual-channel DDR until the end of 2001.

Opterons killing Pentium 4s in everything is a story as old as the release of K8 Opterons. No amount of faster DRAM/MCH will save a P4.


Very well thought out, Fouquin. Still running a DDR3 system, as you know, I have had great times and memories with how much these sticks just work (and work well).

They are easily beaten now, but man oh man, in AIDA tests my RAM sticks beat DDR4 2666 and almost hit 3000.

What hurts DDR5 more is that the higher speeds increase latency; you’d need DDR5 at 6500/7000 to truly be an upgrade over a DDR4 CAS 15/16 kit. Based on how the CAMM2 memory shift is going, DDR5 might have a shorter shelf life than Rambus.
Comparing on CAS numbers, DDR2 timings varied from mobo to mobo, and that frustration pushed more people to adopt DDR3. A late-era DDR3 Haswell/Broadwell crushed Skylake.
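The CAS-number comparison above is clearer in nanoseconds: first-word latency is CAS cycles divided by the I/O clock, and for DDR the I/O clock is half the transfer rate. A quick sketch (function name is my own):

```python
def cas_latency_ns(cl: int, mt_per_s: int) -> float:
    """First-word latency in ns: CL cycles / (I/O clock in MHz) * 1000.
    For DDR the I/O clock is MT/s / 2, hence CL * 2000 / MT/s."""
    return cl * 2000 / mt_per_s

# DDR4-3200 CL16 and DDR5-6400 CL32 both land at 10 ns --
# double the transfer rate with double the CAS cycles is a latency wash.
print(cas_latency_ns(16, 3200))
print(cas_latency_ns(32, 6400))
# A typical JEDEC DDR5-5600 CL40 kit is ~14.3 ns, noticeably worse.
print(cas_latency_ns(40, 5600))
```

This is why a DDR5 kit needs both high transfer rates and reasonable CAS numbers before it beats a tight DDR4 kit on latency.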

CAMM2 is DDR5

Even though it is DDR5, CAMM2 is more of a bridge path to DDR6 for higher-density and lower-latency memory. This direction places consumer desktop in a very uncertain position.


That’s got me thinking about memory packaging vs latency.
Counter-intuitively, all the super tightly packed memory computers like Apple’s M series or even the HBM Xeons have over 100ns of memory latency… I would have assumed they’d have good memory latency given how close the memory is to the CPU.

So far it seems to be the fire breathing DDR4 systems that have the best latency using standard DIMM layout.

Based on the direction of Intel, chances are an Ultra 5/7 paired with 16-32GB of HBM is on the roadmap, as a response to Apple’s M series CPUs. Xe paired to HBM would maul the crap out of AMD APUs :ghost:

Except that clock cycle latencies have not increased very much, while the bandwidth ceiling has increased dramatically. Bandwidth is king with the core count increases seen in the last few years. See: any topic on eDRAM, LPDDRx, HBM, etc.

70GB/s at 90ns is a larger benefit to performance than 55GB/s at 49ns in any modern machine with 8+ cores.

Yep. That isn’t an accident. See above.

The problem with allowing high-latency memory is that limited core concurrency then dictates a lower maximum achievable bandwidth.
Concurrency limits are why the HBM Xeons lose over 50% of their theoretical memory bandwidth in any real-world workload.
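The concurrency-bandwidth relationship above is Little's Law applied to memory: sustainable bandwidth equals the bytes a core can keep in flight divided by the latency of each request. A toy model (the specific numbers are illustrative, not measured):

```python
def concurrency_limited_bw_gbs(outstanding_misses: int, line_bytes: int, latency_ns: float) -> float:
    """Little's Law: achievable bandwidth = bytes in flight / latency.
    bytes per nanosecond is numerically equal to GB/s (decimal)."""
    return outstanding_misses * line_bytes / latency_ns

# Hypothetical core sustaining 10 outstanding 64-byte cache-line misses:
# at 90 ns latency it can only pull ~7.1 GB/s, no matter how wide the
# memory bus is; cut latency to 49 ns and the same core reaches ~13 GB/s.
print(concurrency_limited_bw_gbs(10, 64, 90))
print(concurrency_limited_bw_gbs(10, 64, 49))
```

With enough cores the aggregate in-flight traffic saturates the bus anyway, which is why bandwidth wins on 8+ core machines while latency dominates single-threaded pointer chasing.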

The main limitation you still see is that going past two sticks causes XMP issues on desktops; look at the number of returned matched memory pairs from Microcenter’s Ryzen 7000 bundles. On laptop Ryzen 7/9, some models take a 15% performance drop with 96GB of memory, yet CAMM-based Intel laptops don’t have that drop and run perfectly at 128GB. For desktop-replacement level stuff, avoiding Ryzen 9 is reasonable.

Not sure what any of that rant about AMD has to do with DDR5 being faster than DDR4?

Correct. Core concurrency and latency limits are the bane of random walks to DRAM. Streaming data across multiple cores will eat bandwidth on the other hand, and latency becomes the lesser bottleneck. The HBM2e equipped SPR Xeons run face first into the bandwidth limit under HPC workloads that fit within the 64GB limit, but still experience an uplift in performance.

Not a rant on AMD, just about the gap in memory controller performance. A desktop AMD box requires digging through a lot of user reports to run four sticks at XMP, and vendor compatibility lists have always been an afterthought (on both sides). DDR5 has similar XMP hell to what DDR2 had; you don’t know whether a motherboard vendor’s certified list is even reliable to follow.

XMP came about with DDR3. NVIDIA had EPP for DDR2, but it was limited to their chipsets because it was done through their own qualification process. Just as with every generation of DDR, there are early- and late-era platforms supporting different speeds. DDR2 for its part was extremely straightforward: anything above PC2-8500 was overclocked, and anything above the MCH or IMC reference clock is not guaranteed to work. If you bought an i955X board on release, then don’t expect 1066MT/s DIMMs to run at that speed, because the MCH is limited to 667 no matter which third-party vendor board it is.

As for running 2DPC… Yeah, that’s been a problem for a LONG time. Intel’s earliest DDR3 IMC in Nehalem had 2DPC scaling issues above 1600MT/s and officially only supported 1066MT/s for this very reason, despite 1DPC configurations being more than capable of running over 2000MT/s. Later with Broadwell-EP they offered a 3DPC mode that required running DDR4 at 1866 to work properly, because the IMC could not handle the extra strain of a third DIMM across four channels. Today both AMD and Intel offer guidance on what does or doesn’t work in 1DPC/2DPC modes regardless of XMP. When they put the spec on their page they are explicitly telling you THAT is what works in a fully populated, fully strained scenario.


Never OCed DDR2; I just encountered many board vendors whose certified brands had changed chip suppliers at some point. Some AMD Phenom boards wouldn’t boot with four sticks, and some ASUS boards required memory with tighter timings (both AMD & Intel). Among prefabs, some makers only prefer SK Hynix or Samsung sticks; ASUS (desktops/laptops) and Lenovo were famous for this during DDR3/DDR4, and it could be the same with DDR5.

I love that this question is deeper than “what should I put in my build?”.

DDR4 gets my vote because of a recent upgrade cycle I did. I pulled some sticks out of a 6th Gen Intel NUC to do an upgrade, and on a whim, decided to see if the SODIMMs in a Ryzen 5000 series laptop were compatible.

Swapped around, booted, and it Just Worked™.

I credit DDR4 directly with machines having a meaningful working life of over a decade. Old machines benefit so much from having massive, inexpensive RAM being so ubiquitous.
