
Future of PC memory: why do we still have DDRx?


#1

As far as my amateur understanding of such things goes, the closer and faster memory is, the faster computers can do things. CPUs have slowly incorporated more and more subcomponents, decreasing latency and increasing performance. If you're old enough to remember the revelation that was AMD bringing the memory controller on-die rather than leaving it on the northbridge, you'll know how much of an impact that can have.

We've seen faster speeds and greater bandwidth, but there is still a physical distance. I understand that if you were to put 16 GB of RAM on-die the chip would be ridiculously expensive and enormous, so that isn't practical. And of course the on-die cache is larger now than my first PC's total RAM. But the form factor of sticks sitting a good inch or more from the CPU has persisted since then.
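The physical-distance point can be sketched with back-of-envelope numbers. Everything below is an assumption for illustration (signals travelling at roughly half the speed of light in a PCB trace, a 4 GHz core clock), not a measurement of any real board:

```python
# Back-of-envelope: what does an inch of trace cost in CPU cycles?
# Assumed figures: signal speed ~0.5c in FR-4 copper, 4 GHz core clock.
SPEED_OF_LIGHT_M_S = 3.0e8
TRACE_SIGNAL_SPEED_M_S = 0.5 * SPEED_OF_LIGHT_M_S  # assumed ~0.5c
DISTANCE_M = 0.0254          # one inch, CPU to DIMM
CPU_CLOCK_HZ = 4.0e9         # assumed 4 GHz core clock

one_way_ns = DISTANCE_M / TRACE_SIGNAL_SPEED_M_S * 1e9
round_trip_ns = 2 * one_way_ns
cycle_ns = 1.0 / CPU_CLOCK_HZ * 1e9
cycles_lost = round_trip_ns / cycle_ns

print(f"round trip over 1 inch: {round_trip_ns:.2f} ns "
      f"(~{cycles_lost:.1f} cycles at 4 GHz)")
```

So the wire alone already costs more than a full cycle per round trip. That said, real DRAM access latency (tens of nanoseconds) is dominated by the DRAM array and controller, not trace length, so moving memory closer mostly buys you wider, cheaper links rather than a huge latency cut.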

My question is: with technologies like silicon interposers and HBM2 on graphics cards, is there a practical or physical reason we haven't seen similar developments in main system RAM? Would it just be too expensive? I appreciate that it would limit upgrades, but are we going to see it in the not-so-distant future?

I would think, given the large performance improvement you can get from increasing cache, that memory installed on an interposer (or whatever Intel is calling their technology for putting AMD graphics next to their CPU) could be used as a last-level cache. Most programs can run inside 4 GB of memory; would running that alongside your traditional 16 GB-plus of main memory make a significant difference?
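One rough way to reason about that idea is average memory access time (AMAT) across the two tiers. The latencies and hit rates below are illustrative assumptions, not measurements of any real part:

```python
# Average memory access time (AMAT) for a hypothetical on-package
# memory tier acting as a last-level cache in front of DDR DIMMs.
# All numbers are illustrative assumptions, not measurements.
HBM_LATENCY_NS = 80.0    # assumed latency of the on-package tier
DDR_LATENCY_NS = 100.0   # assumed latency of going out to the DIMMs

def amat(hit_rate: float) -> float:
    """Average access latency when a fraction `hit_rate` of requests
    is served by the on-package tier and the rest falls through."""
    return hit_rate * HBM_LATENCY_NS + (1.0 - hit_rate) * DDR_LATENCY_NS

for hr in (0.50, 0.90, 0.99):
    print(f"hit rate {hr:.0%}: AMAT = {amat(hr):.1f} ns")
```

The catch is that HBM's headline advantage is bandwidth rather than latency; its access latency is in the same ballpark as DDR, so the latency win from such a tier may be modest even at high hit rates. The bandwidth win is another story.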

Or are we going to have to wait for stacking technology that would allow memory to be layered on top of the CPU, or something more exotic like that, to see the next leap in latency and performance?

I suppose at the end of the day it'll all come down to money and market inertia like everything else, but after seeing lots of innovation along those lines in the GPU space, I'm surprised I haven't heard of much in the works for main memory.

Is the absurdly parallel nature of graphics processing the only situation that can make use of the extra bandwidth and lower latency? One would imagine that if businesses are willing to spend thousands on server CPUs, it would give a significant boost to a chip with dozens of instruction-hungry cores. Would it not be worth the additional cost? Or is there something in the works that I've not heard about yet?
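To put some numbers on the many-core question: the per-channel and per-stack figures below come from the DDR4-3200 and HBM2 specs, while the per-core streaming demand is purely an assumed figure for illustration:

```python
# Rough bandwidth budget for a many-core streaming workload.
# DDR4-3200: 3200 MT/s * 8 bytes per transfer = 25.6 GB/s per channel.
# HBM2: 1024-bit bus at 2 Gb/s per pin = 256 GB/s per stack (spec max).
# Per-core demand is an assumed figure, not a measurement.
DDR4_3200_CHANNEL_GBS = 25.6
HBM2_STACK_GBS = 256.0
CORES = 64
PER_CORE_DEMAND_GBS = 4.0   # assumed streaming demand per core

total_demand = CORES * PER_CORE_DEMAND_GBS
ddr_channels = total_demand / DDR4_3200_CHANNEL_GBS
hbm_stacks = total_demand / HBM2_STACK_GBS
print(f"{total_demand:.0f} GB/s total demand: "
      f"{ddr_channels:.0f} DDR4 channels or {hbm_stacks:.0f} HBM2 stack(s)")
```

Under those assumptions, feeding dozens of cores takes ten DDR4 channels' worth of pins and traces, versus a single HBM2 stack on a package, which is exactly the trade-off the question is gesturing at.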


#2

I think you've already answered your own question; AFAIK it is indeed a price issue. HBM2 is one of the reasons RX Vega is rather expensive, so as a result CPU manufacturers will stay away from it. Furthermore, PCs nowadays are fast enough for the average consumer, which also leads to more incremental changes than anything else (for most people ARM devices are fast enough). Also, before the launch of AMD's Ryzen line there was no competition in the CPU market whatsoever. Take a look at the number of cores since the inception of Intel's i3/i5/i7 line up until now and you'll get what I mean.
I also think another reason the high-performance line-up doesn't use HBM2, for instance, is that it would need a different memory architecture, which makes the approach infeasible for now.
In the end, I've no doubt we'll get to HBM, but first we'll have to get to DDR5…

For average consumers, this isn't that bad. Sure, there is a difference between DDR3 and DDR4 (and DDR5), but in average scenarios the difference is negligible. In gaming, the GPU is (almost) all that counts :wink:

Edit:

If you truly need this extra performance, most of the time you can just build a GPU cluster.


#3

Do you think moving more memory closer will be the next natural evolution? So we just have to wait for yields to go up and costs to come down, then we'll see that happen? Are we just looking at incremental updates in single-thread performance and "just add cores"™, or can we expect a jump or a new technology in the next few years?


#4

Well, cache sizes have been increasing over CPU generations. For example, an FX-8350 has 4×2 MB of L2 cache and 8 MB of L3 cache, whereas a Ryzen 7 has 768 KB of L1, 4 MB of L2 and 16 MB of L3.
But right now I think "throwing cores at it" solves many performance-related issues, and even games are making use of more and more cores (AC: Origins). I don't think we'll see significant jumps in this area, simply because the resulting chip would be too expensive to justify the gains.
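"Throwing cores at it" only helps as far as the workload's parallel fraction allows, which is just Amdahl's law. A minimal sketch, with illustrative parallel fractions (not figures for any particular game or application):

```python
# Amdahl's law: speedup from adding cores is capped by the serial
# fraction of the workload. The fractions below are illustrative.
def amdahl_speedup(parallel_fraction: float, cores: int) -> float:
    """Ideal speedup when `parallel_fraction` of the work scales
    across `cores` and the remainder stays serial."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

for p in (0.50, 0.90, 0.99):
    print(f"{p:.0%} parallel: "
          f"{amdahl_speedup(p, 8):.1f}x on 8 cores, "
          f"{amdahl_speedup(p, 64):.1f}x on 64 cores")
```

A 50%-parallel workload never gets past 2x no matter how many cores you add, which is why games historically scaled so poorly with core count and why the recent multi-threading improvements matter.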


#5

I'd also wondered about this, especially as you find servers with terabytes of main memory.
And yet, across the dozens of SKUs out there, none seem to push past megabytes of on-die cache.

Perhaps it's a modularity thing: one can scale racks of DIMMs, and each customer can choose their own level.

Perhaps now that Intel is getting serious with Optane in the storage game, they may look at different ways to innovate in their CPU range (now that the panic from Zen's launch is mostly over) rather than just iterating (tick, tock, tock, "REFRESH!").


#6

Fair enough, that makes sense. I suppose we have to wait for silicon fab progress to make the additional close memory worthwhile.


#7

Yes, I've always been a bit of an AMD fanboi, but I'm so pleased that they've raised the game for everyone in the CPU and graphics space. I wonder if there would be customers for chips with more on-die memory, just not enough of them to cover the costs.

Perhaps their new multi-chip packaging will lead to additional chips with more cache, like the… was it Crystal Well, Intel's integrated graphics with the eDRAM?