This could be the end of DIMMs for mobile computing, with Intel following Apple's lead. While this isn't good for upgradeability or repairability, it can mean huge gains in performance for iGPUs and other memory-intensive tasks. As a techie and a user of Apple Silicon (I use all platforms, Linux included), I find it very exciting to see what this die-packaging technology can bring to the x86 architecture. I'm imagining a Microsoft Surface Pro as the perfect candidate for this: huge iGPU and memory-bandwidth gains on a platform that's already NOT upgradable or easily repairable.
What's everyone else's take on this? Do you also think this is the future, and what kind of performance do you think we will get from the iGPU? Let me know in the comments below.
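To put a rough number on the potential iGPU gains, here is a back-of-envelope peak-bandwidth comparison. The configurations are assumptions for illustration (a dual-channel DDR5-5600 SO-DIMM setup vs. a hypothetical 128-bit on-package LPDDR5X-7500 setup); Intel hasn't confirmed actual speeds or bus widths.

```python
# Back-of-envelope peak-bandwidth comparison.
# Peak bandwidth (GB/s) = transfers per second (MT/s) * bytes per transfer / 1000.
def peak_bandwidth_gbs(transfer_rate_mts, bus_width_bits):
    return transfer_rate_mts * (bus_width_bits // 8) / 1000

ddr5 = peak_bandwidth_gbs(5600, 128)     # dual-channel DDR5-5600 SO-DIMMs (128-bit total)
lpddr5x = peak_bandwidth_gbs(7500, 128)  # hypothetical on-package LPDDR5X-7500, 128-bit

print(f"DDR5-5600 dual channel: {ddr5:.1f} GB/s")    # 89.6 GB/s
print(f"LPDDR5X-7500 128-bit:   {lpddr5x:.1f} GB/s")  # 120.0 GB/s
```

Since an iGPU is usually bandwidth-starved, even this ~34% jump (more if Intel widens the bus, as Apple did) would translate fairly directly into graphics performance.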
I think this is a welcome evolution. Mobile PC and smartphone tech are converging in many ways; memory technologies like this started with smartphones and have existed for a long time. Opinion about Apple in general may be polarised, but it seems the PC guys need Apple to kick their ass for a change. It's happening again.
I haven't seen this news elsewhere, so this post breaks it for me, in typical Intel fashion. Here is what I think will happen:
For ultra-portable notebooks, the processor package will be used as-is, without additional SO-DIMMs or Dell's version of DRAM modules. Also, since many memory dies can be packaged into a single chip, the total capacity could well exceed 16GiB in a two-chip configuration.
And then Intel could also allow additional SO-DIMMs. In that case, there are two further possibilities for how the on-package LPDDR5X can be used: a) as part of the system memory; b) as a level-4 cache, as you suggested. Either way, there seem to be pros and cons.
Also worth mentioning is the advantage for the PC side of things in adopting on-package memory.
Apple is so set in stone about its so-called "unified memory architecture". It basically turns everything (laptops, desktops, and their version of "workstations") into a giant iPhone… Apple let go the team that proposed a server SoC a few years ago, so realistically I don't anticipate anything interesting happening on Apple desktops/workstations in the next few years.
On the PC side, given its economies of scale, if Intel/AMD have the will, lots of interesting things could happen across laptops, desktops, and HEDTs by adopting on-package memory!
Looking at the Intel patent mentioned in Tom's Hardware, the LLC (likely the L4 cache) is inside or connected to the compute tile, while the iGPU tile is distinctly separate. Coupled with the news from Phoronix you linked, it makes sense that the LLC cannot be shared by the iGPU.
Also, the DDR5/LPDDR4/LPDDR5 controller is distinctly separate from the above-mentioned "LLC" in the patent's diagram, so we can rule out the possibility that the LPDDR5X DRAM in today's news is used for the rumoured L4 cache. Worth mentioning too: if the rumoured L4 cache were attached through the IMC, there would be no reason the iGPU couldn't utilise it, which would contradict the Phoronix article's finding that Meteor Lake's iGPU cannot use the L4 cache.
So what can we conclude from this speculation? I would bet the LPDDR5X memory in today's story is not the L4 cache.
This is a good one. I was thinking about exactly this when I posted above, though my impression was from last year or the year before, prior to the Sapphire Rapids launch.
I was engaged in a long conversation elsewhere about what a possible Apple silicon SoC for the Mac Pro could look like (well before its launch this year).
I think a consumer version of HBM/Sapphire Rapids is possible in the future.
It prevents consumers from upgrading memory or using higher capacities. More memory is locked to a higher-priced CPU, and the additional die space required means you pay a lot for a minimal amount of soldered memory.
Want a 16 to 32GB upgrade? Pay +$1000 for a new CPU, when it can be had for $100 now.
Keeping memory capacity low just means limiting use cases and innovation to low-memory applications. We need more memory, not less.
On-package memory is anti-consumer, anti-choice, and pro sales promotion. The carrot is a minimal amount of extra bandwidth (an additional memory channel does that too) and lower latency.
This makes the CPU corporation your only RAM vendor, and you and/or your SI/OEM have to comply with Intel/Apple/whoever's policy.
On the one hand, I'm an iGPU enthusiast, and this will definitely make them perform a lot better. I can't stand laptops with dGPUs: they produce way too much heat, become way too bulky, and cost battery life, which on non-Apple laptops is already pathetic. So I welcome it.
On the other hand, I use a lot of RAM. 32GB is the bare minimum I can get by with, and Intel is definitely going to charge Apple-tier money for these things. Maybe a tiered memory solution could work, where the processor has the 16GB integrated as a sort of L4 cache the iGPU gets priority access to, with DIMMs on the motherboard to let me actually run my VMs and applications.
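The tiered idea above could look something like this minimal sketch: hot, bandwidth-sensitive allocations (e.g. iGPU buffers) are served from the fast on-package pool first, while bulk allocations (VMs, applications) spill to DIMM capacity. The class, pool sizes, and policy are all hypothetical, just to illustrate the allocation priority; real hardware would do this in the memory controller, not software.

```python
# Hypothetical sketch of a two-tier memory policy:
# a small fast on-package pool plus a large slow DIMM pool.
class TieredMemory:
    def __init__(self, fast_gib, slow_gib):
        self.fast_free = fast_gib  # on-package LPDDR5X pool (assumed 16GB)
        self.slow_free = slow_gib  # SO-DIMM pool (whatever you install)

    def alloc(self, size_gib, prefer_fast=False):
        """Return the tier that served the request, or None if neither fits."""
        if prefer_fast and self.fast_free >= size_gib:
            self.fast_free -= size_gib       # iGPU-style hot buffer
            return "on-package"
        if self.slow_free >= size_gib:
            self.slow_free -= size_gib       # bulk capacity goes to DIMMs
            return "dimm"
        if self.fast_free >= size_gib:       # last resort: spill into fast pool
            self.fast_free -= size_gib
            return "on-package"
        return None

mem = TieredMemory(fast_gib=16, slow_gib=64)
print(mem.alloc(4, prefer_fast=True))  # iGPU framebuffer -> "on-package"
print(mem.alloc(32))                   # VM working set   -> "dimm"
```

The design choice here is that capacity-hungry workloads never evict the iGPU from the fast tier, which is exactly the "priority access" behaviour suggested above.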
I think this makes a lot more sense in the context of Ultrabooks, where soldered RAM is not uncommon (presumably MX is just a more expensive U series). It's bad for upgradability, but I don't think it makes the status quo any worse.
Though, I do wish that if they push this upward (i.e. -H, -S), they would allow the on-package RAM to be used in a manner similar to its HBM counterpart.