Intel Meteor Lake with on-package LPDDR5X - The end of DIMMs could be near

Hello Everyone,

Interesting hardware news: Intel has just announced an Intel Meteor Lake P-SKU that features on-package LPDDR5X, giving the platform 120 GB/s of memory bandwidth using 16GB of Samsung's LPDDR5X-7500.
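Quick sanity check on that 120 GB/s number: LPDDR5X-7500 runs at 7500 MT/s, so the figure works out if the package exposes a 128-bit memory interface. The bus width here is my assumption for illustration, not something Intel has stated:

```python
# Rough peak-bandwidth estimate -- the 128-bit bus width is an assumption, not an Intel spec.
def peak_bandwidth_gbs(transfer_rate_mtps: float, bus_width_bits: int) -> float:
    """Peak theoretical bandwidth in GB/s = MT/s * (bus width in bytes) / 1000."""
    return transfer_rate_mtps * (bus_width_bits / 8) / 1000

# LPDDR5X-7500 on an assumed 128-bit total interface:
print(peak_bandwidth_gbs(7500, 128))  # -> 120.0 GB/s
```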

This could be the end of DIMMs for mobile computing, with Intel following Apple's lead. While this isn't good for upgradeability or repairability, it can mean huge gains in performance for iGPUs and other memory-intensive tasks. As a techie and a user of this approach on Apple Silicon (I use all platforms, Linux included), I'm very excited to see what this packaging technology can bring to the x86 architecture. I'm imagining a Microsoft Surface Pro as the perfect candidate: huge performance gains in the iGPU and other memory-intensive tasks on a platform that's already NOT upgradable or easily repairable.

What's everyone else's take on this? Do you also think this is the future, and what kind of performance do you think we will get from the iGPU? Let me know in the comments below.

Cheers!

I think this is a welcome evolution. Mobile PC and smartphone tech are converging in many ways. This kind of memory tech started with smartphones and has existed for a long time. Opinion about Apple in general may be polarised, but it seems the PC guys need Apple to kick their ass for a change. It's happening again.

It's more of a Level 4 cache.

Mmmmm… from what I've seen it will be used on motherboards WITHOUT RAM present, so it's not part of the cache tree. It is the system's addressable RAM.

Got a link for the mobos?

16GB seems a little low for laptops going forward

I do see it fitting for an SBC though

I haven't seen this news elsewhere, so this post breaks the news for me. In typical Intel fashion, here is what I think will happen:

For ultra-portable notebooks, the processor package will be used as-is, without additional SO-DIMMs or Dell's version of DRAM modules. Also, multiple memory dies can be packaged into a single chip, so the total capacity could well be higher than 16GiB with a two-chip config.

And then Intel could also allow additional SO-DIMMs. In that case, there are two further possibilities for how the on-package LPDDR5X can be used: a) as part of the system memory; b) as a level-4 cache, as you suggested. Either way, there seem to be pros & cons.

Update:

Seems Tom’s Hardware carries the news: Intel Demos Meteor Lake CPU with On-Package LPDDR5X | Tom's Hardware

Also worth mentioning: the advantage for the PC side of things in adopting on-package memory.

Apple is so set in stone about its so-called "unified memory architecture". It basically turns everything (laptops, desktops and their version of "workstations") into a giant iPhone… Apple let go the team that proposed a server SoC a few years ago. So realistically I don't anticipate anything interesting happening on Apple desktops/workstations in the next few years.

On the PC side, thanks to its economies of scale, if Intel/AMD have the will, lots of interesting things could happen on laptops, desktops and HEDTs by adopting on-package memory!

There was a rumor about an "Adamantine" L4 cache (LDM) a while back:

And Phoronix also spotted the presence of L4 on Meteor Lake in one of the patches (and that the iGPU can no longer use the LLC):

On Xeon Max, it is possible to configure the on-package HBM in either HBM-only mode (the 4x 16GB HBM stacks serve as 64GB of system memory), flat mode (the HBM appears as separate NUMA node(s)), or cache mode (the HBM acts as an L4).
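For anyone who hasn't played with flat mode: each HBM region simply shows up to the OS as an extra (usually CPU-less) NUMA node next to the regular DDR nodes. A minimal Linux sketch that walks standard sysfs paths to see that layout (nothing Xeon Max-specific is assumed here):

```python
# Minimal sketch: list NUMA nodes and their memory sizes on Linux.
# In a "flat mode" setup, the HBM regions would appear here as extra,
# typically CPU-less, NUMA nodes alongside the regular DDR nodes.
from pathlib import Path

for node in sorted(Path("/sys/devices/system/node").glob("node[0-9]*")):
    meminfo = (node / "meminfo").read_text()
    total_kb = next(int(line.split()[3]) for line in meminfo.splitlines()
                    if "MemTotal" in line)
    cpus = (node / "cpulist").read_text().strip()
    print(f"{node.name}: {total_kb / 1024 / 1024:.1f} GiB, CPUs: {cpus or 'none'}")
```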

I wonder if this is a consumer version of what they did in Xeon Max, i.e. configurable (by OEM? By user?) as either system memory or as L4.

Looking at the Intel patent mentioned in Tom's Hardware, the LLC (likely the L4 cache) is inside, or connected to, the compute tile. The iGPU tile is distinctly separate. Coupled with the Phoronix news you linked, it makes sense that this LLC cannot be shared by the iGPU.

Also, the DDR5/LPDDR4/LPDDR5 controller is distinctly separate from the above-mentioned "LLC" in the patent's diagram, so we can rule out the LPDDR5X DRAM in today's news being used for the rumoured L4 cache. Also worth mentioning: if the rumoured L4 cache were behind the IMC, there would be no reason the iGPU couldn't utilise it, which would contradict the Phoronix article's finding that Meteor Lake's iGPU cannot use the L4 cache.

So what can we conclude from this speculation? I would bet the LPDDR5X memory in today's story is not the L4 cache.

This is a good one. I was thinking about exactly this when I posted above, though my impression was from last year or the year before, before the Sapphire Rapids launch.

I was engaged in a long conversation elsewhere about what a possible Apple Silicon SoC for the Mac Pro could look like (way before its launch this year).

I think a consumer version of HBM/Sapphire Rapids is possible in future.

Finally, some more details

It looks like Intel is aiming to replace DIMM slots for the new MX SKUs (for ultrabooks) to save some PCB space.

I view this critically.

It prevents consumers from upgrading memory or using higher capacities. Higher memory capacity is locked to a higher-priced CPU, and the additional die space required means you pay a lot for a minimal amount of soldered memory.
Want a 16 to 32GB upgrade? Pay +$1000 for a new CPU. It can be had for $100 now.

Keeping memory capacity low just means limiting use cases and innovation to low-memory applications. We need more memory, not less.

On-die memory is anti-consumer, anti-choice, and pro sales promotion. The carrot is a modest amount of extra bandwidth (an additional memory channel does that too) and lower latency.

This allows the CPU corporation to be your only RAM vendor. You and/or your SI/OEM have to comply with Intel's/Apple's/whoever's policy.

On the one hand, I'm an iGPU enthusiast, and this will definitely make them perform a lot better. I can't stand laptops with dGPUs; they produce way too much heat, become way too bulky, and come at the cost of battery life, which on non-Apple laptops is already pathetic. So I welcome it.
On the other hand, I use a lot of RAM. 32GB is the bare minimum I can get by with, and Intel are definitely going to charge Apple-tier money for these things. Maybe a tiered memory solution could work, where the processor has the 16GB integrated as a sort of L4 cache that the iGPU gets priority access to, with DIMMs on the motherboard to let me actually run my VMs and applications.

I think this makes a lot more sense in the context of ultrabooks, where soldered RAM is not uncommon (presumably MX is just a more expensive U series). It's bad for upgradability, but I don't think it makes the status quo any worse.

Though, I do wish that if they start pushing this upward (i.e. -H, -S), they would allow this on-die RAM to be used in a similar manner to its HBM counterpart.