DDR5 UDIMMs are designed to take a 5 V input, while RDIMMs require a 12 V input. This is in contrast to all previous generations, where both UDIMMs and RDIMMs shared the same input voltage. It makes the two module types electrically incompatible in the same socket, which is mostly a downside for HEDT platforms.
Why was it designed this way? Is there a downside to giving UDIMMs a 12 V input as well?
RDIMM power usage is up for DDR5; many of the initial SKUs had 25 W power stages, which would not work well if the DIMM socket were 5 V (rough current math below). With the newest multiplexed RDIMMs, that figure is going to go much higher, with 80-component DIMMs that need to be socket compatible with the more pedestrian RDIMMs.
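To put that in current terms, a quick Python sketch (the 25 W figure comes from the SKUs above; the 50 W figure is my own stand-in for a heavily loaded multiplexed RDIMM, not a spec number):

```python
# Back-of-the-envelope: input current a DIMM pulls at a given board power.
# 25 W matches the early RDIMM SKUs mentioned above; 50 W is a made-up
# stand-in for a heavily loaded multiplexed RDIMM.
def dimm_input_current(power_w: float, supply_v: float) -> float:
    """Input current in amps for a DIMM drawing power_w from supply_v."""
    return power_w / supply_v

for watts in (25, 50):
    print(f"{watts} W DIMM: {dimm_input_current(watts, 5):.1f} A at 5 V, "
          f"{dimm_input_current(watts, 12):.1f} A at 12 V")
# 25 W DIMM: 5.0 A at 5 V, 2.1 A at 12 V
# 50 W DIMM: 10.0 A at 5 V, 4.2 A at 12 V
```

Dropping from 5 A to about 2 A per DIMM eases the load on the socket’s supply pins and the board’s power planes, which is the usual argument for the higher rail.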
While a lot of articles I’ve come across say things like this, I’m not sure it’s actually correct. It might take a reading of the JEDEC specs to tell.
All of Rambus’s RDIMM PMICs have a 4.25-15 V supply range.
All parts in TI’s PMIC lineup support both UDIMM and RDIMM from 4.5-15 V.
All parts in MPS’s PMIC lineup have a 4.25-15 V supply range, though there’s a variant of one of them (MP5431C) that’s 4.25-5.5 V.
Renesas’s P8910 for RDIMMs takes 4.25-15 V plus a 3.0-3.6 V input. The P8911 for UDIMMs is 4.25-5.5 V.
For the why of the design, you might need to have been there when JEDEC was writing DDR5. Full DDR5 PMIC datasheets all seem to require NDAs, but there’s a tendency for buck converters to be more efficient at lower supply voltages, since the step-down ratio to the ~1.1 V DRAM rails is smaller (first-order sketch below). It’s minor, and probably not an overall net win off an ATX supply’s secondary rails, but it may have been a consideration for SODIMMs.
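As a first-order illustration of that efficiency point (textbook ideal-buck equations, nothing from an NDA’d datasheet; the 1.1 V output is just a stand-in for a DDR5 VDD rail):

```python
# Ideal buck converter: duty cycle D = Vout / Vin. Hard-switching loss
# scales roughly with Vin * Iout * f_sw, so for the same ~1.1 V output,
# a 12 V input swings 2.4x the voltage per edge that a 5 V input does.
def duty_cycle(v_out: float, v_in: float) -> float:
    return v_out / v_in

V_OUT = 1.1  # stand-in for a DDR5 VDD rail
for v_in in (5.0, 12.0):
    print(f"Vin = {v_in:4.1f} V -> D = {duty_cycle(V_OUT, v_in):.1%}, "
          f"switched voltage {v_in / 5.0:.1f}x the 5 V case")
# Vin =  5.0 V -> D = 22.0%, switched voltage 1.0x the 5 V case
# Vin = 12.0 V -> D = 9.2%, switched voltage 2.4x the 5 V case
```

Whether that outweighs the I²R savings from halved input current depends on the specific design, which is why I’d call it minor.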
ITX and (m)ATX motherboards’ power configuration may also have been a consideration. With EPS usually dedicated to the CPU, that leaves the 24-pin, which is PSU-constrained to something like 120 W total on 3.3 V and 5 V and maybe 140 W on 12 V. A couple of dGPUs and two entry-level USB power delivery ports can pull close to 200 W off 12 V, so putting UDIMMs on 12 V as well is maybe not helpful (rough numbers below).
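Rough numbers for that scenario; every figure below is my own illustrative assumption (75 W of slot power per dGPU with no aux connector, entry-level PD at 9 V / 3 A), not a measurement or a spec limit:

```python
# Hypothetical 12 V budget on a 24-pin-fed ITX/mATX board; all values
# are assumptions for illustration, not measured or specified numbers.
BUDGET_12V_W = 140    # rough 12 V capacity via the 24-pin
DGPU_SLOT_W = 75      # PCIe slot limit per dGPU with no aux power
USB_PD_PORT_W = 27    # entry-level USB PD port, 9 V at 3 A
UDIMM_W = 10          # ballpark per-UDIMM draw

load_12v = 2 * DGPU_SLOT_W + 2 * USB_PD_PORT_W
print(f"12 V draw: {load_12v} W against a ~{BUDGET_12V_W} W budget")
print(f"Four UDIMMs on 12 V would add another {4 * UDIMM_W} W to the same rail")
# 12 V draw: 204 W against a ~140 W budget
# Four UDIMMs on 12 V would add another 40 W to the same rail
```

Keeping UDIMMs on 5 V puts their draw on the 24-pin’s otherwise lightly used 3.3/5 V budget instead of piling more onto 12 V.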