CES 2025 MegaThread

Razer have dissed Intel with their New 16" Lappy. Team Red have the Plan… Times are a Changing :slight_smile:

Zotac’s upped the “this GPU protects you from the way it’s put together” marketing for 5080s and 5090s. Not sure they’ve much in the way of alternatives with Nvidia requiring 12V-2x6 and, even if dual is allowed, there isn’t a way for a card to ask for two 300 W links instead of one 600 W link.

Only problem is most boards limit dual DIMMs per channel to 3200 MT/s for ECC UDIMMs,
so you’re comparing 128 GB @ 3200 to 96 GB @ 5000+ MT/s.

Still a nicety, but with a big asterisk.
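To put rough numbers on that trade-off, here’s a quick Python sketch of theoretical dual-channel peaks only (assuming a 64-bit bus per channel; real-world throughput is lower and the configs are just illustrative):

```python
# Theoretical peak bandwidth for a dual-channel DDR5 setup; real-world
# throughput is noticeably lower, so treat these as ceilings.

BYTES_PER_TRANSFER = 8   # 64-bit data bus per channel
CHANNELS = 2             # typical dual-channel desktop board

def peak_gbs(mt_per_s: int) -> float:
    """Theoretical peak bandwidth in GB/s for a given transfer rate."""
    return mt_per_s * BYTES_PER_TRANSFER * CHANNELS / 1000

for label, rate in [
    ("128 GB ECC @ 3200 MT/s", 3200),
    (" 96 GB     @ 5600 MT/s", 5600),
    (" 96 GB     @ 6000 MT/s", 6000),
]:
    print(f"{label}: ~{peak_gbs(rate):.0f} GB/s peak")
```

Roughly 51 GB/s versus 90-96 GB/s, which is the asterisk.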

Doesn’t look like this is going to be the case from a PC World Interview that was just posted. https://youtu.be/2p7UxldYYZM?si=of_ixWdjqDZXrWTK

We did see a couple of 7900 XTXs with the new 12VHPWR connector, but those were from ASRock only and were blower*-style cards IIRC. So that was probably up to the AIBs, but at least it’s not on the reference cards from AMD.

EDIT: spelling mistake marked w/ *

1 Like

WAT?

Two 64 GB ECC UDIMMs would be 1 DIMM per channel and 128 GB of total RAM.

7900 XT and XTX Creator, yep. Both are blower-style with a 12V-2x6 at the (usually) cool end. Seems like at least one of the 9070 XT Steel Legend and Taichi is 12V-2x6, but the CES writeups I’ve seen are conflicting. Should be clear once ASRock posts the specs. ¯\_(ツ)_/¯

To be a bit more specific, PC World’s connector segment starts at 15:24. The pull quote I’d take is: “When you look at the risk reward benefit, there’s risk there. That’s been proven. And the reward is not really a big reward.”

From what I can tell, most of the reward is probably the ability to hook people who care enough to assume that, since 12VHPWR/12V-2x6 is newer, it must be better than 8-pin, but who don’t care enough to actually check. Seems like a niche demographic.

4 Likes

It isn’t dangerous. And repeating it over and over again doesn’t change reality.

But hey, I am open for evidence that changes my mind.

Show me any source that proves that 12VHPWR is more “dangerous” (we would have to define what dangerous means: card broken, or fire?) compared to PCI-E 8-pin and I am happy to change my mind.

Again, no anecdotal evidence from Reddit, no stupid shit like stiff CableMod PCBs, no Asus 180°-turned connector which led to user error, no other user error that is gone now due to the new sense pins.

True, but since AMD never had any even half-decent reference cards, it is hard to compare.
I could compare Nvidia reference with something like an ASRock Radeon RX 9070 XT Taichi.

In my book, the risk is non-existent and the reward is that I get a way nicer cable to work with.
Sure, that is not a big reward, but still. It is the same reason why I switched from BeQuiet to Seasonic PSUs. I just can’t be bothered anymore with those stiff sleeved 24-pin cables from BeQuiet. Compared to Seasonic cables, they are a PITA to work with.

Totally agree, there’s nothing dangerous about 12 volts and an 8-pin, 10-pin, or 24-pin connector. It’s the amps needed and the metal-to-metal contact of the pins touching, all of which is well tested in PCs.

Not like we’re using a PC power supply to jump-start a truck yet :stuck_out_tongue:

1 Like

Straight from PCI-SIG:

Dear PCI-SIG Member,
Please be advised that PCI-SIG has become aware that some implementations of the 12VHPWR connectors and assemblies have demonstrated thermal variance, which could result in safety issues under certain conditions. Although PCI-SIG specifications provide necessary information for interoperability, they do not attempt to encompass all aspects of proper design, relying on numerous industry best-known methods and standard design practices. As the PCI-SIG workgroups include many knowledgeable experts in the field of connector and system design, they will be looking at the information available about this industry issue and assisting in any resolution to whatever extent is appropriate.

As more details emerge, PCI-SIG may provide further updates. In the meantime, we recommend members work closely with their connector vendors and exercise due diligence in using high-power connections, particularly where safety concerns may exist.

Thank You,

Find me any other connector in computing history that got a “thermal issue warning” from the specifying body.

4 Likes

Yeah, no kidding. In the off-and-on decade and some I was doing power delivery design, I never came across anything like 12VHPWR. It’s abundantly well known that connectors get hot and need to be margined. It’s in the datasheets, and connector manufacturers write application guides that, at times, are entirely devoted to mating. In the PC space, the practice of ensuring power pins connect before signal pins do has been adopted since, uh, at least USB 1.0 in 1996.

Molex puts the below in their Micro-Fit connector datasheets. I’m quoting this example because Corsair talked with Nvidia about dual EPS as a triple 8-pin alternative and then implemented the shrink with the Micro-Fit end of their Type 5 12VHPWR cables for ATX 3.0 supplies.

As examples of other consumer-facing power connections, EV chargers monitor connector temperature and thermal throttle. With photovoltaics it’s a thing to educate people to regularly inspect and maintain or replace connectors that are getting hot. ATX and SFX(-L) supplies don’t exactly blow their exhaust over the IECs, but that’s the next closest thing to 12VHPWR in a PC and it runs a ~2.5x margin. For 8-pins a 2x margin is entry level, and supplies above Bronze usually move that up a bit by putting more copper into the cables.
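For anyone curious where those margin figures come from, here’s a back-of-the-envelope sketch. The per-pin ampacities are ballpark assumptions (they depend on the terminal, wire gauge, and allowed temperature rise), not datasheet quotes:

```python
# Rough connector-margin check: how much current the power pins could carry
# versus the power the connector is actually specified for.
# Per-pin ampacities are ballpark assumptions, not datasheet quotes.

V_RAIL = 12.0  # volts

def margin(rated_power_w: float, power_pins: int, amps_per_pin: float) -> float:
    """Pin capacity divided by the connector's rated power."""
    return (power_pins * amps_per_pin * V_RAIL) / rated_power_w

# 8-pin PCIe: 150 W rating, 3 live 12 V pins, ~8 A per pin assumed
print(f"8-pin PCIe: ~{margin(150, 3, 8.0):.1f}x")

# 12V-2x6 / 12VHPWR: 600 W rating, 6 live 12 V pins, ~9.5 A per pin assumed
print(f"12V-2x6:    ~{margin(600, 6, 9.5):.1f}x")
```

With those assumptions you get roughly 1.9x for the 8-pin and about 1.1x for 12V-2x6, which is the gap being complained about.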

Seems to me there’s a big-lie type of psychological response at times as a result: PCI-SIG couldn’t fail this hard, and Nvidia wouldn’t be crappy enough to stay with an inadequate margin, so 12+4 must be OK.

5 Likes

The quality of reference cards aside, the point I was making is that AMD is not forcing the use of the connector onto all of the AIBs. It is a bit telling, imo, that there are only a couple of cards in the 7000 series, and so far in the shown 9000 series, that use the 12+4-pin (12V-2x6) connector.

I think there is also a difference between what is approved by a governing body for a standard and what happens in end-user settings. My anecdotal evidence is that I would rather not have to deal with a fragile connector when more robust connectors already exist.

2 Likes

I mean, we currently have 48 GB UDIMMs

I am unsure what boards will support 64 GB UDIMMs

an additional 32 GB is always nice, but I’m struggling to think of a configuration that could use 128 GB @ 6000 MT/s, as the fastest thing using UDIMMs is the EPYC 4564P

Perhaps as a small virtualization box, but EPYC on AM5 is only available in 1x PCIe 5.0 x16 or 2x PCIe 5.0 x8 configurations from board manufacturers, limiting your GPU options.

You’re then forced to buy an Intel Flex card to allow GPU partitioning, or at most 2 GPUs for passthrough.

As a storage appliance, I guess it makes sense, as you can now support deduplication on a 20 TB NVMe pool with 128 GB of ECC RAM at fast speed.
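As a rough illustration of why the RAM matters there, here’s a dedup-table sizing sketch assuming ZFS-style dedup with the commonly cited ~320 bytes per DDT entry; the recordsize values are assumptions:

```python
# Rough dedup-table (DDT) RAM estimate, assuming ZFS-style dedup, ~320 bytes
# per DDT entry, and one entry per unique block. Actual usage also depends
# on the dedup ratio and metadata overhead.

DDT_ENTRY_BYTES = 320

def ddt_ram_gib(pool_tb: float, avg_block_kib: float) -> float:
    """Approximate DDT size in GiB for a pool of mostly unique blocks."""
    blocks = pool_tb * 1e12 / (avg_block_kib * 1024)
    return blocks * DDT_ENTRY_BYTES / 2**30

print(f"20 TB pool, 128 KiB records: ~{ddt_ram_gib(20, 128):.0f} GiB of DDT")
print(f"20 TB pool,  64 KiB records: ~{ddt_ram_gib(20, 64):.0f} GiB of DDT")
```

That lands around 45-90 GiB of table for a 20 TB pool, so 128 GB of ECC starts looking reasonable rather than excessive.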

But again, until there’s a board supporting dual DIMMs per memory channel at 6000 MT/s, I don’t see the big deal.

When memory manufacturers release awesome things before industry support, they tend to axe them quickly without remembering their past mistakes. Ironic since they make…

If I remember correctly, pretty much all modern consumer board manufacturers announced throughout 2024 (link, link) that they would support 256 GB configurations once 64 GB modules arrive. I’ve even checked my Gigabyte B650M spec; the same is reflected there. So I guess everything is ready for these modules. In the worst case, there’s going to be another round of BIOS updates. A 9950X with 128 GB would be a great programming station for large projects.

1 Like

That’s what I’ve seen for a while with AM5 boards, guessing it’s probably just an AGESA update AMD made some time ago. Haven’t paid much attention to LGA1700 or 1851.

For some of the numerical stuff we do, using 2x64 is no problem. A lot of the time I can write code to stay under 4 GB/core (64 GB total for the 9950X, 7950X3D/4584PX, and 7950X/4564P), but there are other workloads that go to 16-32 GB/core and there’s just no way around it.

For example, there’s one I’ve been running a lot lately which, after being rewritten to minimize memory consumption, uses 27 GB/core. So going from 2x48 to 2x64 should give a ~25% throughput increase because a fourth core can be brought in to fully utilize DDR5-5600 or DDR5-6000 bandwidth. I haven’t benched it, but in quad-DIMM three cores should max out DDR5-4800ish bandwidth, so having 192 or 256 GB seems unlikely to help.
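As a sanity check on the capacity side, a minimal sketch (the 8 GB OS reserve is an assumption):

```python
import math

# How many 27 GB/core workers fit in RAM. The 27 GB/core figure is the
# workload described above; the 8 GB OS reserve is an assumption.
PER_CORE_GB = 27
OS_RESERVE_GB = 8

def workers_that_fit(total_gb: int) -> int:
    return math.floor((total_gb - OS_RESERVE_GB) / PER_CORE_GB)

for total_gb in (96, 128):
    print(f"{total_gb} GB total -> {workers_that_fit(total_gb)} workers")
```

Going from three to four workers is where the gain comes from; the ~25% (rather than a full +33%) presumably reflects the fourth core not scaling perfectly once memory bandwidth is close to saturated.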

1 Like

But that’s not 6000 MT/s
that’s only 3600 at most right now

and 3600 MT/s 64GB UDIMMs have been around for long enough that they are cheaper per DIMM than 48GB 5600 MT/s UDIMMs


Literally zero, no? AFAIK, up until now, 64GB modules were only possible to manufacture in a Registered variant (RDIMM), not UDIMM

Edit: ok, one supposedly exists KVR64A52BD8-64, likely only via B2B channels.

2 weeks ago, there were off-brand copies of the Kingston UDIMM available.
They sold out as soon as the Crucial announcement hit.

But I could not find a QVL from any of the AM5 MoBos showing 5200 or faster support for the 64 GB UDIMMs.
Went back and realized they were 3600 MT/s, so I went with a pair of 5600 MT/s 48 GB UDIMMs, as I’m doing an efficiency-driven build and didn’t want more DIMMs drawing more power.

For reference:
48 GB 5600 MT/s UDIMMs were $180 a piece
64 GB 3600 MT/s UDIMMs were $160 a piece

EDIT:
looking at the saved listings again, it was 2x32’s and not a single 64GB like pictured

That’s 3600 MHz officially supported. You can always try your luck with XMP/EXPO or manual tuning; many people have already managed 128/192 GB at over 5000 MHz.
I plan on grabbing 4x64 GB, and if it manages to do anything past 5000 MHz without much fussing then I’d be more than happy.

UDIMMs? No, they have not. 64GB UDIMMs are just being announced.

It was supposedly the DIMM that motherboards were tested with back in 2023, but it never reached retail afaik.

I could use that, as more and faster memory helps the workload, as do 5+ GHz core speeds. Having 16 cores/32 threads all boosting over 5 GHz also lets the work go better and faster. Sure, it can work at slower speeds, but then the work goes slower as well.

I haven’t seen dual-DIMM-per-channel ECC clocking up that high

Non-ECC, it’s temperamental and definitely playing the silicon lottery running 4 DIMMs overclocked. I’ve done it up to 256 GB loadouts at 5000 MT/s, but again that was non-ECC and for a gamer setup.

On EPYC 9004 we’ve gotten 4800 MT/s ECC with 384 GB loadouts, but that’s 1 DIMM per channel.

Would definitely be interested in the stability of 4 ECC UDIMMs at 5000 MT/s. That’s roughly 40% faster than the MoBos are publishing (3600 MT/s with dual ECC UDIMMs per channel).