I'm sorry... How many GHz? Ryzen 7000

I find the thermal boosting algorithms of Zen4 to be quite interesting, but also of note was the core-to-core delta which could indicate a need for some refinement with their new IHS.

In any case, Zen4 is looking to be a tweaker's paradise, but I think you're right that novice DIYers probably won't be able to extract the full performance or efficiency out of their chips (much like Bulldozer). I wonder if 420mm AIOs are going to gain market share going forward; that would be my choice for a Zen4 R9.

I think part of AMD’s challenge is to educate Zen4 users that TjMax is fine (which it actually is). The goal of cooling the CPU is to prevent thermal throttling, not so much to push the temperature way below TjMax, because the latter is a far greater challenge than simply staying at TjMax and minimizing thermal throttling.

Many kinds of AMD’s partners will be very happy with the AM5/Zen4 launch. Motherboard vendors are one. CPU cooler makers are another. I’m prepared to see exotic cooling solutions (though personally I don’t think they’re absolutely necessary)…

Yes, it can be captured and turned back into electricity. In theory anyways. Would have to integrate a Seebeck generator into the package:
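For a sense of scale, here’s a rough, hedged sketch of what an ideal Seebeck generator could recover. All numbers are my own assumptions (roughly 170 W of package heat, a 95°C hot side, a 40°C cold side, and ZT ≈ 1 for a Bi2Te3-class material), not anything from a real product:

```python
# Back-of-envelope sketch (assumed numbers): how much of a CPU's heat an
# ideal thermoelectric (Seebeck) generator could turn back into electricity.

import math

def teg_max_efficiency(t_hot_k: float, t_cold_k: float, zt: float) -> float:
    """Ideal thermoelectric generator efficiency (Carnot limit times material factor)."""
    carnot = (t_hot_k - t_cold_k) / t_hot_k
    m = math.sqrt(1.0 + zt)
    return carnot * (m - 1.0) / (m + t_cold_k / t_hot_k)

heat_w = 170.0          # assumed package heat, roughly a big Zen4 part under load
t_hot = 95.0 + 273.15   # TjMax-ish hot side
t_cold = 40.0 + 273.15  # assumed cold-side (heatsink) temperature
zt = 1.0                # typical figure of merit for common TEG materials

eta = teg_max_efficiency(t_hot, t_cold, zt)
print(f"Ideal efficiency: {eta:.1%}")            # only a few percent
print(f"Recovered power:  {heat_w * eta:.1f} W") # a handful of watts at best
```

So even in the ideal case you get back only a few watts, which is why this stays firmly in “in theory” territory.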

Isn’t bloatware a big problem with OEM machines?

Maybe it’s because my only desktop PC has been a Dell XPS 8930, but there was a bunch of stuff on the machine that I didn’t put there and would rather not have. Also, adding additional memory and a hard drive was a bit annoying because of the shape of everything. Though I will say I kind of prefer its case to most of the cases I see YouTubers show off in computer builds; I really don’t understand why the glass thing seems so omnipresent.

The main reason I’ve been thinking of trying to build my own PC is that my current one can’t do some of the virtualization stuff I’d like to learn, and I don’t want a bunch of bloatware. Also, ideally I’d like to be running Linux at least part of the time, so paying for a Windows license seems like a waste.

Linux with the ACS override patch can solve most virtualisation issues if the platform is the culprit. Some distributions offer ready-made binary kernels with the ACS patch applied, or in time you can learn to compile your own.
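If you want to check whether the platform really is the culprit before patching anything, a quick way is to look at how the kernel has grouped your devices for passthrough. A minimal sketch (standard sysfs paths on any Linux box with the IOMMU enabled; no ACS patch assumed):

```python
# List IOMMU groups and the PCI devices in each one.
# A device can only be passed to a VM cleanly if its group contains nothing
# else you still need on the host; the ACS override patch exists to split up
# groups that are too coarse.

from pathlib import Path

groups_root = Path("/sys/kernel/iommu_groups")

if not groups_root.exists():
    print("No IOMMU groups found - enable IOMMU (AMD-Vi/VT-d) in BIOS and kernel.")
else:
    for group in sorted(groups_root.iterdir(), key=lambda p: int(p.name)):
        devices = sorted(d.name for d in (group / "devices").iterdir())
        print(f"Group {group.name}: {', '.join(devices)}")
```

If your GPU or USB controller shares a group with half the chipset, that’s when the ACS patch (or a different board) comes into play.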

Bloatware plaguing consumer OEM systems is perhaps a valid concern. I haven’t bought one for a very, very long time. I think it can be resolved in multiple ways: uninstalling the bloatware yourself, running Linux instead of Windows, picking business-line systems instead of ultra-value-oriented consumer junk, etc. So much so that I no longer perceive bloatware as an issue.

OEM systems have their value in tight, neat system integration. Case and chassis design, as you mentioned, is one area where I prefer OEM designs too. For example, check out HP Z workstations, Lenovo ThinkStation, or the equivalents from Dell. The air ducts, well-hidden cables, and every inch of internal space used to its fullest are things I appreciate a lot and that you can hardly replicate with DIY PCs. Not that it’s impossible with DIY PCs, but certainly not by buying parts off Amazon and finishing your build in a couple of days.

The problem I see with OEM systems is that OEMs penny-pinch every feature and rationalize away every bit of modularity in the PC platform. Take the current Lenovo M70 for example. You cannot fit 2x 3.5-inch HDDs in the tower version, although there is plenty of space. We used the predecessor as a cheap local backup NAS for small office and home office clients. It would cost them like 2 cents to add another “U-holder” beneath the drive cage arm. The SATA power is supplied through the mainboard with a non-standard connector, and there is indeed a second plug, but what’s not included is a second power cable. You can only get it as a replacement part from Lenovo, and it’s never available.

I have a Shuttle mini PC from 20 years ago which can fit four 3.5-inch drives (some models can take even more if you really get creative and don’t care about the HDDs’ longevity).

The same goes for wireless. Take the P350 Tiny, for example. If you order a version without wireless connectivity and want to use a miniPCIe WLAN card, you have to order 4 separate replacement parts, take the thing apart, route antennas, and replace parts of the chassis. It costs less for the customer to just reorder the Wi-Fi version.
In the past you plopped in the missing Wi-Fi card and were done.

The P350 tower cannot really be called a workstation. A full-length GPU barely fits. With an HP Z240 I could throw in every consumer card from that era without much worry.


I’ll be the voice of dissent here, because I disagree with this one. Running silicon of this class that hot may be acceptable to the manufacturer, but it’s not ‘fine’. The rule of thumb is that for every 10°C increase in temperature you cut the life of a semiconductor in half. It’s an exponential decay. Yeah, TrenchMOS power MOSFETs built on old lithography are going to last for decades even at 200°C, but a 5nm processor isn’t the same class of hardware as a power device and is going to have a massively reduced lifespan as the tiny metal features diffuse faster and tiny p- and n-dopant wells degrade.
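To put that rule of thumb into numbers, here’s a sketch of the commonly cited heuristic, not a measurement of any specific chip:

```python
# The "halve the lifespan per +10°C" rule of thumb as a simple formula.
# Purely illustrative - real lifetime depends on the failure mechanism,
# activation energy, voltage, duty cycle, and so on.

def relative_lifespan(delta_t_c: float) -> float:
    """Lifespan multiplier for a junction running delta_t_c hotter than the reference."""
    return 0.5 ** (delta_t_c / 10.0)

# e.g. silicon characterized at 65°C but held 30°C hotter all day:
print(relative_lifespan(30))  # 0.125 -> roughly 1/8th of the reference life
```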

We’ve returned to - and far exceeded - the days of Intel’s HeatBurst NetBurst architecture where they just dumped as much power into the core as it could handle to squeeze more clock speed out of it. This is not a good thing.


In before we all go to 45W mobile chips for our desktops.
I am disappointed in AMD throwing power out the window, but anyway GN showed they consume less than the 12900K, so there is that.
IMO everything except the ultra high-end should be designed for peak efficiency. An R5 7600X shouldn’t hit 95°C, ever.

Check out this video regarding ECO mode (around 22 minutes): Ryzen 9 7950X: Power Consumption & ECO Mode Tests - YouTube. It’s surprising how efficient Zen4 is once you limit the wattage.

Huh, hadn’t seen that yet. Now that’s more reasonable.

Zen4 as an architecture was always designed to be super efficient in mobile and server chips, but they had to throw that out the window to be competitive with desktop Alder Lake and Raptor Lake, the latter of which is likely going to take back the performance crown in a couple of months anyway.

Also don’t forget that there are substantial rumors that Zen4 will be used for little cores in the upcoming Zen5 big.LITTLE architecture.

Very tempted as I’m still on 2700X but got way too many outgoings at the mo.

I think anyone who did a little overclocking with the last few generations of CPUs (both Intel and AMD) came to know that there is a hyperbolic power-to-performance curve: past the power/performance “knee” in the graph, ever higher power input yields increasingly less performance improvement (see the toy sketch below).

It seems that AMD is releasing the 79xxX CPUs by default just sufficiently beyond their performance knee to beat the current generation (12th gen) Intel CPUs in absolute performance.

That means in turn that underclocking 79xx CPUs will offer nice performance (comparable to 59xx CPUs) at much lower power consumption.

This is quite an attractive offering for people looking for power efficiency over raw performance.
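Here’s a toy model of that knee with entirely made-up coefficients, just to show the shape: performance scales roughly with clock, but above the knee the voltage needed for each extra step climbs, and dynamic power goes with V²·f, so performance per watt falls off quickly.

```python
# Toy illustration of the power/performance "knee" - all numbers invented.
# Performance ~ frequency; dynamic power ~ V^2 * f, and above the knee the
# required voltage rises with frequency, so efficiency collapses.

def required_voltage(freq_ghz: float) -> float:
    knee = 4.5                     # assumed knee frequency
    base = 1.00                    # assumed voltage at or below the knee
    return base if freq_ghz <= knee else base + 0.12 * (freq_ghz - knee)

for f in (4.0, 4.5, 5.0, 5.5):
    v = required_voltage(f)
    power = v * v * f              # arbitrary units
    print(f"{f:.1f} GHz: {power:.2f} power units, "
          f"{f / power:.2f} perf per power unit")
```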

I would be inclined to believe what you said, as it’s well written, but my thinking runs counter to that.

These new CPUs are 6nm; I thought five, but Wendell said six after talking to the engineers.

That’s only slightly less than the 7nm currently used. And there are people out there running them hard. I have not yet heard of mass amounts of them dying due to heat.

There is more in the new ones for sure, but I don’t think it will meaningfully impact life.

I would also imagine they have been testing these for years now and likely know the expected lifespan, and I imagine that still lies outside the typical refresh cycle.

Go teach AMD, Intel, Apple and TSMC how to engineer long-lasting chips if you think you know better than them.

Not at all. However, Moore’s Law has kinda stopped working, so people will have to come up with new ways to design faster chips. The coming few years should still be fine though.

I share your sentiment based on anecdotes. I would just say there are so many OEM designs that you still have to pick the one that’s right for you.

For example, in the old days when people wanted a good laptop, I simply pointed them to MacBooks: install Windows or Linux yourself if you need them. Sadly that no longer works out so easily after Apple moved away from x86_64.

Modern CPUs are much better engineered and monitored compared to 20 years ago. If I recall correctly, Zen CPUs are changing voltage/frequency every 1 ms, which is beyond what monitoring software running on the OS can see. The goal is to ensure reliable, long-lasting operation. TjMax is defined for safe, long-lasting operation, so running at TjMax is fine. Vendors like AMD and TSMC will find the proper value of TjMax for safe, long-lasting operation.


Alright, I’ll bite. Disagreeing with the design choices of something doesn’t mean you ‘know better’ than someone; it’s called criticism.

If it’s a reputable manufacturer, the definition of TJ is set by the manufacturer such that a given piece of silicon can operate at that temperature for a certain amount of time. As an example, in the TI document above they use 105°C for I-grade (industrial) devices with an expected MTBF of 10 years. If they only had a requirement of 5 years MTBF, that TJ might be 125°C instead. Yes, this is often something characterized rather than specified, which is why a lot of manufacturers like FTDI only offer industrial/military temperature grades: that’s what their particular setup is capable of, and there’s no point in downselling.
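For reference, the usual way those temperature-versus-lifetime trade-offs get estimated is an Arrhenius acceleration factor. A hedged sketch with an assumed activation energy (0.7 eV is a common generic value, not anything specific to these parts):

```python
# Arrhenius acceleration factor between two junction temperatures.
# Ea = 0.7 eV is a generic assumption; real values vary by failure mechanism.

import math

BOLTZMANN_EV = 8.617e-5  # Boltzmann constant in eV/K

def acceleration_factor(t_low_c: float, t_high_c: float, ea_ev: float = 0.7) -> float:
    """How much faster wear-out proceeds at t_high_c than at t_low_c."""
    t_low = t_low_c + 273.15
    t_high = t_high_c + 273.15
    return math.exp((ea_ev / BOLTZMANN_EV) * (1.0 / t_low - 1.0 / t_high))

# Running at 125°C instead of 105°C with Ea = 0.7 eV:
print(acceleration_factor(105, 125))  # ~2.9x faster wear, i.e. roughly 1/3 the life
```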

I don’t like that the latest generations of processors run silicon this hot out of the box to squeeze out every bit of performance at the expense of lifespan (yes, you can underclock them to where I feel they should have been running in the first place). I would rather see them clocked slightly lower and still functioning 10-15 years from now.

I know little about semiconductor process nodes, but I know enough to tell you’re talking nonsense in the context of modern/TSMC process nodes. That’s why I was asking you to go debate with AMD/Intel/Apple/TSMC - all hard-core users of TSMC process nodes - rather than mislead the laymen here.

Apparently you know something about semiconductors or have an EE background. Quoting that TI paper means little, really… I’ve seen multiple online posters quote that paper over the past few years. People really should first ask themselves whether what the paper says is directly applicable to nanometer process nodes and modern CPU designs. As I said in one of my previous replies, for example, Zen CPUs have tonnes of sensors and monitoring logic running inside the processor, changing and adapting voltage/frequency at 1 ms intervals to ensure reliable operation and a long lifespan.

By assuming TSMC & its customers not taking deep & good consideration of reliability and reasonable life-spans of their silicon, it’s naive, arrogant and ignorant to say the least.

The same laws of physics apply to silicon today as they have for the last two decades. If you think someone has found a way around that, then the biggest providers of mission-critical semiconductors like TI and Analog Devices would love to hear about it so they can correct their research labs. Otherwise I’ll continue to trust them.


For like the third time, I never said that these chips are going to fail immediately. All I said was that I would rather see them running at lower stock frequencies (dynamic power rises as roughly the square of clock speed) so they run cooler and might still be alive three decades from now instead of five or ten years.
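For anyone curious where that scaling comes from: the textbook CMOS relation is P_dyn ≈ α·C·V²·f, and since the voltage needed to hold a given clock rises with frequency, power grows faster than linearly with clock. A rough sketch with made-up voltage/frequency pairs (not measured values for any real chip):

```python
# Classic CMOS dynamic power relation: P ~ alpha * C * V^2 * f.
# The voltage/frequency pairs below are invented, only to show why backing
# off the last few hundred MHz saves a disproportionate amount of power.

def dynamic_power(v: float, f_ghz: float, alpha_c: float = 1.0) -> float:
    """Dynamic switching power in arbitrary units."""
    return alpha_c * v * v * f_ghz

stock  = dynamic_power(v=1.35, f_ghz=5.5)   # assumed stock-ish operating point
backed = dynamic_power(v=1.10, f_ghz=5.0)   # assumed mild underclock/undervolt

print(f"~{100 * (1 - 5.0 / 5.5):.0f}% less clock, "
      f"~{100 * (1 - backed / stock):.0f}% less dynamic power")
```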
