What kind of implications does a Hybrid CPU design have for performance?

It’s a great way for Intel to confuse consumers.

They’re already being marketed as “12 cores” *

fine print: *(8P + 4E)

At 125W TDP

Meanwhile, for an extra $10 you can get a Ryzen 9 5900X with 12 full “P” cores and a 105W TDP (even if the two companies define TDP differently, the Ryzen still consumes a lot less).

But for people who aren’t technical, it’s still “12 cores” from the known brand Intel, so that’s “better”.


So basically you are saying this is closer to how a brain works in nature; for example, small repetitive tasks get a dedicated instruction set that handles them (like the auto-firing nerves in people’s hearts, broken up into various nodes that allow the heart to operate entirely independently of your brain to keep it pumping).

But are we really at the point with basic operations in a computer that this is needed inside the processor cores themselves? Wouldn’t it be better to keep it as-is, and let those processes be handled by smaller process nodes outside the CPU on the motherboard?

Certainly there are unconscious processes we have going on in the human brain, but we are very complex systems, far more complex than a computer. Yes, yes, I know computers are catching up rapidly, but this particular step seems unnecessary for such a breakdown of the main processor cores themselves…


Yeah, I guess nothing. Now that I think about it. How wide the frequency scales is more relevant. I just figured we never had 4GHz on 8 cores before. I don’t pay too much attention to those boost frequencies. But if a CPU can do it, hey, that’s pretty impressive.

Wait, wait. Wha? That’s news to me… SINCE THE 90s?! All of their media was cartridges in the 90s and before. And I didn’t know until recently that the N64 was actually backwards compatible. But what are some examples of rereleasing games you already purchased? Because it wasn’t looked at the same way at that time. People didn’t necessarily want to play a game from a generation ago on their new console. But like I said, the N64 could. And Sony can ‘eat shit and die’. Such cancerous company culture. Software, hardware, whatever is needed. They have billions. Invest the money and time. Invest in some talented programmers and the end result is: reprintings of PS1 and PS2 games. Happy times.

Yeah… kind of like what someone pointed out before; I always forget about the fact that CPUs don’t simply WORK at a constant operating frequency. So even the BASE is not the idle frequency? It’s essentially completely dynamic. If this is the case… then… ? I don’t know. big.LITTLE is looking kinda dumb?
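To make that concrete, here’s a toy Python sketch of dynamic frequency scaling. Every number and the decision rule here are invented for illustration; real governors and boost algorithms are far more involved, but the point stands: the core idles well below base, boosts when there’s headroom, and falls back toward base under sustained load.

```python
# Toy model of dynamic frequency scaling (DVFS). All numbers are
# invented for illustration -- real governors are far more complex.

IDLE_MHZ = 800      # idle frequency, far below "base"
BASE_MHZ = 3200     # guaranteed all-core frequency at rated TDP
BOOST_MHZ = 5000    # opportunistic boost clock

def pick_frequency(utilization: float, headroom: bool) -> int:
    """Pick a clock based on load and thermal/power headroom."""
    if utilization < 0.05:
        return IDLE_MHZ      # near-idle: clock way down
    if headroom:
        return BOOST_MHZ     # load plus thermal headroom: boost
    return BASE_MHZ          # sustained all-core load: back to base

print(pick_frequency(0.01, headroom=True))   # idle desktop
print(pick_frequency(0.90, headroom=True))   # bursty load, cool chip
print(pick_frequency(1.00, headroom=False))  # long all-core render
```

So “base” is really the sustained floor under full load, not where the CPU sits when nothing is happening.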

It makes sense where a tremendous amount of parallelism is required. But isn’t that what multi-threading is for? Wasn’t AMD experimenting with a 5-threaded CPU core design? Wouldn’t more than 2 threads per core be the ‘next big thing’? I can’t remember anymore, but didn’t a Cray supercomputer do something crazy?


I don’t think you realise how much more efficient the efficiency cores are for background or low-intensity tasks.

My MacBook Pro 14" will run background tasks in less than a watt of CPU power.


Heh. I don’t think you understand how little I care about power usage. Up until recently I had a 460 V8 with a big 4-barrel Holley carb on it. THAT sucked the gas - and was a hell of a lot of fun, I might add :wink: In many ways I’ll miss the sound of monster gasoline engines… Electric just has so much torque it’s hard to ignore.

I’m kinda the same with my PC. I don’t care how much power it sucks; as long as I’m hooked up to an outlet it doesn’t matter, unless it risks burning the house down, and then I’d just run a dedicated power line from the main hookup. (Yes, I can run my own electric lines; my dad was an electrician and I re-wired a couple of houses with him. He was a bit of a dick, but he taught me a lot… but I digress.)

So I must reiterate: if it’s only about power draw, I don’t care unless I’m stuck on battery power only. And if that’s the case, then keep those silly ‘e’ cores for the mobile stuff only.

  • If it’s about the complexity of PCs requiring tasks to be divided even further for better speed (as I alluded to in my previous post), then I’m all for it. But if it’s only about power draw? Hah! Sling that power draw at me. I can take it easily. Just give me more speed, baby!

  • a link for fun
    Ford 460 V8 Engine Specs, Firing Order and Information : Engine Facts.com
    I had a lot of fun with that engine. I tuned mine myself, you don’t want to know how quickly it would pull gas, on a 30 gallon tank, I could watch the needle visibly drop when I stomped on the accelerator. And the sound it made just sucking air in when it was opened up… Was it wasteful? Yes. Did it make conservationists cry? I hope so. It was a monster, and I loved it.


Power usage = heat essentially, and whilst you might not care, if it means the vendor can fit 128 cores in 100 watts, you might.

Power efficiency means you can push more cores in a socket, or the same cores at better clocks, it isn’t just for mobile devices.
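A quick back-of-the-envelope sketch of that point, with made-up per-core wattages (real figures vary wildly by chip, clock, and workload); this is only meant to show why per-core efficiency translates into core count at a fixed socket budget:

```python
# Back-of-the-envelope: how many cores fit in a fixed socket power
# budget. The per-core wattages are invented for illustration only.

BUDGET_W = 100
P_CORE_W = 8     # hypothetical big core at sustained all-core clocks
E_CORE_W = 1.5   # hypothetical efficiency core

p_only = BUDGET_W // P_CORE_W            # big cores only
e_only = int(BUDGET_W // E_CORE_W)       # small cores only

# Or a hybrid: keep 8 big cores, spend the remainder on small ones
hybrid_e = int((BUDGET_W - 8 * P_CORE_W) / E_CORE_W)

print(p_only)     # 12 big cores
print(e_only)     # 66 small cores
print(hybrid_e)   # 8 big + 24 small in the same envelope
```

Same watts, very different core counts; that’s the whole pitch for efficiency cores on the desktop, not just on battery.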

ps
You’re talking to someone who daily drives a 6L LS variant… and has a 2 stroke motorcycle in the shed that burns MORE fuel than that.


Heh, Nice!~

But back to topic, I agree heat can be a problem, but do you think we are near the point where our existing thermal solutions can’t handle the output?


Alder Lake says yes


Alder Lake is still shipping with the basic copper-slug-in-aluminum cooler design. Intel has yet to include even the simplest heat-pipe technology stock! Don’t tell me they are up against a thermal wall ;p


Well huh, that’s the best argument I’ve seen for them so far.


The danger I read in that is the very same as RISC to CISC.

We had RISC, and then we started to think: what if we just made a small section just for that repetitive task, and another, and another, and soon we had CISC.

So in the future I can see Intel “combining” (I so want to say gluing) them together into one advanced fast core, and then repeating the trick years later when they need yet another band-aid on the monstrosity they created and can no longer control.


Put it this way:


Screenshots of activity monitor showing running applications, and power consumption of the SOC whilst doing so.

Yes, those are milliwatts. 14" MacBook Pro M1-Pro.

Also shows why Apple are so keen on LPDDR instead of regular DRAM. The DRAM is still consuming 2x as much power as the CPUs :smiley:


I mean you see the same behavior in supercomputers right? This is just uber scaled down.


Hmmm, okay, well the thing is, this was about the Intel implementation. I was kind of thinking there were no MacBooks with 12th gen, so I was confused.

Yeah, efficiency can be very good, but comparing two entirely bespoke and totally different architectures to each other is not really fair.

I don’t know, but I would wager Intel’s “efficient” cores are pulling considerably more than a watt.


The thread title is about hybrid CPU design; just because Intel sucks at it right now doesn’t mean it’s a bad idea.

Alder Lake is a bodge. The fact that they have two different architectures with different instruction sets cobbled together is a testament to this, and it causes all sorts of scheduling problems. The workaround Intel bodged into it was to just disable instructions on the P cores to make them as dumb as the E cores they are using, after pushing said instructions as magic CPU dust for the past few years.

Expect future variants to be less insane (more like the Apple M series), with a consistent instruction set to make scheduling a lot easier and with less of a performance trade-off.


That’s all fine. I am just saying that from the start of the topic, while the title is unspecific, the conversation within was solely about Intel’s approach.

Literally the first line of all of this.

And while I accept Apple are doing interesting stuff, I think some people took your claims about efficiency as applying to Intel.

That makes no mention of M1 as opposed to 12th Gen; you only follow on to say that you are referencing a 14" MacBook Pro (which previously had Intel chips).

Which would further imply you are using M1 and 12th Gen interchangeably. But there are no Alder Lake MacBook Pros.

This is not a fair comparison at all, and it is deliberately missing information in an effort to mislead.

I don’t have a problem with the actual figures, just that, as it suits the conversation, you seem to be applying the M1’s efficiency to 12th gen, which is not at all reasonable.


Intel’s efficiency cores are also a lot more efficient than their P cores, but I do not have hard metrics for them (whereas I can illustrate the concept by pulling data out of the M1 I’m using right now, live), and as above, the current implementation is v1.0. So expect even Intel to get better than today. Conceptually, Intel, Apple and others are all doing the same thing with varying degrees of success. But Alder Lake right now is clearly half-baked; don’t rely on it as a judgement of the concept itself.

Modern OSes do a lot of crap in the background, and spinning up full-fat performance cores for that is a waste of power/heat/etc. Having a bunch of small cores that consume minimal heat/die space/power is a win even if you don’t care about power, because you can fit more dedicated performance cores on the machine that won’t get interrupted by background tasks.
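As a toy illustration of that scheduling idea (purely hypothetical; real OS schedulers and Intel’s Thread Director use live telemetry and priorities, not a static flag on each task):

```python
# Toy sketch of hybrid-aware scheduling: route low-priority background
# work to efficiency cores so the performance cores stay free for
# interactive tasks. Purely illustrative, not any real scheduler.

from dataclasses import dataclass

@dataclass
class Task:
    name: str
    background: bool   # e.g. indexing, telemetry, update checks

def assign(task: Task, p_cores: list, e_cores: list) -> str:
    """Pick a core pool by task class and return the placement."""
    pool = e_cores if task.background else p_cores
    core = pool[0]  # trivially take the first core in the chosen pool
    return f"{task.name} -> {core}"

p_cores = ["P0", "P1"]
e_cores = ["E0", "E1", "E2", "E3"]

print(assign(Task("game render thread", background=False), p_cores, e_cores))
print(assign(Task("file indexer", background=True), p_cores, e_cores))
```

The render thread lands on a P core and the indexer on an E core; the hard part in practice is classifying tasks correctly, which is exactly where Alder Lake’s scheduling has struggled.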


Be careful about throwing accusations around, mate; I’m not attempting to mislead at all, and I’m not really comparing for the sake of comparison. I.e., it isn’t fair or unfair; it is what it is.

I’m using the M1 as an example of what Intel could/should be doing when they do something more than a half-assed implementation of the concept, in order to refute the notion that the entire CONCEPT is useless (as some here seem to think).

Mobile chips have been doing it for years, and Apple has been doing it for years (two years now in computers) with success. The concept has merit; Intel and Microsoft just need to actually execute properly.


I’m not. I think it is foolish, and, if these were more critical things, dangerous, to take one item and then extrapolate its potential by basing it on information from something else.

While it can and will get better on all fronts, right now it’s pretty terrible, and Intel have proven many times over that they possess a capacity to take a potentially good idea and thoroughly run it into the ground.

So while it is a thing Intel are working on, and Apple have a similar idea, the implementations are so vastly different that they cannot be compared, even to say that one is bound to get better just because the other is good now.

So by any means, in the way it was presented, it is unfair to compare, extrapolate, infer, or base anything Intel could do on what Apple are doing now.

Just to make it obvious how I see it: you cannot say HDDs will get better now that we have SSDs. One will simply not drag the other along; they have completely different focuses while working in the same area.


I disagree with that… and this is a bit of an aside…

I think Intel are going to need to figure it out or they will become irrelevant. They either sort this out, or they’re in deep shit. That goes for AMD too.

In addition to fitting more cores in less space to help with background stuff… the future of computing is mobile/wearable/low power.

If they don’t have anything in that space, they’re done, they’re a dead company walking, just like SPARC/MIPS/Alpha/etc. were killed by the PC before them.

People/OEMs simply aren’t going to keep supporting some busted old x86 platform if they’ve migrated everything else to ARM (or RISC-V) based processors that run in the mobile space.
