Has Intel removed multithreading before?

He's talking about the GPU side of things.

Oh sorry.

Wait when did we switch topics now?

We didn't; it's just a sub-part of the Nvidia explanation of how they do things. Still part of the overall topic, just a subsection.


answer is in the next 5 words or so

Yup, wow, reading it now I don't know how I missed that.


Back on topic to Intel: @jerm1027, you hit on something I was going to say. I could be wrong, but sometimes the i5 CPUs are i7s with Hyper-Threading turned off, no? Like the 7700 vs. the 7600; I know there are clock and cache differences, but one is 4C/8T and the other is 4C/4T. Does this count?

I am just thinking that while the 9700 is 8C/8T compared to the 8700's 6C/12T, previously the lineup from the 2700 to the 7700 was all 4C/8T, with the i5s of each generation appearing to be the i7s with Hyper-Threading turned off; indeed, the 8700 to 8600 followed the same pattern. Is there an i9 with 8C/16T? In that case, the i7-to-i5 relationship of previous generations would become i9-to-i7 going forward… if any of that makes sense.
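
As a sketch of that pattern, here are the core/thread counts as I remember them from Intel's public spec listings (worth double-checking on ark.intel.com before relying on them):

```python
# Core/thread counts for a few Intel desktop parts, from memory of
# public spec sheets -- illustrative, not exhaustive.
parts = {
    "i7-7700K": (4, 8),  "i5-7600K": (4, 4),
    "i7-8700K": (6, 12), "i5-8600K": (6, 6),
    "i9-9900K": (8, 16), "i7-9700K": (8, 8),
}

def is_ht_off_sibling(big: str, small: str) -> bool:
    """True if `small` has the same cores as `big` but threads == cores (no HT)."""
    (cb, tb), (cs, ts) = parts[big], parts[small]
    return cb == cs and tb == 2 * cb and ts == cs

# Through the 8th gen, the i5 looks like the i7 with HT fused off:
print(is_ht_off_sibling("i7-7700K", "i5-7600K"))  # True
print(is_ht_off_sibling("i7-8700K", "i5-8600K"))  # True
# In the 9th gen, that relationship moves up a tier, i9 -> i7:
print(is_ht_off_sibling("i9-9900K", "i7-9700K"))  # True
```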

yep, that’s exactly right more or less.

It is important to note that HT doesn't magically double performance in multithreaded workloads, though. It nets you 1.1-1.5x depending on the task, and can even degrade performance in latency-intensive tasks like real-time audio DSP.

8 cores with HT off is better than 4 cores with HT on, all other variables controlled for.
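
A rough back-of-the-envelope model of that claim (the 1.1-1.5x SMT scaling factor is the range quoted above, not a measured value; 1.3x is an assumed midpoint):

```python
# Toy throughput model: each physical core contributes 1.0 units of work,
# and SMT adds a fractional per-core bonus rather than a full second core.
def relative_throughput(cores: int, smt: bool, smt_scaling: float = 1.3) -> float:
    """smt_scaling is the assumed per-core speedup from SMT (1.1-1.5x typical)."""
    return cores * (smt_scaling if smt else 1.0)

print(relative_throughput(8, smt=False))  # 8.0
print(relative_throughput(4, smt=True))   # 5.2
# Even at the optimistic end (1.5x), 4C/8T tops out at 6.0 --
# still well short of 8 real cores.
```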

Same reason they could sell i5’s at a premium to i3’s despite the same thread count.


Oh yes, I do understand that performance doesn't double; I was taking that as assumed, though really I should not. There is a trade-off in time for the other thread on the same core.

I see the new i7s being godsends for audio guys if they stick with this marketing shift. For the longest time they were stuck on 4-thread machines because HT/SMT plays hell with high-load VSTs and audio resampling/recording.


Yes, it would appear that way, ergo my comment on the lineup being too segmented. I honestly don’t get the point of the Core i3 anymore.



I think the i3 stuff is more a pricing issue than a validity issue at this point.

All the 8-thread+ laptops I've owned were awful from a usability standpoint, so there are definitely still applications.

I think if they released an “i6” or “i4” with 6/12 or 4/8 topology, that would be the line where we’d all have to say “It’s a bit much”

Is it a pricing issue though? That is one point, but performance-wise, how does it really differentiate from a Pentium Gold? The i3 is supposed to fill the gaping hole between the Pentium Gold and Core i5, and it does so with the price, but not really with performance. It needs hyper-threading, but then it would cannibalize i5 sales, which, in turn, cannibalize i7 sales, etc. I also don't understand the Celeron.
Anyway, back to the original point: Intel's product line is too segmented, and I'm going to give up on this before I give myself an aneurysm.

See above: 4 real cores are better than 4 threads via HT.

1:1 scaling at best vs. a tradeoff that isn’t suitable for all workloads.

Also, cache does a lot more for performance than people realize if we're specifically talking Celeron/Pentium vs. i3.

When you're getting that close to the bottom threshold for "baseline functional," small improvements mean a lot more.

Guess I should also mention a few other marketing misconceptions that seem to be commonly propagated while we're at it:

TDP has NOTHING to do with the amount of power your CPU will actually draw; it's a metric that's supposed to help you pick a heatsink.

HT/SMT/CMT etc. will never be a 1:1 performance increase, and all have upsides and downsides.

F-SKU Intel products still have H.265 and other decoding ASICs active despite not having the display parts of the "GPU" portion of the die.

More cache IS often better within the same product line, and will have a huge impact in the low and midrange for general usability.

More cache isn't necessarily better across all products; it varies from architecture to architecture and isn't directly comparable except within the same product line.

Basically everything is an SoC these days; the term became meaningless after FX and Haswell (maybe Sandy Bridge?) in the mainstream processor market.

It's still incredibly difficult to saturate a PCIe 3.0 x16 slot in consumer applications; you won't bottleneck your GPU for gaming or rendering by putting it in an x8 slot.

the benefits of quad channel memory are essentially meaningless to the average consumer

Frequency does not equate 1:1 to performance in RAM except with all other variables controlled for (which they frequently aren't). Same with CPUs across different product lines.
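
One concrete way to see this for RAM is the standard first-word-latency calculation (the example kits below are hypothetical but use common XMP-style timings):

```python
# First-word latency in nanoseconds. DDR rates are quoted in MT/s and the
# clock is half that, so latency_ns = CAS * 2000 / MT/s.
def latency_ns(mt_per_s: int, cas: int) -> float:
    return cas * 2000 / mt_per_s

# A kit that's "faster" by frequency alone can have identical real latency:
print(latency_ns(3200, 16))  # 10.0 ns
print(latency_ns(3000, 15))  # 10.0 ns -- slower clock, same latency
# The 3200 kit still has more bandwidth, but "higher MHz = faster" on its
# own tells you nothing about latency-sensitive workloads.
```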

B-die RAM isn't magic; it's just the only DRAM die AMD bothered to test for and provide sane defaults to the OEMs with.

Memory profiles vary wildly from mobo to mobo and from cpu to cpu if your memory controller is integrated

Your mobo OEM probably lied to you about your VRM; your GPU's OEM too.

Quick Sync probably won't matter to you unless you work in video production.

Optane is just an SSD with different storage chips; it isn't magic, just a different way to flip bits in a nonvolatile way. The storage tech is called 3D XPoint. Think of it like Advil vs. ibuprofen.

GPU memory bandwidth will increase performance to a point, but doesn’t magically make the device perform better once you reach the point of “enough”

HDR is totally meaningless in the consumer market; you probably shouldn't bother with it.

Overclocking doesn't necessarily reduce the lifetime of your device in any meaningful way; it's increased voltage that accelerates electromigration in most cases, and even then you probably won't live to see the chip die before it outlives its usefulness, unless you push LN2 OC settings in a daily setup.

Mining does not reduce the lifespan of GPUs, and is less stressful on the memory/ASIC than variable workloads. Mining does kill fans faster, but that's a $5-20 replacement part.
