Is Intel cheating us?

Why exactly is a Core i7 870 getting a single core score of 1.2 in Cinebench, and a Core i7 6700K only 2.18? And the clocks for the i7 870 are lower than the 6700K's as well. What the hell? It's 45nm vs 14nm, which is more than 3 times the transistor density, unless that's not how it works, in which case please tell me. The 870 is also 3 architectures behind the 6700K, so again, what the hell? Are they shrinking the die but keeping the transistor count the same, and therefore the same number of ALUs, CUs, encoders, etc.? I mean, the 6700K is 3 architectures ahead, has a 0.6 GHz higher clock rate, and 3 times the transistor density, yet its score is only 45 percent higher. Intel is definitely fucking us over, because I think it should be at least 3.2 times as fast, and that's assuming a conservative 6.6 percent performance increase from the architecture.
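Here's a rough sketch of the math I'm doing (the scaling factors are my own assumptions, not anything Intel publishes):

```python
# Naive expectation: transistor density gain times an architectural uplift.
density_gain = 45 / 14    # ~3.2x, treating the 45nm -> 14nm shrink linearly
arch_gain = 1.066         # my assumed conservative ~6.6% uplift from the newer architecture
expected = density_gain * arch_gain   # ~3.4x, so "at least 3.2x"

actual = 2.18 / 1.2       # ~1.82x, what the Cinebench scores actually show
print(f"expected ~{expected:.2f}x, actual ~{actual:.2f}x")
```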

It's not how it works ~ the score, I mean.

The improvements are in power efficiency.

Sandy ~ AVX instruction set, and some nice performance improvements
Ivy ~ power efficiency, 3D transistors (bullcrap, sometimes even slower than Sandy)
Haswell ~ power efficiency (Haswell was originally mobile technology ~ enough said)
Skylake ~ DDR4 support

Intel doesn't have to do anything special to stay on top... AMD is not competing...

That's all architecture stuff; I'm talking mainly from a transistor count position.

The score isn't calculated like that.

There isn't a reason for Intel to push forward in huge 30% performance jumps when there is no competition forcing them to do so. That's why we need AMD to rebound in some way, shape, or form...

Can you specify what you mean by that?

If you want to measure the performance difference, do it by FLOPS ~ not by a score in software.

A score like 1.2 vs 2.18 means nothing. If we look at an even older CPU that was much slower than the i7 870, you won't get 0.2; you'll most likely get 1.01, even though it's 2x slower, etc...

Yeah, but it's Cinebench, and it's the SAME version? Also, the GFLOPS for the 870 is like 55, and for an i7 4790K like 95 GFLOPS. I can't find the 6700K's GFLOPS, but I'm sure it's going to be maybe just 15 GFLOPS above that, so my point still stands.

The 6700K should have about ~170 GFLOPS.
My i7 3770K @ 4.2 GHz gets ~125-139 GFLOPS.

Source: https://www.pugetsystems.com/labs/articles/Haswell-vs-Skylake-S-i7-4790K-vs-i7-6700K-641/
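For context, theoretical peak can be estimated as cores × clock × FLOPs per cycle; a minimal sketch, where the per-cycle figures are my assumptions for each microarchitecture (measured results like the ones above always land below the theoretical peak):

```python
# Theoretical peak GFLOPS = cores * clock (GHz) * double-precision FLOPs per cycle.
# Assumed per-cycle figures per microarchitecture:
#   Nehalem (i7 870):   SSE        -> 4 DP FLOPs/cycle
#   Ivy Bridge (3770K): AVX        -> 8 DP FLOPs/cycle
#   Skylake (6700K):    AVX2 + FMA -> 16 DP FLOPs/cycle
cpus = {
    "i7 870":   (4, 2.93, 4),    # base clocks; turbo would raise these
    "i7 3770K": (4, 4.20, 8),
    "i7 6700K": (4, 4.00, 16),
}
for name, (cores, ghz, per_cycle) in cpus.items():
    print(f"{name}: ~{cores * ghz * per_cycle:.0f} GFLOPS peak")
```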

Right on the spot :)

Still, DDR4 does give something in the benchmarks I see there.

I'll leave these here, as well.

http://www.anandtech.com/show/7003/the-haswell-review-intel-core-i74770k-i54560k-tested/6

I think the only thing Intel is screwing people over on is the TIM in the Skylake chips.

The article is kinda flawed ~ but transistors do mean something. Not in the sense of real performance, but potential performance. Today they minimize wattage; they need to keep those CPUs at comparable, preferably better performance ~ but that's secondary, energy efficiency is more important ~ as they get tax write-offs.

And thinking about it more, everything is potential performance :)

Performance depends on the application, the system, drivers, the CPU's architecture engine, the die design/layout, and the transistor count. (Transistors are the denominator of a chip's raw performance.) But that doesn't mean a chip will perform better than something with fewer transistors at a higher clock or with a different architecture engine.

The middle ground is FLOPS and synthetic benchmarks.

A good example of this is the Fury X vs the Titan X.

Transistor count
Titan X = 8.0 billion
Fury X = 8.9 billion

GFLOPS
Titan X = 6144
Fury X = 8602

Yet the Titan X performs better :) because it is better in the application layers. In raw performance, the Fury X should beat it, and by a lot.
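(Those GFLOPS figures are just shader count × clock × 2, since each shader can do one fused multiply-add, i.e. 2 FLOPs, per cycle; a quick sketch using the published shader counts and clocks:)

```python
# Single-precision peak GFLOPS = shaders * clock (GHz) * 2 (one FMA = 2 FLOPs/cycle).
gpus = {
    "Titan X": (3072, 1.000),   # GM200: 3072 CUDA cores at ~1.0 GHz base
    "Fury X":  (4096, 1.050),   # Fiji: 4096 stream processors at 1.05 GHz
}
for name, (shaders, ghz) in gpus.items():
    print(f"{name}: ~{shaders * ghz * 2:.0f} GFLOPS")
```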


One shouldn't look at scores in such a way. Comparing chips is a lot harder and not as black and white as people tend to make it out to be.

One of the main features in new architectures is FLOPS/W. Over a year or two of 24/7 operation, the net result of having a lower wattage can outweigh the benefit of a chip that is faster. In large warehouses, every watt of heat is not just a watt that ends up on your power bill (yes, I know you pay for energy, not work, but you know what I mean), but is also heat you need to remove from your system. It requires you to use beefier power supplies (which take up space that you could use to put more cores in), bigger coolers (again, more space wasted), and heat you need to cool at some point (costing you more energy again). This can be seen in many servers, which have a profile that will do things to the CPU that drop its maximum performance (so you are not using the chip to its "full" potential) but will run the chip at its best performance-per-watt point.
Not to mention that in most situations, total performance far outweighs single core performance - both in consumer and enterprise situations. Unless you are using bad software, you generally can't notice the difference in single core performance anyway - my web browsers and editors and so on are faster at doing things than I can give them things to do. And any self-respecting piece of software that really can benefit from performance gains is multi-threaded anyway.
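To put a number on the perf/watt point, here is a toy calculation; the wattages, electricity price, and PUE below are made-up illustrative values, not measurements:

```python
# Toy 24/7 energy-cost comparison between two hypothetical chips.
hours_per_year = 24 * 365
price_per_kwh = 0.15     # assumed electricity price, $/kWh
pue = 2.0                # assumed datacenter overhead multiplier (cooling etc.)

def yearly_cost(watts):
    return watts / 1000 * hours_per_year * price_per_kwh * pue

fast_chip_watts = 130        # hypothetical faster, hungrier chip
efficient_chip_watts = 65    # hypothetical slower, more efficient chip
diff = yearly_cost(fast_chip_watts) - yearly_cost(efficient_chip_watts)
print(f"~${diff:.0f} per socket per year")
```

At scale, that per-socket difference is why servers often run at the best performance-per-watt point instead of at peak performance.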

I would like to add that, in my opinion, the Fury X vs Titan X is not a good comparison for the CPU side of things. GPU performance is quite similar to mobile performance: there is a lot more to how responsive and fast a GPU is than its chip - the software stack allows for a lot of optimization too. NVIDIA decided to put more money into optimizing the software stack and save on production costs, while AMD decided to do it the other way around. The same is just as true for CPUs.

There are more benchmark programs than just Cinebench.
Cinebench doesn't really mean that much, to be honest.