AdoredTV - Low resolution benchmarks are worthless

8 cores for compiling. 4K GPU for the big 4K screen. 720p gaming, because the GPU you've got sucks and can't handle gaming at 4K.

That's exactly the argument. Now that those 720p benchmarks are not showing that the CPU will be outdated in 2 to 4 years, the 1440p and 4K argument suddenly becomes relevant.

I mean, it won't perform worse than the 7700K when you put in the brand-new GTX 1180 Ti or whatever comes out down the line.

because I don't want the game to ever dip below 150 fps

they've also got lots of productivity tasks.

if you're buying an 8-core chip you'd better be doing something other than playing Minecraft...

?!

I'm not saying they are only trying to appeal to gamers, but gamers are definitely a segment of the market they want the R7 to appeal to.

4K is becoming the modern standard that everyone is moving to; I don't see why it's a problem to want to show your brand-new uarch can support the modern standard.

Averaged out across different reviews you can clearly see that Ryzen has higher IPC than Broadwell-E. In some software environments it's matching Skylake clock-for-clock as well.

In the words of James Prior; "An 8-core desktop CPU should not cost $1000, and a 4-core at $340 is insane. Ryzen aims to disrupt Intel's quad-core monopoly and introduce more options for the prosumer with higher value at lower prices. We plan to make the $300 8-core the standard from this point forward."

So what that says is that Ryzen is positioned at the desktop price point, but built to compete directly in the prosumer market, which is dominated by X99.

Then you want an i5-7600, not Ryzen. There, easy choice.

If you've got one of those 4K UHD 40-50 inch TVs as a monitor and an R7 250, how can you get good FPS?

Anyway, my OP was sarcastic. And so is this one. :p

Well, whether it's real-world or not isn't really relevant here.
What is relevant in this case is measuring CPU performance comparisons in gaming.
The only way to do that properly is to create a CPU bottleneck,
which means using a high-end GPU in a low-resolution test.
Of course you can argue that it doesn't really mean much when it comes to real-world scenarios,
but that isn't really the whole point of it.
The point of it is to measure and compare raw CPU performance in gaming.
And it turns out that Ryzen is falling behind.
But that really isn't surprising whatsoever, because it's new,
and its main problem right now is clock speeds.
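To make the bottleneck idea concrete, here's a rough toy model (all the numbers are invented for illustration, not measurements from any review): a frame takes roughly as long as the slower of the CPU's and the GPU's per-frame work, so shrinking the GPU's share by dropping the resolution is what lets CPU differences show up at all.

```python
# Toy model only: assumed per-frame costs in milliseconds, purely illustrative.
RESOLUTION_GPU_MS = {"720p": 2.0, "1440p": 8.0, "4K": 18.0}  # GPU cost grows with resolution
CPU_MS = {"faster CPU": 4.0, "slower CPU": 5.0}              # CPU cost is resolution-independent

for res, gpu_ms in RESOLUTION_GPU_MS.items():
    for cpu, cpu_ms in CPU_MS.items():
        frame_ms = max(cpu_ms, gpu_ms)  # whichever finishes last sets the frame time
        print(f"{res:>6} | {cpu:<10} | {1000 / frame_ms:6.1f} fps")
```

With these made-up numbers the two CPUs only separate at 720p (250 vs 200 fps); at 1440p and 4K the GPU cost dominates and both report identical fps, which is exactly why GPU-bound tests can't tell you much about the CPU.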

That's why, in my opinion, the R5 series would be way more interesting for gamers,
if AMD manages to get them clocked higher out of the box.
IPC isn't really the problem here, because the IPC is pretty close to Broadwell-E, or sometimes slightly better depending on which reviewer you watch.
The problem is basically its overclocking potential.
But again, in real-world scenarios, high-end cards like a GTX 1080 / Titan XP etc. are 1440p and 4K cards.
And at 4K, pretty much any true quad-core CPU from the last decade puts up a good showing.
I mean, how much sense does an 1800X truly make for gamers?
I would say none.
The 1700 and 1700X are a bit more debatable.
However, a 1600 / 1600X with higher clock speeds would be way more appealing.
And that is basically what AMD needs to fix.
They need a 1600X with a 4.2 GHz base clock.

Got an R7 250 here, also an R5 230 2GB. I wonder if

Are you going to hook it up to a UHD TV?

Might do that.
Just spec-wise, the R5 will do f-all because it does not support it.

While this is true, you're missing part of the main issue here. Namely, this test is cherry-picked to favor Intel CPUs and always has been. Intel CPUs have always had better performance on this sort of thing because of larger caches and better architecture optimization in software.

Show me statistics of a real-world application where you get similar results and I'll start believing this. The fact of the matter is that no one will buy a GPU that's capable of 70fps at 2160p and decide "oh, I think I'll play at 720p today." More importantly, a human is physically incapable of noticing the difference between 350fps and 550fps without instrumentation, so I'm not sure that it even matters.
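To put those numbers in frame-time terms (simple arithmetic of my own, not from any review), the gap is on the order of a single millisecond per frame:

```python
# Frame time in milliseconds for the two frame rates quoted above.
for fps in (350, 550):
    print(f"{fps} fps -> {1000 / fps:.2f} ms per frame")
# 350 fps ~= 2.86 ms, 550 fps ~= 1.82 ms: roughly a 1 ms difference per frame.
```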

On top of this, we're talking about R7 CPUs fighting X99, not Z170/Z270/Z97. We need to understand this. X99 and R7 CPUs are designed to be good for different workloads. This time last year, the 5820K was only recommended for people doing multithreaded applications, because games didn't take advantage of it. Why is that? Workload optimization.

AMD's R7 CPU is closer in design to a dual-socket 4-core than a single-socket 8-core in the way that memory and cache work. What this means is that, because Windows support for this CPU is still reading at a third-grade level, the Windows scheduler moves tasks across the compute packages on the system, forcing the system to go out to RAM rather than use cache.

Now, if Microsoft treated the CPU as a dual socket, I suspect we would see significantly less of a problem. That said, it's unlikely that Microsoft is going to do this in the near future.
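As a rough illustration of what "treat it like a dual socket" can look like from user space, here's a minimal sketch (my own example, assuming the psutil package is installed and that logical cores 0-7 map to one core complex, which you'd have to verify for your own chip): pinning a process to a single complex so the scheduler can't bounce its threads across the two.

```python
# Minimal sketch: restrict a process to the logical cores of a single core complex.
# The core numbering below is an assumption; check your own topology first.
import psutil

CCX0_LOGICAL_CORES = list(range(0, 8))  # assumed: logical cores 0-7 belong to one complex

def pin_to_single_ccx(pid: int) -> None:
    """Limit the given process to one complex's logical cores."""
    proc = psutil.Process(pid)
    print("affinity before:", proc.cpu_affinity())
    proc.cpu_affinity(CCX0_LOGICAL_CORES)   # set the new affinity mask
    print("affinity after: ", proc.cpu_affinity())

if __name__ == "__main__":
    # Example: pin the current process; in practice you'd pass a game's PID.
    pin_to_single_ccx(psutil.Process().pid)
```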

Have we ever seen AMD's lower-core-count SKUs with higher clock speeds than their highest-core-count parts? That's an Intel thing. AMD typically builds one die, disables cores with defects, and ships them as "6-core" or whatever. Not saying it's impossible, just that it's unlikely.

At the end of the day, you're going to be happy with Ryzen, from what I've seen. It's a good architecture, will get software and scheduler optimizations and is absolutely capable of playing games.

Why are we making such a huge deal over this? It's rare that we ever have a clear winner, and this CPU generation doesn't change that.

I'd wait a few weeks and check for more benchmarks. A new architecture rarely launches without problems. Remember the first gen i7?

Did you run that back through your mind? Because it does not make any sense. As you said, I criticized the conclusion as premature because the technology is new. I did not make a new conclusion about the technology.

More importantly, a human is physically incapable of noticing the difference between 350fps and 550fps

More importantly, current CPUs are theoretically capable of executing not 550 but 4,000,000,000 (multiplied by the number of execution units) instructions per second.

Taking your logic further, we would never be able to determine whether ENIAC (which ended operation in 1955), with its 5,000 operations per second, is faster or slower than the 4,000,000,000 IPS per core/thread of current CPUs, based only on the fact that we humans are unable to register changes faster than ~60 Hz.
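Just to show how absurd that would be, here's the back-of-the-envelope arithmetic (the workload size is an arbitrary number of my own; the throughput figures are the ones quoted above):

```python
# How long the same fixed workload takes at ENIAC-class vs modern per-core throughput.
ENIAC_OPS_PER_SEC = 5_000            # figure quoted above
MODERN_IPS_PER_CORE = 4_000_000_000  # figure quoted above
WORKLOAD_OPS = 1_000_000_000         # arbitrary workload: one billion operations

eniac_s = WORKLOAD_OPS / ENIAC_OPS_PER_SEC
modern_s = WORKLOAD_OPS / MODERN_IPS_PER_CORE
print(f"ENIAC:  {eniac_s:,.0f} s (~{eniac_s / 86_400:.1f} days)")
print(f"Modern: {modern_s:.2f} s on a single core")
```

The difference (roughly 2.3 days versus a quarter of a second) is perfectly measurable even though no human eye could follow either machine instruction by instruction.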

I did not make a new conclusion on the technology.

Then I'm sorry; I read the statement "the high-end ryzen are not good gaming CPUs is a very short-sighted conclusion" as equivalent to "high-end ryzen are good for gaming".

I get a bit frustrated when people focus too much on inaccurate benchmarks when the ones that more closely reflect regular use cases for these CPUs (compiling, Cinebench, 1440p/4K benchmarks) are neck and neck.

That said, I'm very patient and not one to get upset about a 5 to 10 percent difference in performance. The only reason I'm thinking about getting an R7 CPU is that I need 8 cores and don't want to spend $800 on just a CPU.

Recently, it seems like everything (not just in tech) gets blown out of proportion and is dramatized to hell and back. I'm just tired of the drama.


They are. Or has anyone seen a game running poorly on them? ... I haven't.


/thread

I don't know why people don't just buy the CPU, download a game, play it, and decide if it works fine. These benchmarks are figuratively reviewers measuring AMD's and Intel's dicks.


Recently, it seems like everything (not just in tech) gets blown out of proportion and is dramatized to hell and back. I'm just tired of the drama.

Very true. It's the information era at its best. You do not need to write and then print a book to share your opinions and wait years for readers just to be heard (not to mention needing a science degree). You get feedback, good and bad, wrong or correct, right away.