AdoredTV - Low resolution benchmarks are worthless

Yes, there is still some work to be done on Ryzen. Maybe? The 6-core parts will prove interesting from a pure gaming perspective.

Ryzen is fresh and new; it still needs time to mature and get optimizations.
However, I don't see the way current games scale across a multitude of threads improving that quickly.
Most games still benefit from higher clocks on a few cores,
rather than having a multitude of slower ones.
Even DX12 hasn't really shown that much of an improvement so far.
So from that perspective, for mainly gaming, Kaby Lake is still the better choice in most cases, imo.
This might change in the near future, who knows?
But Ryzen is a massive improvement over Bulldozer, so it's great to see them back in the market with some competition.
I'm just hoping that the R5 lineup can reach better overclocks.
But that's all yet to be seen, of course.
The IPC on Ryzen is basically there; its main issue is just effective clock speeds at the moment.
AMD of course knows this, and that's basically the reason why they tell reviewers to test at high res.
Because at high res, their numbers would of course look great on paper.
But that doesn't change the fact that it's kind of misleading, of course.

So testing at a lower resolution, AKA the one used by most people, which is used in order to remove GPU restrictions, is worthless?

I was honestly interested in what the guy in the video had to say, until the halfway mark when he started talking nonsense about next-gen consoles and Vulkan like it was fact, when we don't know anything about them.


Yeah, ... uhm, exactly.

If I am understanding this correctly, people assume that setting a game to low settings and resolution creates an "equal" CPU bottleneck.

This leads me to quite a few questions.

  1. AMD has talked, in the past and present, about "optimizations" for Intel. It was my understanding that a CPU is a CPU is a CPU if they're all the same 64-bit arch. Who comes out correct in the end?

  2. Does it actually create a real bottleneck? In concept I get the general idea, but I would assume pushing game engines into "odd" situations likely causes "unpredictable" outcomes, which could amplify any potential "CPU manufacturer" optimization or quirk issues.

  3. This is, in a sense, doing synthetic benchmarks using a game engine to see "raw" CPU performance, yet it goes through so many different mediums that potentially pollute the results. Is there an issue with the game engine? Maybe something specific with optimization? What about the DX version and how well it plays with each CPU?

Why in the world are people trying to infer the performance of a CPU from a game engine that wouldn't even be comparable between games? There is the simple concept of CPU synthetic benchmarks that solve "some" of these problems already, and at least they're "designed" to be a comparison. These wonky settings on engines being pushed in ways they weren't originally designed for seem like the worst "approach" for any kind of measuring stick.

Or maybe I'm just an idiot.

optimizations

The most current example would be the operating system scheduler.
As for the x86_64 architecture: on some level we can say that there is no longer any CPU made in that architecture, as all of them translate those instructions into a different set of smaller operations inside the cores (CISC outside, RISC inside).

benchmarks

Neither synthetic benchmarks nor low resolution game benchmarks are definitive.

Synthetic benchmarks always measure a specific algorithm (or a set of them). They are usually criticized as having nothing to do with real-life scenarios.

On the other hand, those "real life scenarios" depend on so many different components that they might actually tell you nothing about the specific one (e.g. the CPU). Those "real life scenarios" are also very dependent on current software (OS, frameworks, libraries, etc.). Thus they are also criticized as inaccurate, because they measure the system as a whole, and you usually need to interpret the results against the initial parameters of the system under test and of the test itself.

Basically there is no silver bullet to measure it all in one.
And I appreciate reviewers for trying different approaches.


Not if your monitor can only display 144 frames per second. Show me a monitor capable of displaying that framerate at 1080p and I'll recant my statement.

Until that's possible, it's imperceptible to humans because there is a bottleneck preventing all the frames from even getting to the physical world.
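The refresh-rate bottleneck being argued here can be sketched as a one-line model (my own illustration, not anyone's benchmark methodology): the framerate that actually reaches the physical world is capped by the display's refresh rate.

```python
def displayed_fps(render_fps: float, refresh_hz: float) -> float:
    """Toy model: a display cannot show more frames per second
    than it refreshes, so the slower of the two wins."""
    return min(render_fps, refresh_hz)

# A GPU rendering 350 FPS on a 144 Hz panel still shows 144 frames/s:
print(displayed_fps(350, 144))  # 144
# Below the refresh rate, the render rate is what you see:
print(displayed_fps(120, 144))  # 120
```

On this model, 350 vs. 550 rendered FPS on the same 144 Hz panel is the same 144 displayed frames either way, which is the point being made above.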

I play at 1080p, I've got a 6700k. I also have a R7 1700 in the mail. I play at 1080p because I only have a Fury and 1080p scales perfectly onto my 4k monitor.

I don't only play games though. I know this thread is mostly about gaming, but I run VMs and do a lot of compiling. (maintain certain packages for a distro)

This x10000. I've been trying to beat it into people's heads recently that it doesn't matter what numbers you're getting, as long as it makes you happy.

How dare you? You must spend $1000 for 10-20% better framerates that will disappear with a couple of updates to Windows and games and stuff...


Honestly, I spend a lot of money on PC hardware. But it's more around peripherals and interaction devices than compute power. I'm the self-crowned king of mediocrity when it comes to gaming performance. I want something that can entertain and complete my workloads in a reasonable amount of time, but not break the bank.

There's a point of diminishing returns and I really think that Ryzen has brought that price point down by at least 20%, for this workload.

Asus actually has a 1080p 240Hz gaming display.

Asus PG258Q

Getting closer, but in order to eliminate the bottleneck above 350FPS, you'd need a monitor that runs above 350Hz. I'm trying to prove a point about how silly the argument that 350 to 550 is perceptible is.

I didn't really follow the exact debate about that.
But basically the tl;dr of the thread is that a certain YouTuber claims that low res benchmarks are worthless.
And I think that's not really a proper claim, nevertheless.
Low res gaming benchmarks are there to determine raw CPU performance in gaming, taking a GPU limitation out of the equation.
However, you can of course argue about real world scenarios, in the sense of how many people would actually buy the highest-end GPU and an 8 core / 16 thread CPU for 1080p gaming.
I would suppose that people who can afford the highest end GPUs are generally not really targeting a 1080p gaming experience,
but rather something like 1440p ultra or 4K high.
So in that regard, I get what certain people like AdoredTV, and also AMD themselves, try to say.
But of course a 4K gaming test is mainly a GPU-bound test.
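The reasoning behind low res testing can be sketched with a toy frame-time model (my own illustration, with made-up numbers): each frame takes roughly the longer of the CPU's and the GPU's per-frame work, so shrinking the GPU's share by lowering resolution makes the FPS number track the CPU.

```python
def fps(cpu_ms: float, gpu_ms: float) -> float:
    """Toy model: the slower of the two pipelines sets the frame time."""
    return 1000.0 / max(cpu_ms, gpu_ms)

# At 4K the GPU dominates, so two quite different CPUs look identical:
print(fps(cpu_ms=5.0, gpu_ms=16.0))  # 62.5 FPS
print(fps(cpu_ms=8.0, gpu_ms=16.0))  # 62.5 FPS

# At low res the GPU time collapses and the CPU difference shows up:
print(fps(cpu_ms=5.0, gpu_ms=2.0))   # 200.0 FPS
print(fps(cpu_ms=8.0, gpu_ms=2.0))   # 125.0 FPS
```

This is exactly the tension in the thread: the low res columns expose the CPU gap, while the 4K columns mostly measure the GPU.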

I understand where it's coming from, and the benchmark definitely has its place, but people are trying to use it as a measurement of how well a CPU will hold up in the future, and that's a poor method. Based on the way games have been optimized to use multiple cores in the last year or so, the best way to measure future performance (in my opinion) is to look at the raw computing power of the entire system.

That said, it's hard to predict the future, so this may turn out to be a fine method today, but the problem is that based on the current data, it's not a good method.


I definitely agree.

Nobody can really look into the future.
So it's basically impossible to make statements based on that.
There is no such thing as future-proofing, basically.
We could assume that games will be better and better optimized in the future.
But how long that will take is hard to tell.
Nevertheless, Ryzen still performs fine in gaming and awesome in certain productivity workloads.
It's a massive improvement over Bulldozer / Vishera, and that's a pretty good achievement from AMD.


I fundamentally disagree with AdoredTV and his point. 2017 games are way, way more optimized for multicore CPUs than the 2013 games were. So it's normal for an 8 core to outperform a quad core.
However, I agree with the point that in this specific instance, where we are talking about basically workstation heavy-lifting CPUs for $400-500, low res gaming is pointless...
Like 4K will be pointless with the $130 quad core R3, and is currently pointless with an Athlon 760K...
If I have a 4K display, I have a system to go with it.
Again, this is not the gaming CPU. We will talk price to performance in gaming when the $120 quad cores come out and perform close to a $320 i7...


Well yeah.

That's why I think the R5 CPUs are going to be way more interesting for gamers.


I'm excited for the R5 series, but am a bit disappointed that they are launching so much later. That said, I hope they're using this time to tune them, to get the overclocking headroom or stock speeds a bit closer to that of the 6700 or so. That's the one thing that Bulldozer and Vishera had going for them. With a watercooler, you can easily hit 5GHz on an 8350. Hell, I have an 8320 that's doing 4.9 currently, and it's a hell of a lot of fun to play with.

I think that in a day and age when we are supposed to be shifting away from the need for IPC, through closer-to-the-metal APIs that favour more threads, it's just bizarre that people are still hung up on this.

Think hard on this because I have...

What makes even a 10 year old Q6600 unfit for gaming?

If you were to test that CPU on Doom with a 1060/480 running Vulkan, your answer would be 'nothing'.

For anything else, where you need high IPC to chew through inefficiency to reach a given target, then yes... it's redundant.

I am just getting fed up with this game; the race for IPC is a dead horse, but the industry refuses to let it die.

I mean, this is a brand new architecture so AMD is working on getting all the kinks out as fast as they can. The enthusiast-level market (you know... the ones buying the r7 chips) can deal with a little adversity in this department. The typical gamer who just wants to play LoL, CSGO, WoW, Carball, etc. doesn't want to have to put up with that. So in that regard it's probably a smart idea for AMD to hold off on launching the r5 chips as they work out the growing pains that the new architecture brings with it.
