Ashes Patched for Ryzen. Massive performance increase

Ryzen performance in gaming was lower than Intel's at 1080p. This was well documented. However, AMD claimed that Ryzen, being so new and different, requires different optimizations. Makes sense. Many doubted this, though, and instead painted it as some architectural flaw.

Well, here it is. Ashes is the first game to be patched, and we see something like a 20% performance increase. Yes, it's true that no one plays Ashes, but the point is that performance can and will improve. Let's hope developers take the time and do it.

These results are from PcPer.

11 Likes

Looks more like a bugfix to me :-)

Ashes was weird in that Ryzen performed badly in it. Since Ashes is the game with incredible core scaling, most people expected Ryzen to perform well in it. These new numbers are more in line with expectations. Interesting, too, that the fastest RAM gives even better results (that Fabric bottleneck). It reinforces the need for fast RAM to get that Fabric clock up.

Meh, no frame time analysis. Ryan did state in the comments that there was no time, though, as the patch dropped the evening before, their time.

1 Like

It certainly demonstrates the fabric bandwidth improvements. Remember, all the "bad gaming" pronouncements were made when they were running 2133 MHz or 2400 MHz RAM.

It is also a fact that if you compile application software with the Intel compiler, AMD CPUs tend not to get the same benefit from the accelerated functions that Intel code gets to leverage. The Ashes code was compiled before Ryzen was even a thing, so I am not surprised to see improvements in some applications after a new release or patch that does take Ryzen optimizations into account.

2 Likes

What would be interesting to see is whether this patch also improved performance for 6+ core Intel Xeons and the older FX Vishera chips. That could tell us a good bit about the nature of the optimization, such as whether it was a Ryzen-specific architectural adaptation or a general bugfix that affects other, more cache-bound systems too.

Well, they compared to an i7-6900K, so it doesn't seem to affect the Intel side that much.

On its own that doesn't tell us much though; it could be that it was hitting the GPU limit at 80. There's also no 'High' settings benchmark for the 6900K, which is rather important if you include two settings for the CPUs tested but omit one CPU from one of the tests.

Dan Baker from Oxide tweeted:

So no "special sauce" for Ryzen.

Details:

So it actually is more of a bugfix than an optimization for some specific uArch.

Hope to see more of this optimization. AMD needs devs to not drag their feet with patches right now.

When you run a compiler, you include various flags that tell it to do various things depending on the CPU the code will run on. That allows the code to run in an optimal mode for whatever particular hardware you are running on, while you only have to manage one version of the code.

If the CPU is new and the code was compiled before the optimization flags existed for Ryzen, then the CPU will not use the optimized extensions and will fall back to the generic mode. A similar situation exists when you compile for the latest CPU that has a new extension, like the newest AVX2 instructions. These will be used on the new CPU, but if you are running on a Sandy Bridge chip that does not have AVX2, the code will run in the non-optimized mode.
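
A rough sketch of what that means in practice (scale.c is just a made-up example, nothing from Ashes), showing that the flag, not the source, decides which instructions get emitted:

```c
/* The same trivial loop, built two ways with GCC (znver1 is GCC's name for Zen):
 *
 *   gcc -O3 -march=x86-64 -c scale.c   -> generic baseline code path
 *   gcc -O3 -march=znver1 -c scale.c   -> scheduled and vectorized for Zen
 *
 * The source never changes; a binary built before the Ryzen target existed
 * simply ships the generic path. */
void scale(float *x, int n, float k)
{
    for (int i = 0; i < n; i++)
        x[i] *= k;   /* with -march=znver1 (or -mavx2) this loop can be auto-vectorized */
}
```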

It is not a bug fix; a pre-Ryzen software build would not have had any flags enabled specifically to make the best use of what Ryzen has available. The new version would have been compiled using both the Intel flags and the Ryzen flags to make optimal use of Ryzen's instruction sets, which it would not have been doing before.

Intel got into trouble with their compiler in 2010 because it would only enable the optimizations on a "Genuine Intel" chip and forced code to run with the slowest method on existing AMD and VIA CPUs:
http://www.osnews.com/story/22683/Intel_Forced_to_Remove_quot_Cripple_AMD_quot_Function_from_Compiler_
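
For the curious, a dispatcher can query the CPU roughly like this (a generic sketch using GCC/Clang's cpuid.h, not Intel's actual compiler code); the complaint was about keying off the vendor string instead of the feature bits:

```c
/* Sketch of the two things a dispatcher can look at: the CPUID vendor string
 * ("GenuineIntel" vs "AuthenticAMD") and the actual feature bits. Dispatching
 * on the vendor is what got Intel in trouble; the fair way is to check features. */
#include <cpuid.h>   /* GCC/Clang-specific helper for the CPUID instruction */
#include <stdio.h>
#include <string.h>

int main(void)
{
    unsigned int eax, ebx, ecx, edx;
    char vendor[13] = {0};

    if (!__get_cpuid(0, &eax, &ebx, &ecx, &edx))
        return 1;
    memcpy(vendor + 0, &ebx, 4);   /* vendor string is spread across EBX, EDX, ECX */
    memcpy(vendor + 4, &edx, 4);
    memcpy(vendor + 8, &ecx, 4);
    printf("vendor: %s\n", vendor);

    __get_cpuid(1, &eax, &ebx, &ecx, &edx);
    printf("AVX supported: %s\n", (ecx & bit_AVX) ? "yes" : "no");
    return 0;
}
```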

I know compiling cause of Gentoo. :D

So the fix for the non-temporal memory writes is just a compiler update? Sounds like it would be a bit more work. I'm not that familiar with Windows compilers; I've gotten used to how it works on Linux, where the AVX etc. paths are added automatically and you don't need to specify compiler flags for them. If the CPU has it, it will be used.

Edit: And I don't think Oxide is using the Intel C compiler, but I could be wrong.
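
For context, a non-temporal (streaming) store looks roughly like this; just an illustrative sketch of the kind of write that was reportedly involved, not Oxide's actual code:

```c
/* Non-temporal (streaming) stores vs. regular cached stores. Streaming stores
 * bypass the cache hierarchy, and how well that performs differs between
 * microarchitectures. */
#include <emmintrin.h>  /* SSE2 intrinsics */
#include <stddef.h>
#include <stdint.h>

void fill_normal(uint32_t *dst, size_t n, uint32_t value)
{
    for (size_t i = 0; i < n; i++)
        dst[i] = value;                              /* regular cached stores */
}

void fill_streaming(uint32_t *dst, size_t n, uint32_t value)
{
    __m128i v = _mm_set1_epi32((int)value);
    size_t i = 0;
    /* dst must be 16-byte aligned for _mm_stream_si128 */
    for (; i + 4 <= n; i += 4)
        _mm_stream_si128((__m128i *)(dst + i), v);   /* non-temporal store */
    for (; i < n; i++)
        dst[i] = value;                              /* scalar tail */
    _mm_sfence();                                    /* make streaming stores globally visible */
}
```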

Ryzen, being an immature platform, has a number of things that were hobbling performance on day one. Over time they will be addressed and the performance will improve relative to Intel machines. I believe Ryzen will be significantly better in six months, after it has had time to mature a bit more.

Faster memory support with updated BIOSes helps with the gaming issues by increasing Data Fabric bandwidth and reducing the bottleneck. Productivity workloads were not affected negatively, but they can also improve. If they can get 3600 MHz RAM running at stock CPU clock speeds, it will be very helpful for performance.
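
To put rough numbers on it (simple arithmetic, assuming the Zen 1 fabric clock stays tied to the memory clock, i.e. half the DDR4 transfer rate):

```c
/* Back-of-the-envelope Data Fabric clock per RAM speed, assuming the Zen 1
 * fabric runs synchronously at the memory clock (half the DDR4 data rate). */
#include <stdio.h>

int main(void)
{
    int ddr_rates[] = {2133, 2400, 2933, 3200, 3600};   /* DDR4 "MHz" (MT/s) */
    for (int i = 0; i < 5; i++)
        printf("DDR4-%d -> ~%d MHz fabric clock\n", ddr_rates[i], ddr_rates[i] / 2);
    return 0;
}
```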

Software optimizations will help speed things up by allowing Ryzen to make use of the optimized instruction sets instead of running in the base generic mode. I used the Intel compiler as an example, but it doesn't matter which compiler is being used: if the application is not compiled with the relevant Ryzen flags, it won't benefit from the optimized instruction sets.

Bugs that will be discovered in Windows and applications due to the new architecture will also have fixes rolled out over time.

Compilers for Linux also use flags to modify performance and access to different features. The principle is the same.
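
GCC can even build several clones of a hot function and pick one at load time; here is a minimal sketch using the target_clones attribute (GCC 6+ on Linux, nothing specific to Ashes):

```c
/* target_clones makes GCC emit an AVX2 version and a generic version of the
 * same function, plus a resolver that checks the CPU when the program loads. */
#include <stdio.h>

__attribute__((target_clones("avx2", "default")))
long sum(const int *v, long n)
{
    long s = 0;
    for (long i = 0; i < n; i++)   /* the avx2 clone is compiled as if -mavx2 were passed */
        s += v[i];
    return s;
}

int main(void)
{
    int v[1024];
    for (int i = 0; i < 1024; i++)
        v[i] = i;
    printf("%ld\n", sum(v, 1024));  /* same result on any CPU; the faster clone runs where AVX2 exists */
    return 0;
}
```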

1 Like

I can't find anything about what they actually did, and nobody has tested Intel memory speed scaling either.

Yes, I know you want to let me know how great Ryzen is, but really, explain & show whether or not it improves Intel memory performance as well. Irritating.

The best info so far comes from @Pholostan's cosplayer alchemist cat. :D

It's a shame to see the state of tech journalism (and I suppose journalism in general) nowadays. It's clearly not all about the clicks and the money either, because if that were the case you would see this being reported by the major outlets; it would generate page views and create more advertising money.

And it's obviously not that every outlet that hasn't published this is shilling. I get that some outlets might want to wait for new drivers, BIOS revisions, microcode updates, game updates, OS updates and so on before retesting, because obviously there will be further developments and retesting a million times is bad. They will probably retest when the Ryzen 5s come out...

But nevertheless, it is very damaging for AMD when people google Ryzen and end up on Tom's Hardware, looking at a graph that shows it performing worse than the Intel parts. Even though that was a fair representation of performance at the time of the review, in the present, and under equally favorable circumstances (knowing that RAM speed makes a big difference and that not all Intel quads hit 5 GHz), the difference in most use cases is non-existent.


Reviewers aren't communicating that; they aren't communicating that if you don't have a $750 1080 Ti and a $500 Asus ultrawide 120 Hz gaming monitor, then the difference is meaningless, i.e. for most users. They aren't looking into FCAT, frame variance, the standard deviation of frame times, and other important things. It is OK for the Intel quad to be a little bit faster on average; if the CPU struggles every now and again, then the range of frame rates is going to be fairly large, and it is those swings in FPS and the dips that are going to affect your experience more than a slightly lower average.
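
A toy example with made-up frame times shows why the average alone hides this:

```c
/* Two runs with the same average frame time can feel very different if one
 * has spikes. Compile with -lm for sqrt(). Frame times are invented numbers. */
#include <math.h>
#include <stdio.h>

static void report(const char *name, const double *ft, int n)
{
    double sum = 0, sq = 0, worst = 0;
    for (int i = 0; i < n; i++) {
        sum += ft[i];
        if (ft[i] > worst) worst = ft[i];
    }
    double mean = sum / n;
    for (int i = 0; i < n; i++)
        sq += (ft[i] - mean) * (ft[i] - mean);
    printf("%s: avg %.1f ms (%.0f fps), std dev %.1f ms, worst frame %.1f ms\n",
           name, mean, 1000.0 / mean, sqrt(sq / n), worst);
}

int main(void)
{
    double smooth[8]  = {10, 10, 10, 10, 10, 10, 10, 10};   /* steady 100 fps */
    double stutter[8] = { 6,  6,  6,  6,  6,  6,  6, 38};   /* same 10 ms average, one big dip */
    report("smooth ", smooth, 8);
    report("stutter", stutter, 8);
    return 0;
}
```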

I just don't think the industry has done a good job of translating the experience into their reviews. And they never mention the pricing of motherboards; they won't mention that a comparable motherboard is much cheaper on the red team, or that you don't need an expensive board to overclock... They should look at the total cost of the system.


I think if these things were explored and the total cost were compared, it would be quite a big difference. Hopefully system build comparisons do a better job of showing the value that AMD offers... And with Vega I expect there to be some good ~$1K UHD or high-frame-rate 1440p builds (provided you turn the settings down a notch) on team red...

And you can tell AMD is frustrated, because that is what they are trying to provide, and they have made big improvements in communicating it, but the media ignores it and focuses on which is the 'best'...

1 Like

The test of Intel memory has been in every review of Ryzen since release. If you look at the graphs in the Extreme section, the Intel benchmarks are the same as they were with the older version.

Made it to WCCF above the fold.

It seems to me that Ryzen has come along and all the tech publications have followed the fixed "reviewer's rules" that present a "review" combined with "pseudo-scientific analysis". At no stage has anyone bothered to stop and think about what they are actually testing and what they are actually saying, because they have memorized the process so well that there is no need for the writer to understand what is actually happening inside the "black box", and no need to actually think.

If you compare the reviews, they all follow exactly the same formula and all reach exactly the same conclusion. The couple that try to explain why proceed to ignore the half of their data that contradicts what they are saying. When the results and methodology are questioned by the public, because the conclusion ignores those contradictions, instead of stepping back and looking at the issue from a different angle, they all rely on their adherence to the "formula", circle the wagons, and defend their poor position. All of them steadfastly hold on to the rote-learned process while refusing to think through what they were actually doing, scratching their heads as to why the internet was going feral.

I don't think shilling or clickbait was the primary motivation; I think they truly believe they were doing a professional job. The thing that has tripped these guys up is that, in trying to explain the performance anomalies in their summation, they have not considered the bit that connects the CPU and GPU, and have not considered that a CPU thread does not know what program is running; it just knows that there is a stream of instructions arriving for a core to process. How can a CPU decide to switch threads slowly and be naughty only when you are playing a game?

In all the reviews I read, they all mention that they were using a Titan XP or 1080 Ti at 1080p, and they all say that they are doing it to stress the CPU. Some made a weak attempt to explain, but none of them made clear that the point of the gaming benchmarks at 1080p in a CPU review is not to demonstrate how well this will do as a gaming machine, but to stress the CPU and its connectivity to the memory and GPU under a 3D graphics load.

Especially as they were saying this could be a replacement for gamers' i5-2500K CPUs, they could have followed up with a real-world "gaming experience" section using a selection of GPUs. Instead they just tested with the fastest GPU available, failed to consider that a $1200 GPU is not representative of all possible gaming scenarios even though they were pitching at i5 Sandy Bridge gamers, and made a blanket statement that Ryzen is terrible at gaming without even trying a lower-end graphics card.

Based on what you have written here, even you have looked at the reviews from your own perspective, looking at the review to see how good a gaming machine it is. That is understandable; you want to know if it will meet your needs. The initial reviews did not do enough work to evaluate "is it a good gaming machine" properly, but they led you to believe that they did by the way they wrote their conclusions.

If they had reported the difference in the gaming benchmarks and said "We are not currently sure why this is happening; we are working with AMD and the motherboard vendors to identify what is going on. The motherboards are currently receiving a constant stream of BIOS updates, so we will say that productivity performance is great, but we will reserve judgement on gaming performance until we have explored more possibilities and understand whether this is due to the immaturity of the product or some sort of fatal flaw", I do not think anyone would have gone mental, and they would have come across as professional and having integrity. Instead, the media itself has become the story.

AMD is probably just as much to blame for their poor communications and project management. They published lots of "technical" PowerPoint slides talking up neural networks, but none of it really explained the Data Fabric, which is the "glue" that binds everything together. Lots of words without actually saying anything.

They could have said they decided to go that way because they believe they can provide a wider-ranging selection of products at prices the market deserves, and everyone would have thought of them as heroes. It seems to indicate to me that Jim Keller designed a really flexible environment that can be leveraged to make many things. I'm sure he documented it, but no one, at least on the marketing side of AMD, has a complete grasp of the potential that has been designed into the architecture.

I think this has also shown that the methodology the media has adhered to for evaluating CPU performance is flawed around the edges. It does require some thought and understanding rather than blind adherence to a predefined process.

Generally speaking, 1080p with a fast GPU will show up limitations at the CPU end of the chain. The flaw starts with them not recognizing that there is a chain, and assuming it is only the processing cores. That had not really been evident when they only compared Intel with Intel, because the architecture of the chain is basically the same. Memory speed does not have a huge effect on performance with Intel chips, but it does on the AMD chips; ignoring that fact is part of the problem they created for themselves. When you compare different technologies/architectures that achieve similar results, it is not safe to apply assumptions made about one technology to the other just because they spit out the same answer.

3 Likes

That WCCF headline suggests that Intel runs well without optimized code. They have not learnt anything yet.

Why not report that the Ashes code existed before there were Ryzen optimizations, and that the patch brings the optimizations for Ryzen up to similar levels as were already enjoyed by Intel CPUs?

I see those bars as Ryzen being patched to work on par with the 6900K, and that's all they show, but it kind of gives the picture that memory speed is a Ryzen-exclusive thing. I want to see 2400 MHz and 3200 MHz from both, and on top of that, if Intel supports higher memory speeds, that as well.

As I said this is irritating.