AMD FX 8350 versus AMD Ryzen 7 1700 Scaling on Windows 10 1703

This post is part of a series exploring the following topics:

  • Testing designed to compare FX and Ryzen scaling with various workloads.
  • Testing designed to compare GTX 660 and GTX 1050 Ti scaling with various CPUs.
  • Testing designed to compare Windows 7 and Windows 10 under real-world idle conditions.
  • Testing designed to compare gaming/encoding performance while encoding (under CPU load) in the background.

Level1Tech Threads:

External Topic Index:

Disclaimers

The following benchmarks were performed with the following hardware configurations:

(Image: BenchmarksHWConfig2017.png — hardware configuration table)

  • Windows 7 SP1 (updated) and Windows 10 1703 (updated).
  • GeForce GTX 660 and GTX 1050 Ti, both at stock frequencies.
  • Tests focus on real-world configurations and actual usage variations, not solely hardware component isolation. For that, check out GamersNexus.
  • 1% lows, 0.1% lows, and standard deviation calculations (for accurate error bars) were not performed due to time and data-analysis limitations; see the sketch after this list for how they could be derived from frame-time logs.
  • For full disclaimers, detailed configuration information, and results data, please see the raw results Google doc; it is split into multiple tabs.
  • Regarding MetroLL and Ryzen's SMT.
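
Since 1% and 0.1% lows come up again later in this thread, here is a minimal sketch of how they could be derived from a per-frame time log (for example, a FRAPS or PresentMon frametimes dump). The file name and one-value-per-line format are assumptions for illustration, not part of the actual test setup:

```python
# Minimal sketch (not from the original testing pipeline): derive average FPS,
# 1% lows, and 0.1% lows from a per-frame time log in milliseconds.
# The file name "frametimes_ms.txt" and its one-value-per-line format are assumptions.

def low_fps(frametimes_ms, fraction):
    """Average FPS over the slowest `fraction` of frames (e.g. 0.01 for 1% lows)."""
    worst = sorted(frametimes_ms, reverse=True)      # slowest frames first
    count = max(1, int(len(worst) * fraction))
    slice_ms = worst[:count]
    return 1000.0 * len(slice_ms) / sum(slice_ms)

with open("frametimes_ms.txt") as f:
    frametimes = [float(line) for line in f if line.strip()]

print(f"average FPS : {1000.0 * len(frametimes) / sum(frametimes):.1f}")
print(f"1% low FPS  : {low_fps(frametimes, 0.01):.1f}")
print(f"0.1% low FPS: {low_fps(frametimes, 0.001):.1f}")
```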

Synthetic CPU/Memory Benchmarks

CPU-Z Single and Multithreaded


Passmark CPU Score

  • Passmark CPU is not a great benchmark.
  • Tests that should be identical, such as switching the video card in the FX system, show a roughly 100-point difference, which works out to an expected margin of error of about 1.6%.

Passmark Memory

  • So increasing clock speed helps memory performance marginally, but changing GPUs does not. Interesting.

MaxMEM2

  • My FX system has lousy memory writes.

7-Zip Benchmark


CineBenchR15 CPU Multithreaded

  • A Ryzen 1700 @ 3 GHz with SMT off scores about 1,000 cb. SMT increases raw performance by about 40% in apps that care about threads.

x265 Encoding Time

  • Do not use dual-cores for encoding.

x265 Encoding FPS

  • Ryzen 1700 @ stock has exactly twice the performance of an FX 8350 @ 3.4 GHz.
  • Given some rough pixel calculations, and that these are my typical clock speeds for both systems, Ryzen does 1080p in the same time frame as an FX system at 720p (see the quick pixel math below). Hello 1080p HEVC.
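
The rough pixel math behind that bullet, taking the roughly 2x encode-speed result from the chart above as given (a back-of-the-envelope check, not a measurement):

```python
# Back-of-the-envelope pixel-throughput check for the 720p vs 1080p claim above.
pixels_1080p = 1920 * 1080                # 2,073,600 pixels per frame
pixels_720p = 1280 * 720                  #   921,600 pixels per frame
pixel_ratio = pixels_1080p / pixels_720p  # = 2.25x more pixels per frame

ryzen_speedup = 2.0                       # Ryzen 1700 stock vs FX 8350 @ 3.4 GHz (from the chart)
print(f"pixel ratio: {pixel_ratio:.2f}, encode speedup: {ryzen_speedup:.2f}")
# ~2x the encode throughput covers most of the ~2.25x pixel increase,
# so Ryzen at 1080p lands in roughly the same time frame as the FX at 720p.
```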

Synthetic GPU Benchmarks

Passmark GPU

  • This is what proper scaling looks like. Every CPU core-count upgrade, clock speed increase, and video card upgrade registers.

CineBenchR15 OpenGL

  • No. Just no. What is this synthetic benchmark even supposed to be measuring? What real world app does this?
  • The margins for error are also very large with this test.

3DMark Firestrike Score

  • Firestrike is an excellent benchmark. It shows perfect scaling when increasing CPU frequency, changing CPU architecture, and/or changing the GPU, with very small error margins, regardless of background load conditions.
  • This benchmark is how games should perform if perfectly optimized.

3DMark TimeSpy Score

  • My 4850e is missing some instruction sets necessary for DX12 :(.
  • Perfect scaling, just like the Firestrike test.

Unigine-Heaven FPS

  • So the fundamental effect of increasing CPU performance is better frame times/minimums.
  • Since minimums are part of what determines playability, upgrading the CPU would seem important. Except that doing so yields marginal gains in averages compared to upgrading the GPU, which implies there is a balance to strike between the two.
  • Also: the average calculations for this benchmark do not adequately take the minimums into account.

Unigine-Heaven Score

  • The score completely ignores the minimums and related scaling.

Games

Tomb Raider

  • Tomb Raider does not care about your CPU.
  • This is one of the few games that actually is playable at 1080p on Ultra with a $45 2008 dual-core.

Metro Last Light

  • Regarding MetroLL and Ryzen's SMT.
  • This graph highlights so much that is wrong in the world of PC gaming. Win 10 has significantly better minimums than Win 7. AMD's SMT on Ryzen does not play well with MetroLL, and MetroLL will not get updates to fix that. Disabling SMT harms other apps, and re-enabling it requires a cold boot. Charts with minimums do not adequately describe frame times. The "Average FPS" calculation methods reported by games sometimes do not take minimums and stuttering into account.
  • Windows 10 does an excellent job at dealing with this horribly unoptimized game, even while also having to deal with Ryzen's SMT.

Shadow of Mordor

  • In Win 10, the gains from a better CPU at higher clock speeds are both marginal and consistent, exactly how things should be.
  • Note that an 8350 throttled to 3.4 GHz with a 1050 Ti substantially outperforms a Ryzen 7 1700 OC'd @ 3.7 GHz with a GTX 660, and both shift the bottleneck to the GPU at all clock speeds. Assuming you have an FX 8350/8370 with a GTX 660 or 1050 Ti, a better video card would net exponentially better gains than a more modern CPU.

Ashes of the Singularity Escalation

  • The scaling in Ashes is very similar to synthetic GPU benchmarks.
  • The 4850e shows what a real CPU bottleneck looks like in this game. Going from a GTX 660 to a 1050 Ti nets a 0.15 fps improvement, which is within margin of error.

Conclusions

  • Don't expect significantly better average performance in games when upgrading from an FX CPU to Ryzen; marginal gains, yes. That said, minimum frame rates can increase dramatically depending on the game, resulting in smoother gameplay.
  • If a game stutters constantly on Ryzen, try turning off SMT. Note that a cold boot is required to re-enable it.
  • An 8350 @ 3.4 GHz does not bottleneck modern entry-level/low-mid video cards like the 1050 Ti. A bottleneck would probably start to manifest around the 1070 or above based upon this and other benchmarks I have seen. In terms of minimums however, the story remains untold.

This is due to the odd RAM configuration on your FX system's mainboard. Don't mix and match different RAM.

The GPU is completely unrelated to this test; it purely tests CPU-to-RAM transfers, and the PCIe bus is not part of this :wink:

Encoding settings used?

I would suggest using a standard video such as the Blender Project Durian (Sintel) AVI for standardized testing; that way community members can replicate the test.

OpenGL is highly CPU dependent, particularly in Cinebench.

This is actually almost impossible to do for non-synthetic tests (i.e. interactive world games) compared to well-constrained synthetic benchmarks.

Something funky is going on there: DX12 is not bound by CPU capabilities (it's bound by GPU capabilities), and 3DMark Time Spy requires a CPU with SSE3, which your system does have.

Maybe post an exact screenshot of whatever error you get here, with the right GPU it totally should work.

In most cases here you are GPU bottlenecked.
Most game/engine pipelines work in this fashion:

Disk ↔ CPU ↔ RAM
        ↕
       GPU ↔ VRAM

The CPU has to fetch data from Disk to RAM, then perform transforms on the data in RAM and transfer data as needed to the GPU.
The minimum frames are times when your CPU took longer to transfer data to the GPU.
So in general, increases in CPU and memory throughput and compute improve the minimum frame times.
(This is a simplification of computer architecture but it serves to explain the situation better).

Well spotted. Not many testers know this, but the scores are based on the average framerate. Short-term minimums get averaged out over time; since they are short occurrences, it is hard, and does not make much sense, to bias the score around them. Runs with better (shorter) minimums do better anyway and end up with higher scores.
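
A toy example of why a brief stutter barely moves an average-FPS-based score (the numbers below are made up purely for illustration):

```python
# Toy illustration: five 100 ms hitches in an otherwise ~60 FPS run.
smooth = [16.7] * 995        # ~60 FPS frame times in milliseconds
hitches = [100.0] * 5        # five visible stutters
frametimes = smooth + hitches

avg_fps = 1000.0 * len(frametimes) / sum(frametimes)
worst_fps = 1000.0 / max(frametimes)
print(f"average FPS during run : {avg_fps:.1f}")   # ~58.4, barely below 60
print(f"FPS during worst frame : {worst_fps:.1f}") # 10.0, felt as a stutter
```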

This thing is an abomination, a lesson in how not to do concurrency programming; don't even use it to test CPU performance.

This game isn't programmed to deal with the complexities around SMT and has issues with complex CPU architectures. Setting core affinity for this game to only 4 physical cores is advised :wink:

That is not statistically significant, nor is there enough data to draw any conclusions like that.
Much better is:

A Better GPU may help overcome the GPU bottleneck to get you more FPS on a modern CPU than an older CPU.

That is all one can say there with certainty. It may not get you better FPS on the older 8350, since who knows how close to a CPU bottleneck you are on that architecture.

This game is optimized for Ryzen, multicore CPUs, and SMT/HT.
The effects of a GPU bottleneck are very apparent, however, with the minimums and averages only ~5 fps apart, likely sitting somewhere around the 33 fps line with peaks above and below producing the stated averages.

Conclusions:

Don't buy a high-end R7 1700 8-core/16-thread CPU and then use an old or budget GPU with it. :wink:
A high-end CPU requires a high-end GPU.

A GTX 970/980/1060 or better, or an AMD Radeon 390, Fury, or RX 580 and up, is required to realize your CPU's full potential.


Quantity matters more than speed for most workloads that I do, especially VMs. Given that I am very sensitive to pricing, odd RAM combinations are unavoidable sometimes. Even my Ryzen system shows the same lopsidedness in this area and I have no intention of fixing it, but rather making it worse since I need quantity more than speed going forward.

Most details like that (version numbers and settings) are in the Google doc, in the method-something section.

x265 v2.4 AVX, 2160 1080p frames scaled to 720p and encoded at crf=17, yuv44p, preset veryslow, 10-bit. The 4850e CPU lacks AVX, so I used the normal, non-AVX x265.exe.
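
For anyone who wants to replicate something close to that encode, here is a rough reconstruction of the settings as an ffmpeg/libx265 invocation wrapped in Python. The file names, the exact pixel format (assumed 10-bit 4:2:0 here), and the use of ffmpeg rather than a standalone x265.exe are assumptions, not the original pipeline:

```python
import subprocess

# Rough reconstruction of the encode described above: 2160 frames of a 1080p
# source scaled to 720p, crf=17, preset veryslow, 10-bit output.
# File names and pixel format are placeholders/assumptions.
cmd = [
    "ffmpeg", "-i", "source_1080p.mkv",
    "-frames:v", "2160",            # limit the encode to 2160 frames, as in the test
    "-vf", "scale=-2:720",          # scale the 1080p source down to 720p
    "-c:v", "libx265",
    "-preset", "veryslow",
    "-crf", "17",
    "-pix_fmt", "yuv420p10le",      # 10-bit output; exact chroma subsampling assumed
    "encoded_720p.mkv",
]
subprocess.run(cmd, check=True)
```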

I tried to state the obvious when there was nothing else worth commenting on, and I kept my comments tied to that specific application and that specific chart. Different applications of course respond differently, so, with some exceptions, I tried not to generalize.

For DX12, the issue with the 4850e is actually reproducible and present with all CPUs Phenom II and older, regardless of application. GamersNexus had the same issue when testing DX12 on very old CPUs. So this is clearly an architectural issue, not purely an application-level one. Sometimes DX12 apps refuse to run or just crash; other times they finish, just barely.

Tomb Raider is awesome. I approve of their architecture. If I ever decide to install MLL again, I will try core affinity versus disabling SMT and compare the results.

I do 99% productivity (VMs/encoding/CPU rendering). High-end GPUs show roughly 0% scaling with x265 and VMware and are thus a complete waste of money for my use, hence buying a 1050 Ti.

For that specific game, under those specific conditions tested, it certainly is. An 8350 throttled to 3.4 GHz with a 1050 Ti will always outperform a Ryzen 7 1700 OC'd @ 3.7 GHz with a 660 in Shadow of Mordor on Ultra @ 1080p on Win 10. Always. Different games/settings are a different story.

edit: typos

Screams Internally.

Thanks for Linking

I should introduce you to the wonders of GPU encoding or offload :smiley:
It's perfectly usable hardware you are neglecting there. Modern GPUs can do very respectable HEVC (H.265) encoding (and decoding). I'm talking 360+ fps here @ 1080p; sure, it's not as high quality, but the difference is tiny for basic testing projects and the like. I use it a lot for test runs of footage to see what my effects look like.
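
For reference, a hardware-encode run along those lines could look like the sketch below (NVENC HEVC via ffmpeg). Of the cards in this thread, the GTX 1050 Ti supports HEVC NVENC while the GTX 660 does not; file names and quality settings are placeholders, and exact preset names vary between ffmpeg/driver versions:

```python
import subprocess

# Sketch of a GPU (NVENC) HEVC encode via ffmpeg, as suggested above.
# Requires an NVENC-capable NVIDIA GPU and an ffmpeg build with hevc_nvenc.
cmd = [
    "ffmpeg", "-i", "source_1080p.mkv",
    "-c:v", "hevc_nvenc",   # NVIDIA hardware HEVC encoder
    "-preset", "slow",      # preset names differ across ffmpeg/driver versions
    "-rc", "vbr",           # variable bitrate rate control
    "-cq", "23",            # constant-quality target (placeholder value)
    "encoded_nvenc.mkv",
]
subprocess.run(cmd, check=True)
```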

No offense, but you're futzing with data here:

VS

Focus on: would net exponentially better gains

Should be:

Assuming you have an FX 8350/8370 with a GTX 660 or 1050 Ti, a better video card may or may not yield a better framerate than a more modern CPU with the stated video cards.

It is not guaranteed to gain you anything, since you have not tested an 8350 with a better GPU; you might assume, or even know, this to be true, but you cannot conclude it from your data alone. Stick to the limits of your data. :wink:

For all we know, a CPU bottleneck may also occur immediately from a GTX 1060 onwards, thus requiring a better CPU again.


This is an interesting Phenom-enon, I guess.
I can say that older Intel Core 2 Duos with SSE3/SSSE3/SSE4.1 have no such problem, beyond just being dreadfully slow. :laughing:

PS:

Why did you copy-paste the same topic with minor variations 4 times across the forum?
This is not how we do things here; I'll let it pass, but constrain yourself, man :slight_smile:

If you want likes, one comprehensive post with this spreadsheet prominently featured will do you wonders!

You can make a 'Wiki' post and add everything you'd ever want, or, even better, use your blog as the information source and have the forum topic here host the discussion around it.


This is very helpful.

I think we are getting cross-talk on formatting issues. Take a look at the project goals:

  • Testing designed to compare FX and Ryzen scaling with various workloads.
  • Testing designed to compare GTX 660 and GTX 1050 Ti scaling with various CPUs.
  • Testing designed to compare Windows 7 and Windows 10 under real-world idle conditions.
  • Testing designed to compare gaming/encoding performance while encoding (under CPU load) in the background.

In line with those objectives, each set of charts is actually a completely different topic. The 4 threads I have, 8 actually with Win7, are about:

  1. Application-specific performance
  2. Operating System Comparisons
  3. Idle vs Load performance
  4. Video card scaling

These are not the same topics and every chart in each thread is unique, as are the associated conclusions. They are also in different categories to reflect that.

For this set of charts, every graphic tries to analyze exactly how the application responds to hardware changes, and the bullet points below each one summarize that. The application's graphic should indicate where one should invest (CPU/GPU/operating system/overclocking) to maximize that application's performance. There are also bullet points at the very end, after all of the graphics, which give a short summary of all of the graphs.

The conclusions specific to each application can be quite provocative. For example, investing in a better GPU shows 0% scaling in x265. Investing in a better CPU shows 0% scaling in Tomb Raider. The conclusions overall tend to be more balanced:

"[For the 8350 in games, a] bottleneck would probably start to manifest around the 1070 or above based upon this and other benchmarks I have seen. In terms of minimums however, the story remains untold."


For Shadow of Mordor on Ultra @ 1080p, the FX 8350 @ 3.4 GHz is enough to shift the bottleneck to the GPU. It makes sense that a better GPU will lead to a better framerate, exponentially so, and this is supported by the data. This is implied by comparing the gap in performance between the GTX 660 and 1050 Ti, which is massive (61.4%), with the marginal gains from increasing CPU performance (5.9%).
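
For clarity on how those percentages are being read, they are just relative FPS deltas; the numbers in the sketch below are placeholders chosen only to land near the quoted figures, not the measured results:

```python
def percent_gain(baseline_fps, upgraded_fps):
    """Relative FPS gain from an upgrade, as a percentage of the baseline."""
    return 100.0 * (upgraded_fps - baseline_fps) / baseline_fps

# Placeholder values for illustration only (not the measured data):
print(f"{percent_gain(40.0, 64.5):.1f}%")  # ~61% class gain, like GTX 660 -> 1050 Ti
print(f"{percent_gain(60.0, 63.5):.1f}%")  # ~6% class gain, like the CPU-side bump
```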

It is reasonable to assume that the trend of high % gains from the GPU and low % gains from the CPU will continue with GPU upgrades until some point X. After GPU upgrade X, the % gained by upgrading the CPU would then equal or exceed the % gained by continuing to upgrade the GPU. It is reasonable to wonder what that point X is. It is unreasonable to argue that a marginal upgrade from the existing system could suddenly erase the seemingly massive % gains from a GPU upgrade, because the game continues to be GPU bound, as the FX->Ryzen benchmarks indicate. If a vastly more powerful video card were added, could the % performance increase from upgrading the CPU overtake that of the GPU? Yes. But with a marginal video card upgrade in a GPU-bound game? No.

That conclusion does not suddenly become invalid because I do not have a 7700k and 1080 Ti lying around to test with. Want to know who does?

We should not pretend not to understand the obvious implications of the data in front of us. If you really want to know how to increase application performance relative to available hardware, which is the entire point of this set of charts, then ignoring the obvious conclusion of how to do so is tantamount to rejecting the data itself. In which case, you had better have your own data to counter with. Wanting Shadow of Mordor to show 0% GPU scaling without a better CPU will not make it so. Heck, that game even shows very respectable GPU scaling on a lousy 2008 dual-core. Compare that to Ashes, which really does show 0% GPU scaling on a 2008 CPU. What the application needs to perform better is obvious, and ignoring that conclusion is simply covering one's ears and eyes to what the data suggests.

It is also the case that benchmarks are not performed in a vacuum. As it turns out, humans normally build upon the work already done by other people. For that reason, it does not make sense to limit one's conclusions to one's own set of data; rather, it is a major goal of benchmarking to combine data with others', both to validate it and to fully understand whatever is being tested.

In that spirit, while this GPU scaling is implied by the data above, here is Shadow of Mordor GPU scaling with an Intel i7-5930K (a $600 CPU):


(Chart: Shadow of Mordor GPU scaling. Source: GamersNexus)

Shadow of Mordor with a 1050 Ti on Ultra @ 1080p in Win 10 does not go far beyond 60 fps, regardless of CPU, whether that is an 8350 @ 3.4 GHz or an OC'd $600 Intel i7. Want better performance? A better GPU will give you exponentially better performance than a CPU upgrade, as was implied by the original charts above and has now been demonstrated with this chart.

The point being, this graph was not necessary to know a better GPU would give better performance in Shadow of Mordor and a better CPU would have marginal impact. It should be obvious what the primary scaling factor is when looking at the application-focused graphics above.

How does x265 scale with better GPU than a 1050 Ti? How does Tomb Raider scale with CPU capacity and utilization? What does the Ashes scaling look like? These are answerable questions, especially when considering the data other people have already published.

Unrelated: hardware HEVC encoding has lousy quality and relatively large bitrates for what it is, so thanks for pointing it out, but if that were sufficient for my use case, I would just have purchased a new video card and not bothered with a new system.

edit: typos and clarification

First thing I want to make clear: I am here to critically analyze your data and your testing methodology. There is no need to get offended.

There are reasons I am asking you these questions in such a manner and writing certain things: it is to look for holes in your testing, and so far your data and reasoning have held up very well.

You are only missing a few small details and a hypothesis as part of your overall test. :slight_smile:
A hypothesis is kind of slightly very important.

First thing: it is logarithmic, not exponential. Performance improvements plateau at a certain point.
A minor but important distinction.

Second: your test is not a test with an i7-5930K; yours is a test with an FX-8350 and various other CPUs. Never mind the entirely different rest of the test system. They may have GPUs tested in common, but a reference point for how an FX-8350 will behave it is not.

There is a presumption you make, and you have no need to make it; your data stands fine on its own. It's also far more comprehensive than most GamersNexus articles.

Here's an example chart from the link below comparing various CPUs across a variety of tasks.
http://www.anandtech.com/show/8426/the-intel-haswell-e-cpu-review-core-i7-5960x-i7-5930k-i7-5820k-tested/5

Here is the source of the chart you linked.

Now going back to that article regarding the GPU tests.

They were using an i7-5930K under Windows 7 with a game that has not received patches since then.
In said test, a 1050 Ti Gx was getting around 53 fps on Very High @ 1080p.
Yet in your tests with the FX-8350 under Windows 10 with your 1050 Ti, you were getting almost double that on Very High @ 1080p.

All I can say to this:

Why, Microsoft, do you do this?

Overall I think you've done a very good job! :smiley:

Just don't assume others' motives so quickly.

I did not mean to come off as confrontational. If I did, I apologize. I am also focused on the data, and some of your comments showed you had not read some of my disclaimers/data. That is still apparent.

The i7-5930K comparison for non-MLL games is meant to show that decreasing the CPU bottleneck further will not result in any significant gains. Better WinRAR performance, or in my case 7-Zip, will not yield significantly better gaming performance, as my charts above show.

Metro Last Light has issues. A lot of them. I deliberately avoided using MLL from GamersNexus's charts for the following reasons:

  • How Windows 7 responds to it is different from Windows 10, which shows substantial, honestly astonishing, improvements over Windows 7 in that specific game. I tried to minimize the factors in play, so that meant sticking to the Win 10 1703 benchmarks.
  • How FX responds to it is different from Ryzen, since MLL does not handle SMT well (at default settings), and I did not test Intel's Hyper-Threading or do extensive testing with SMT disabled.
  • The scaling, while both unoptimized and oddly CPU-centric as far as minimums are concerned relative to the other games, hurts average frame rates (as it should), making averages also incomparable.
  • Minimums, while related to 1% lows and 0.1% lows, are not directly comparable to them. The minimums, 1% lows, and 0.1% lows do not show the stuttering mess that is MLL, due to how MLL calculates them. There are times when a Ryzen SMT-enabled system on Win 7 gets <2 FPS; the entire screen just stutters for half a second or more.

The MLL numbers are completely incomparable, mostly due to issues related specifically to MLL, Ryzen and how Windows 7 and Windows 10 interact with that specific game.

Ryzen is incomparable, the minimums are incomparable, and MLL does factor minimums into its averages, so Win 7 vs Win 10 is incomparable.

But... I did actually do Windows 7 benchmarks for the game, and the averages are still directly comparable as long as the OS doesn't change. Want to guess what they turned out to be in Win 7?

The 8350 @ 4.0 GHz in Win 7 shows the same GPU bottleneck for averages with a 1050 Ti that a $600 Intel i7-5930K does: 58.80 FPS vs GamersNexus's 55 FPS, a ~4 FPS difference, within margin of error for different test beds, not "double the performance." My Win 7 vs Win 10 charts show this, except in MLL, due to a lot of specifics related to Metro Last Light. Except for MLL, Win 7 does not perform better or worse than Win 10 with a 1050 Ti in other games, meaning combining data with GamersNexus on the Shadow of Mordor scaling works because the tests are comparable overall.

Does this mean an FX 8350 is a better CPU for gaming than a $600 i7-5930K? No. What it shows is a GPU bottleneck, and that upgrading the GPU, as shown in GamersNexus's GPU scaling chart for that game, will net larger performance gains, by several factors, than a CPU upgrade.

When it comes to gaming performance, it of course all depends on which particular GPU you use,
and at what resolutions and settings you play your games.

Agreed. Hopefully my charts can help people understand that and focus more on the components that scale best with their specific applications. For games, that is almost always the GPU. A Ryzen upgrade can help minimums in some cases, but it is still mostly the GPU, even given an aging FX system. For CPU-bound productivity, the reverse tends to be the case, since those workloads scale directly with the CPU and could not care less about the GPU involved.