AMD Ryzen: Part 1 The Chip, The Myth, The Legend | Level One Techs

It seems like the prophet Wendell might have struck again. AMD X390 and X399 chipset leak?
I hope things get very interesting.

1 Like

Applications like Photoshop or Lightroom tend to be run as the main focused task, and performance is measured by the response time of a filter you expect to be applied in real time as you work on the image. Chrome or your developer suite may be idling in the background but not hogging resources. If that type of workload is the only thing you are doing at any one time, the 7700K is probably a better choice. Even Premiere, with a decent GPU installed and running as a single focused application, is likely to appear snappier.

Where the 7700K falls down and Ryzen benefits is when you are editing in Photoshop and rendering a 4K video in the background while watching a video in the corner of the screen at the same time. You can still do that on a 7700K, but Photoshop becomes laggy and filters take much longer to run as you run out of computing resources, while Ryzen, which may be slightly slower in a single-focus environment, just keeps powering on because it has double the threads. Broadwell-E is in a similar position to Ryzen in its multitasking capabilities, but the price is double that of the AMD product.

Quad cores vs. eight cores is analogous to the difference between a sports car and an SUV. They are both motor vehicles that can transport you from A to B; however, the sports car can get there a bit quicker. The SUV may arrive slightly behind, but it does so while carrying a ton of cargo that you can't fit in the sports car. The tuning of the SUV has not been finalized, and they are still trying to tune it up so it can carry 1.25 tons of cargo.

Neither approach is wrong; they both serve different needs.

Some of the furor in the tech press has come from presenting gaming benchmarks based on the assumption that they are both just CPUs. Every test they have presented has been one-dimensional. How many benchmarks have you seen that compare gaming performance while a 4K video render runs in the background, for example? Tests like that would help illustrate the car analogy I wrote above in Ryzen/Intel performance comparison terms. Unfortunately, no one went down that path of investigation. Instead they jumped to wrong or only partially correct conclusions and stopped looking at anything else.

As a developer, if you have to run resource-intensive compiles and they stop you from doing other things while they are running, Ryzen is the cost-effective platform you should be looking at. If money is no object and you want a completely mature platform, Intel X99 should also be considered.

Is there any way I can get you guys to benchmark a program for me? It's the research program I've posted about a few times now.

Just compiled the Linux kernel in like a minute and a half. Holy shit, this thing is fantastic.

Edit: For comparison, it took my Raspberry Pi Zero like 12 hours to compile the kernel.

2 Likes

link?

2 Likes

Here's the original thread:

Right now, I have it running near 4 s on i7-4770Ts, but I'd like to better understand its system bottlenecks in order to decide whether it is best to overlap executions or serialize them, as well as perf/$ on the new architecture.
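Something like this rough harness is how I'd frame the overlap-vs-serialize comparison on a given box (a minimal sketch: ./my_program is just a placeholder for the actual research binary, and the run and worker counts are arbitrary):

```python
import subprocess
import time
from concurrent.futures import ThreadPoolExecutor

CMD = ["./my_program"]   # placeholder for the real binary
RUNS = 8                 # arbitrary batch size
OVERLAP = 4              # executions to keep in flight at once

def run_once(_=None):
    subprocess.run(CMD, check=True,
                   stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)

# Serialized: one execution at a time.
start = time.perf_counter()
for _ in range(RUNS):
    run_once()
serial = time.perf_counter() - start

# Overlapped: several executions in flight at once.
start = time.perf_counter()
with ThreadPoolExecutor(max_workers=OVERLAP) as pool:
    list(pool.map(run_once, range(RUNS)))
overlapped = time.perf_counter() - start

print(f"serial: {serial:.2f}s  overlapped: {overlapped:.2f}s  "
      f"speedup: {serial / overlapped:.2f}x")
```

If overlapping buys little or nothing over serializing, the bottleneck is probably a shared resource (memory bandwidth, cache, or storage) rather than core count.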

Thanks!

I would like Zen 2 to use PCIe gen 4.

What I mean is: AMD could release, say, an X470 chipset that uses PCIe gen 4 but reverts back to PCIe gen 3 when a Zen 1 chip is used, and the Zen 2 chips would support PCIe gen 4 but revert to gen 3 when put into, say, an X370 chipset.
This would allow faster NVMe drives and PCIe gen 3 off the chipset.

Yes. My point was mostly that running the exact same workload multiple times in a row is the ideal case for a branch predictor with memory, because it will be able to make a perfectly accurate "prediction" given enough history.

I've also come to the conclusion that the "neural net" is simply a way to learn. Perhaps I was trying to be a bit too obtuse with my previous comment. But the idea is that a neural net will learn to predict the branches from past observations. So while it's not actually storing a bunch of data, I thought it was "cute" to think of it as a lossy compression algorithm. Perhaps it mostly made my idea more difficult to get across, though. :-)
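To make the idea concrete, here is a toy perceptron-style predictor (purely illustrative; AMD hasn't published the details of its "neural net" predictor, so the structure and thresholds here are my own assumptions):

```python
class PerceptronPredictor:
    """Toy perceptron branch predictor over a global history register."""

    def __init__(self, history_len=8):
        self.history = [1] * history_len          # recent outcomes as +1/-1
        self.weights = [0] * (history_len + 1)    # index 0 is the bias weight

    def _output(self):
        return self.weights[0] + sum(
            w * h for w, h in zip(self.weights[1:], self.history))

    def predict(self):
        return self._output() >= 0                # True = predict "taken"

    def update(self, taken):
        t = 1 if taken else -1
        y = self._output()
        # Train only when the guess was wrong or not confident enough.
        if (y >= 0) != taken or abs(y) < len(self.history):
            self.weights[0] += t
            for i, h in enumerate(self.history, start=1):
                self.weights[i] += t * h
        # Shift the newest outcome into the history register.
        self.history = [t] + self.history[:-1]

# A perfectly repeating workload is the ideal case: the weights quickly
# encode the pattern, so the "prediction" becomes essentially a replay.
predictor = PerceptronPredictor()
pattern = [True, True, False, True] * 50
hits = 0
for outcome in pattern:
    hits += predictor.predict() == outcome
    predictor.update(outcome)
print(f"accuracy on a repeating pattern: {hits / len(pattern):.0%}")
```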

So, for part two, we're cooking up something special. :D

The Master Race tribes beating each other to death with WUVRGB peripherals live on Twitch?

Brought to you live by Microsoft. Putting the Cort in your Katana.

In my experience, doing 4K encoding in the background while editing images in Photoshop is not a realistic use case. And if a 7700 is decoding a video, it's doing it with hardware acceleration, so that's no biggie. (Personally, when I do long edits I do watch YouTube or Netflix at the same time, but I run that on a laptop on the side.)

I have to say I'm happy that I found those benchmarks by Puget Systems, because I feel a lot of people in the tech community are a bit disingenuous about how useful more cores actually are. Basically anything made by Adobe is garbage at using more than 4c/4t. I have been running a 6c/12t Intel for several years now and have found the performance to be quite lackluster. (But again, part of the problem is that Adobe doesn't really seem interested in optimizing their stuff.)

The issue I see is that for many workstation use cases a 7700K may actually offer the best performance at the lowest price.

And as someone who has been developing professionally for over 10 years, I'd say that if it takes more than 30 seconds to do a compile, your build environment is broken. Sure, if I need to rebuild a big project like the Linux kernel, it will take longer on a 7700K. But that's not really anything I do often. And even when I did develop on the core of Android (which required building the entire kernel), you only do a full rebuild once a day or so at most. And then you get coffee.

Again, I'm not trying to downplay the importance or performance of Ryzen. I just found that the more I look into it, the more it seems like there is a "cult of cores" with limited applicability in the real world. If you need cores, then get them! But it could pay off to make sure first that you actually do something that can put them to use. Otherwise put the money towards something else, like a faster SSD, more memory or a better graphics card.

If you are running Chrome at defaults and watching YouTube, the video decoding is being done on the GPU and doesn't hit the CPU much at all.

How the hell am I supposed to know exactly how you use your PC? The 4K encode scenario was simply to illustrate my point; you could be running other VMs, or gaming and encoding for a live stream, or doing none of the above.

How much money did you have to invest to buy a 6 core PC plus a laptop?

Just because you do not, or choose not to, multitask doesn't mean that no one else wants to. From what you are saying, a 7700K is probably better for your use case if you only look short term. I think every review I have seen has recommended the same thing as well.

What you don't seem to be considering is that 4-core CPUs are getting close to the end of the line for mainstream high-performance CPUs, just as dual-core desktop chips did about 6 years ago. Silicon is getting close to its limits at around 5 GHz, so the only ways to increase compute power, which is what Intel and AMD have to do to stay in business, are to replace silicon with something else that is more efficient or clocks faster, or to increase core count. Right now there are things in lab development, but nothing about to hit the market that can take the place of silicon, so that only leaves more cores. The 8-core chips will remain viable for a longer period of time than a 7700K will as 6- or 8-core chips become the norm.

1 Like

Or we could say "fuck you" to cores and program in assembly. /s

I want moar cores.

1 Like

Thanks, nice overview. Enjoyed that.

1 Like

Is there any chance of seeing an ASUS Crosshair 6 Hero review?

Regarding usage scenarios, are Windows workstation users stuck in the DOS era? Even if the programs you use don't scale across all cores, wouldn't it be awesome to, for example, render something in SolidWorks while strolling happily through the Northern Kingdoms in The Witcher 3?

The Amiga introduced multitasking at a "normal" price in 1985; I can't understand why I should have to consider performance running only one major task on my PC in 2017.

The thing I find a bit disingenuous is that all the reviews I've seen have stated that if you want gaming, the 7700K is better, and if you want productivity, Ryzen is better. That seems to be the mantra even at places like Level1Techs.

My gripe with that is that it's not even that clear cut. There are quite a lot of productivity tasks that don't use more cores particularly well, even in really "highest end" professional programs like Lightroom and Photoshop. (And Lightroom at least should be able to use multiple cores fairly easily.)

I do hope this will mark a turn and that we will see more effort put into multi-core work now. It's certainly about time. But I think the "more cores for productivity" angle is a bit over-hyped. If you really want more cores now, I think the best bet might be to get a 1600X and then possibly upgrade later when future versions of Ryzen roll out. At least if you want bang for your buck.

EDIT: If you are doing something where you know you will have use for more cores then do that. But if you're not sure then I would find some benchmarks before jumping in if price/performance is important to you.

I agree; running a single series of benchmarks with the highest-end GPU and then announcing to the world that Ryzen gaming is terrible, even though it is still better than 98% of all computers on earth, doesn't do too much for their credibility. Wendell has not taken that sensationalist path, though.

Reporting an observation and investigating further is not the same thing as drawing a conclusion from minimal and selective data. Ryzen, from all the benchmarks, does appear to be the best price/performance productivity machine going at the moment, but that statement doesn't make any judgement on other uses of the PC.

In gaming, absolute frame rates are down compared to the 7700K, but in the scheme of things the 7700K is probably at about 200% of what is required and Ryzen is only at 180%. They both exceed requirements, so it is quite a ridiculous thing to be worried about.

It does have one flaw/issue that could, in the long term, disadvantage the entire family of chips if AMD can't address it without re-engineering the whole architecture: the bottleneck they have created to the memory controller, which is causing the slower gaming performance. If they have some way of increasing the data fabric clock in addition to increased memory frequency, such as making it 1:1.75 of the memory frequency instead of 1:2, that would alleviate the bottleneck somewhat, but I don't know if that is a microcode/UEFI thing or a physical hardware limitation.
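As a rough back-of-the-envelope (a sketch only, assuming fabric bandwidth scales linearly with the fabric clock and nothing else changes):

```python
# Moving the data fabric from 1:2 to 1:1.75 of the memory frequency would be
# worth roughly 14% more fabric bandwidth at the same memory speed, assuming
# bandwidth scales linearly with the fabric clock.
current, proposed = 1 / 2, 1 / 1.75
print(f"relative fabric bandwidth: {proposed / current:.3f}x")  # ~1.143x
```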

More multi-core development is inevitable, I think. The challenge in getting there is not the tools but actually finding developers who have learned how to think in terms of parallelism and can write the appropriate code in the first place. Thinking that way is not something anyone really learns by accident.
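The tools themselves are mostly already there. As a trivial illustration (a minimal sketch, not tied to any particular application), spreading embarrassingly parallel work across cores can be just a few lines; the hard part is carving real workloads into independent pieces like this:

```python
from concurrent.futures import ProcessPoolExecutor

def expensive(n):
    # Stand-in for real per-item work (a filter, a tile render, a test case...).
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    jobs = [2_000_000] * 16
    # ProcessPoolExecutor defaults to one worker per CPU, so extra cores
    # get used automatically when the work is split this way.
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(expensive, jobs))
    print(f"{len(results)} jobs done")
```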

I certainly agree with researching before jumping in.

It is still the age-old problem: software and programs have to be written to take advantage of moar cores. There is still a very long road ahead. Maybe now developers will learn to start leveraging moar cores? I think of all these years of playing with moar cores and how little effort has been made in that regard. We should have addressed this a long time ago. :(

@wendell
Where can I get those Ryzen docs you mentioned? Can't find 'em anywhere except that OC doc.

There are a number of whitepapers and non-marketing things out there. I did a quick search and all I got was http://32ipi028l5q82yhj72224m8j.wpengine.netdna-cdn.com/wp-content/uploads/2017/03/GDC2017-Optimizing-For-AMD-Ryzen.pdf which is a start.

3 Likes

For some reason AMD has never published a diagram that shows how Ryzen is actually architected as a whole; they have only ever published partial diagrams that omit relevant details, such as how the memory and PCIe connect to both CCXes over interconnects that could potentially create a bottleneck.

I decided that it would be interesting to put the jigsaw together and produce a data flow diagram that may help everyone understand how Ryzen's I/O works in its entirety.

As demonstrated in a Hardware Unboxed video yesterday, the performance issue is not caused by inter-CCX thread switching, as was widely believed.

This diagram does suggest to me that the gaming slowdown is caused by a bottleneck created at the 32 byte/cycle (18.75 GB/s with 2666 MHz memory) interconnect to the memory controllers as it tries to service traffic from the combined 96 byte/cycle bandwidth of both CCX modules and the PCIe controller when they are all running under load, like when you are gaming.
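A quick sanity check on those figures (taking my reading of the diagram at face value, so treat the numbers as approximate):

```python
# Worst case from the diagram as I read it: both CCX modules plus the PCIe
# controller can present up to 96 bytes/cycle combined, while the link to the
# memory controllers moves 32 bytes/cycle.
combined_demand = 96   # bytes/cycle from 2x CCX + PCIe controller
memory_link = 32       # bytes/cycle into the memory controllers
print(f"oversubscription under full load: {combined_demand / memory_link:.0f}:1")
```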

5 Likes