Part 2 is going to be testing. Look, the testing and benchmarks on this thing were all over the place, and they don't tell the whole story.
When we did our 1080p testing with the Fury and 980 Ti, the 7700K was significantly faster at high FPS, but the results were the opposite in real-world testing.
However, just a week later with the 1080 Ti, the Ryzen 7 1800X (and the Ryzen 7 1800X with 2 cores disabled [3+3]) was much closer in performance to the 7700K (!?!)
So, for part two, we're cooking up something special. :D
Wait... Ryzen has a neural net for branch prediction... I'm getting goosebumps just thinking about it... I really, really want to know how that is implemented.
I think the secret ingredient in "neural net branch prediction" is marketing. ;-)
Seriously though, I bet it mostly works as a way to compress data from previous executions, so that it can predict which path the code will take on the next run. If anyone is really interested in looking into it, I'd suggest running different benchmarks in succession and seeing how long the prediction memory lasts. The ideal case for it is running the exact same workload several times in a row, which is exactly what you do when benchmarking, but not nearly as useful in real-world scenarios. (In the real world you typically render different things each time.)
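If you want to poke at that yourself, here's a rough C++ micro-benchmark along the lines I mean. To be clear, the array size, the 128 threshold, and the pass count are arbitrary numbers I picked, not anything tied to Ryzen specifically:

```cpp
// Sketch: time the same branchy loop over several passes, alternating
// between a predictable (sorted) and unpredictable (random) branch pattern.
#include <algorithm>
#include <chrono>
#include <cstdio>
#include <random>
#include <vector>

// Sum only elements above a threshold; the if() is the branch under test.
long long branchy_sum(const std::vector<int>& data) {
    long long sum = 0;
    for (int v : data) {
        if (v >= 128) sum += v;  // prediction accuracy dominates runtime here
    }
    return sum;
}

int main() {
    std::mt19937 rng(42);
    std::vector<int> random_data(1 << 20);
    for (int& v : random_data) v = rng() % 256;
    std::vector<int> sorted_data = random_data;
    std::sort(sorted_data.begin(), sorted_data.end());  // makes the branch predictable

    for (int pass = 0; pass < 8; ++pass) {
        // Alternate workloads between passes to stress any cross-run memory.
        const auto& data = (pass % 2 == 0) ? sorted_data : random_data;
        auto t0 = std::chrono::steady_clock::now();
        volatile long long s = branchy_sum(data);
        auto t1 = std::chrono::steady_clock::now();
        (void)s;
        std::printf("pass %d (%s): %lld us\n", pass,
                    pass % 2 == 0 ? "sorted" : "random",
                    (long long)std::chrono::duration_cast<
                        std::chrono::microseconds>(t1 - t0).count());
    }
}
```

The sorted/random gap shows how much prediction accuracy alone is worth. To actually probe how long the memory lasts, you'd swap in two different-but-predictable patterns and watch whether the first pass after each switch is slower than the later ones.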
I was looking into getting a Ryzen for "productivity" tasks. (Mostly because I wanted a new toy with many cores at an affordable price.) But I found some benchmarks at Puget Systems (https://www.pugetsystems.com/all_articles.php), and it seems that for my tasks, mostly Lightroom and Photoshop, a 7700K performs significantly better. And in Premiere (which I'm looking to get into, but don't use currently) the benefit of more cores isn't as big as I'd hoped.
I do code all day as well, but honestly that's not really a big time sink for me normally. (The actual compilation I mean.)
Hopefully all of this will mean that we get more focus on more cores, and then actually get programs that use them properly. Adobe's programs are really quite poor at that, particularly considering a program like Lightroom should be able to parallelize trivially by just processing multiple images at once.
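To make the "trivially parallel" point concrete, here's a minimal C++ sketch of the batch-export idea. Note that develop_image() is a made-up stand-in for the real per-image work; Lightroom's actual pipeline (shared caches, GPU paths, disk I/O) is obviously messier:

```cpp
// Sketch: fan a batch of independent images out across all cores.
#include <chrono>
#include <cstdio>
#include <future>
#include <string>
#include <thread>
#include <vector>

// Hypothetical stand-in for the real per-image develop/export work.
void develop_image(const std::string& path) {
    std::this_thread::sleep_for(std::chrono::milliseconds(200));  // fake work
    std::printf("done: %s\n", path.c_str());
}

int main() {
    std::vector<std::string> paths = {"a.raw", "b.raw", "c.raw", "d.raw"};
    std::vector<std::future<void>> jobs;
    for (const auto& p : paths)
        // Each image is independent, so no locking is needed; the runtime
        // can spread the jobs across however many cores are available.
        jobs.push_back(std::async(std::launch::async, develop_image, p));
    for (auto& j : jobs) j.get();  // wait for the whole batch
}
```

Since each image is independent, this kind of fan-out is about the easiest parallelism there is, which is what makes Adobe's weak scaling so frustrating.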
Not necessarily true. In data-parallel tasks where the data all has a similar basic nature, better branch prediction that learns from history can work. Most DSP tasks can easily have the same code running many times and taking the same branches during execution, even when the input data differs.
Also, the point of the neural net is that it can learn from history without having to store that history. This means that you might not see much change in the prediction memory (if it actually has a neural net).
EDIT: Also, is there a widely available application that can see the size of the memory elements in the branch predictors? It's a bit too low-level, isn't it? Let me know if you know one, because that would be interesting.
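For what it's worth, the best-documented "neural" predictor in the literature is the perceptron predictor (Jiménez & Lin, 2001), and it's widely assumed, though not confirmed by AMD, that Ryzen does something along these lines. Here's a toy sketch of the idea; the history length is arbitrary and everything is heavily simplified:

```cpp
// Toy perceptron branch predictor: learns from branch history without
// storing that history verbatim, only a handful of weights per branch.
#include <array>
#include <cstdio>
#include <cstdlib>

constexpr int HIST = 16;  // global history length (arbitrary choice here)

struct Perceptron {
    std::array<int, HIST + 1> w{};  // w[0] is the bias weight
    // Dot product of weights with recent outcomes (+1 taken, -1 not taken).
    int predict(const std::array<int, HIST>& hist) const {
        int y = w[0];
        for (int i = 0; i < HIST; ++i) y += w[i + 1] * hist[i];
        return y;  // y >= 0 means "predict taken"
    }
    // Nudge weights when wrong or not yet confident (theta from the paper).
    void train(const std::array<int, HIST>& hist, bool taken, int y) {
        const int theta = (int)(1.93 * HIST + 14);  // empirical threshold
        int t = taken ? 1 : -1;
        if ((y >= 0) != taken || std::abs(y) <= theta) {
            w[0] += t;
            for (int i = 0; i < HIST; ++i) w[i + 1] += t * hist[i];
        }
    }
};

int main() {
    Perceptron p;
    std::array<int, HIST> hist{};
    hist.fill(-1);
    int correct = 0, total = 0;
    // A branch taken every 3rd iteration: learnable purely from history.
    for (int i = 0; i < 10000; ++i) {
        bool taken = (i % 3 == 0);
        int y = p.predict(hist);
        correct += ((y >= 0) == taken);
        ++total;
        p.train(hist, taken, y);
        for (int j = HIST - 1; j > 0; --j) hist[j] = hist[j - 1];
        hist[0] = taken ? 1 : -1;  // newest outcome goes to slot 0
    }
    std::printf("accuracy: %.1f%%\n", 100.0 * correct / total);
}
```

On the tooling question: I don't know of anything that reports the predictor's internal table sizes, but on Linux, perf stat -e branches,branch-misses ./your_app will at least show you the misprediction rate, which is enough to compare workloads.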
Applications like Photoshop or Lightroom tend to be used as the main focused task, and performance is measured in response time: you expect a filter to be applied in real time as you are working on the image. Chrome or your developer suite may be idling in the background, but not hogging resources. If that type of workload is the only thing you are doing at any one time, the 7700K is probably the better choice. Even Premiere, with a decent GPU installed and running as a single focused application, is likely to feel snappier.
Where the 7700K falls down and Ryzen benefits is if you are editing in Photoshop and rendering a 4K video in the background while watching a video in the corner of the screen at the same time. You can still do that on a 7700K, but Photoshop becomes laggy and filters take much longer to run as you run out of computing resources, while Ryzen, which may be slightly slower in a single-focus environment, just keeps powering on because it has double the threads. Broadwell-E is in a similar position to Ryzen on multitasking capability, but the price is double that of the AMD product.
The quad core vs. 8 core choice is analogous to the difference between a sports car and an SUV. Both are motor vehicles that can transport you from A to B; the sports car gets there a bit quicker, while the SUV arrives slightly behind but carrying a ton of cargo that you can't fit in the sports car. The tuning of the SUV hasn't been finalized yet, and they are still trying to tune it up so it can carry 1.25 tons of cargo.
Neither approach is wrong; they serve different needs.
Some of the furor in the tech press has come from presenting gaming benchmarks based on the assumption that they are both just CPUs. Every test presented has been one-dimensional. How many benchmarks have you seen that compare gaming performance while a 4K video render runs in the background, for example? Tests like that would help illustrate something like the car analogy above in Ryzen/Intel performance-comparison terms. Unfortunately, no one went down that path of investigation. Instead they jumped to wrong or only partially correct conclusions and stopped looking at anything else.
As a developer, if you have to run resource-intensive compiles and they stop you from doing other things while they are running, Ryzen is the cost-effective platform you should be looking at. If money is no object and you want a completely mature platform, Intel's X99 should also be considered.
Right now, I have it running in about 4 s on i7-4770Ts, but I'd like to better understand its system bottlenecks in order to decide whether it is best to overlap executions or serialize them, as well as the perf/$ on the new architecture.
What I mean is: suppose AMD releases, say, an X470 chipset that uses PCIe gen 4 but reverts back to PCIe gen 3 when a Zen 1 chip is used, and Zen 2 chips support PCIe gen 4 but revert to gen 3 when put into, say, an X370 chipset. This would allow faster NVMe drives and PCIe gen 3 off the chipset.
Yes. My point was mostly that running the exact same workload multiple times in a row is the ideal case for a branch predictor with memory, because it will be able to make a perfectly accurate "prediction" given enough history.
I've also come to the conclusion that the "neural net" is simply a way to learn. Perhaps I was trying to be a bit too obtuse with my previous comment, but the idea is that a neural net learns to predict the branches from past observations. So while it's not actually storing a bunch of data, I thought it was "cute" to think of it as a lossy compression algorithm. Perhaps that mostly made my idea more difficult to get across, though. :-)
In my experience, doing 4K encoding in the background while editing images in Photoshop is not a realistic use case. And if a 7700K is decoding a video, it's doing it with hardware acceleration, so that's no biggie. (Personally, when I do long edits I do watch YouTube or Netflix at the same time, but I run that on a laptop on the side.)
I have to say I'm happy that I found those benchmarks by Puget Systems, because I feel a lot of people in the tech community are a bit disingenuous about how useful more cores actually are. Basically anything made by Adobe is garbage beyond 4c/4t. And I've been running a 6c/12t Intel for several years now and have found the performance quite lackluster. (But again, part of the problem is that Adobe doesn't really seem interested in optimizing their stuff.)
The issue I see is that for many workstation use cases a 7700K may actually offer the best performance at the lowest price.
And as someone who has been developing professionally for over 10 years, I'd say that if a compile takes more than 30 seconds, your build environment is broken. Sure, if I need to rebuild a big project like the Linux kernel, it will take longer on a 7700K. But that's not really anything I do often. And even when I did develop on the core of Android (which required building the entire kernel), you only do a full rebuild once a day at most. And then you get coffee.
Again, I'm not trying to downplay the importance or performance of Ryzen. I just found that the more I look into it, the more it seems like there is a "cult of cores" with limited applicability in the real world. If you need cores, then get them! But it could pay off to first make sure that you actually do something that can put them to use. Otherwise, put the money towards something else, like a faster SSD, more memory, or a better graphics card.
If you are running Chrome at defaults and watching YouTube, the video decoding is being done on the GPU and doesn't hit the CPU much at all.
How the hell am I supposed to know exactly how you use your PC? The 4K encode scenario was simply to illustrate my point, you could be running other VMs or gaming and encoding for live streaming or doing none of the above.
How much money did you have to invest to buy a 6 core PC plus a laptop?
Just because you do not, or choose not to, multitask doesn't mean that everyone else feels the same. From what you are saying, a 7700K is probably better for your use case, if you only look short term. I think every review I have seen has recommended the same thing as well.
What you don't seem to be considering is that 4-core CPUs are getting close to the end of the line for mainstream high-performance CPUs, like dual-core desktop chips did about 6 years ago. Silicon is getting close to its limits around 5 GHz, and the only way to increase compute power, which is what Intel and AMD have to do to stay in business, is either to replace silicon with something more efficient that clocks faster, or to increase core count. Right now there are things in lab development, but nothing about to hit the market that can take the place of silicon, so that only leaves more cores. The 8-core chips will remain viable for a longer period than a 7700K will as 6- and 8-core chips become the norm.
Is there any chance to see an ASUS Crosshair 6 Hero review?
Regarding usage scenarios, are Windows workstation users stuck in the DOS era? Even if the programs you use don't scale across all cores, wouldn't it be awesome to, for example, render something in SolidWorks while strolling happily through the Northern Kingdoms in The Witcher 3?
The Amiga introduced multitasking at a "normal" price in 1985; I can't understand why, in 2017, I should still evaluate my PC on running only one major task at a time.