AMD vs Intel, to the core!

Now that I have your attention ;-)

It's generally known that Intel's i7s blow anything from AMD clear out of the water in benchmarks, both gaming and productivity. (A big generalisation; exceptions aside.)

As I understand it, most "productivity" programs (I myself use Lightroom and CS a lot) generally do not use* (*fully, optimally, at full potential, or at all) more than 4 cores. Since Intel has the better cores, AMD falls behind in benchmarks.

But what if, in real-life productivity, you wanted to run more than one of those programs at the same time?
Does that work on AMD like this: Lightroom uses 4 out of 8 cores, and video rendering uses the 4 remaining cores?
What happens with Intel's 6-core chips?
What would productivity benchmarks look like if you ran two of them simultaneously? Would the scales tip more towards CPUs with more cores, or not?

Your thoughts please!

Thank you so much!

When running multiple programs, Intel still has the upper hand because of its better memory controller, Hyper-Threading, and generally more advanced cores.

It would still tip towards the Intel architecture. AMD isn't falling behind because programs don't fully utilise the cores; it is entirely down to their architecture.

As you said yourself, "Intel has the better cores", and this is due to their higher IPC (instructions per clock). Since AMD has not been able to increase their IPC as effectively as Intel, they decided instead to raise the number of clock cycles per second (i.e. the core clock), but to little avail.
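The IPC-versus-clock trade-off boils down to simple arithmetic: single-thread throughput scales with IPC times clock speed. A toy sketch (the IPC and clock figures below are invented for illustration and are not real chip specs):

```python
# Rough single-thread throughput model: performance scales with IPC * clock.
# The IPC and clock figures below are made up for illustration only.

def relative_perf(ipc: float, clock_ghz: float) -> float:
    """Billions of instructions retired per second, in this toy model."""
    return ipc * clock_ghz

high_ipc_chip = relative_perf(ipc=2.0, clock_ghz=3.5)    # strong cores
high_clock_chip = relative_perf(ipc=1.4, clock_ghz=4.2)  # faster clock, weaker cores

print(high_ipc_chip, high_clock_chip)  # 7.0 vs ~5.88
```

Even with a 20% higher clock, the lower-IPC chip finishes behind, which is why raising the core clock alone was "to little avail".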

The situation gets more complex when you involve Intel's Hyper-Threading technology (on the i3s and i7s), which shines when the operations being executed do not need to be returned in a serial manner. (Basically, just think of it as a way of shoving more work through one processor.) Learn more here. (Bear in mind that every core of a hyper-threaded CPU appears to the operating system as two logical CPUs, which can be addressed independently, much like the AMD "8 core".)

Not totally accurate. Hyperthreading means 2 threads per core. Operations not being done in the strict order in which they were written is called out-of-order execution, and it is done by all modern processors. Being able to push more than one instruction through a core at once is what it means to be a superscalar architecture. Both of these concepts have been the norm in CPUs for a decade or so; even ARM CPUs have them.

Hyperthreading simply allows an Intel core to feed these resources from two separate threads, it does not increase the computing power of the core. This works because it is nearly impossible to keep a CPU busy all the time. A single thread inevitably has to wait for data from memory, or for a timer or a longer instruction (like division) to finish. This wastes the CPU's time. By allowing instructions to come from more than one thread, usage of existing core resources improves, sometimes dramatically and sometimes not so much.
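That latency-hiding effect can be sketched with a toy scheduler simulation. This is a deliberate oversimplification of a real core, and the 1-cycle-work / 3-cycle-stall pattern is an invented illustration, but it shows why a second hardware thread raises utilization without adding execution resources:

```python
# Toy core simulator showing latency hiding (not a real CPU model).
# Each thread alternates: `work` cycles of execution, then `stall` cycles
# waiting on memory. The core runs at most one thread per cycle; stalled
# threads wait regardless. With one thread the core idles during stalls;
# with two (as in SMT/Hyper-Threading) the other thread fills the gaps.

def utilization(n_threads: int, work: int = 1, stall: int = 3,
                total_cycles: int = 1000) -> float:
    """Fraction of cycles in which the core executes useful work."""
    phase = ["work"] * n_threads   # current phase of each thread
    left = [work] * n_threads      # cycles remaining in that phase
    busy = 0
    for _ in range(total_cycles):
        ran = False
        for i in range(n_threads):
            if phase[i] == "work" and not ran:
                ran = True                      # core executes this thread
                left[i] -= 1
                if left[i] == 0:
                    phase[i], left[i] = "stall", stall
            elif phase[i] == "stall":
                left[i] -= 1                    # memory request progresses
                if left[i] == 0:
                    phase[i], left[i] = "work", work
        busy += ran
    return busy / total_cycles

print(utilization(1), utilization(2))  # 0.25 vs 0.5: same core, double the use
```

With these made-up numbers a single thread keeps the core busy only 25% of the time; a second thread doubles that. In real workloads the gain is usually far smaller, which matches the "sometimes dramatically and sometimes not so much" above.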

It depends on the program. If your program is using lots of memory bandwidth, then running a second one will not speed up your work. The second program can thrash the cache, and both programs may take longer to complete than running them in sequence.
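The bandwidth argument can be put into numbers with a toy model (the data sizes and bus bandwidth below are arbitrary assumptions): when both jobs are limited by the same memory bus, running them together cannot finish sooner than running them back to back, and cache thrashing usually makes it worse.

```python
# Toy model of two memory-bandwidth-bound jobs sharing one memory bus.
# gb = data each job must move; bw = total bus bandwidth in GB/s.
# The numbers are arbitrary; the point is the comparison, not the values.

def time_sequential(gb: float, bw: float) -> float:
    """Run the jobs one after the other, each at full bandwidth."""
    return gb / bw + gb / bw

def time_concurrent(gb: float, bw: float) -> float:
    """Run both at once; each gets half the bus (best case, no thrashing)."""
    return gb / (bw / 2)

seq = time_sequential(gb=64.0, bw=32.0)
conc = time_concurrent(gb=64.0, bw=32.0)
print(seq, conc)  # 4.0 and 4.0: running them together buys nothing
```

Even in this best case the times are identical; any extra cache misses from the two jobs evicting each other's data push the concurrent time above the sequential one.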

Thanks for your answers, guys. I have absolutely no idea how a processor actually works; that's why I asked.

Welkam, running both programs in sequence will most likely be the fastest way to get them done. But I do mean real-life situations.
Personally, I don't start a batch operation and sit back waiting for it to be done before starting on the next.
For example: while a set of files is being converted, I'd like to compress the previous set of images (both actions require no extra input/time), and while those are running, do some touch-ups in Photoshop, Lightroom, Silverfast, ... The PC will be done with those processes faster than the work I am doing either way, but how will they affect each other, and how workable would it be? And will more cores help it run more smoothly, or will the difference be smaller?

This was a highly dumbed-down response.

I can't find benchmarks for the workloads you mentioned on short notice, so I will give you my best guess. More cores will help, but what will help more is more RAM, and faster than 1600 MHz. If you have more RAM you can create a RAM disk and load the current files onto it; that way you don't need to access the slow hard drive, which improves performance. All the workloads you mentioned need memory bandwidth, and I suspect there will be some hiccups when you do touch-ups while other stuff is running in the background, whether you use an FX 8-core or an i7. If you want to know how it might feel, grab a laptop with an i3 and a single RAM stick and do your touch-ups on it. That will give you the most accurate feel for how it will be on a higher-end machine with tasks running in the background.

At this moment AMD is reducing the price of the FX-8350, and you can't beat its price-to-performance ratio. You are better off spending the extra money on RAM or an SSD; that will give you a snappier computer than spending more on the CPU. The downside is that it has no iGPU, so you have to put a GPU in the system.

http://youtu.be/iCrOAng0kdQ

That said, the i7 is faster. But if you go with the i7-4790, it's almost twice as expensive as the FX-8350, and it won't be 2x faster. If you are planning on buying a whole new PC with lots of RAM, an SSD, and a GPU like an R9 290 or GTX 780 or better, then paying $150 more won't be that much, and you get a faster CPU and a slightly lower power bill (look at the video below). If you are not planning on gaming on the same system and don't have a spare GPU, then the i7 still might be the better option because it has an iGPU.

http://youtu.be/fBeeGHozSY0

X99: if you have money to spend, go with it. An i7-5820K with quad-channel DDR4 will convert pictures faster and won't interfere as much with your touch-ups. The only con is the price; you will feel like someone robbed your bank account if you go with this option.

You work a lot with pictures, so keep an eye on APUs. The total processing power of the A10-7850K is 856 GFLOPS (the i7-4770 has half of that). There is almost no software support yet, so you can't use that power for now, but AMD and Adobe are working together, and in the near future this option might be better. Look at this video for a sneak peek.

http://youtu.be/rWXhD27NcuU 

The greatest downfall of AMD's chips is really the key to why they are so much cheaper: less die space is dedicated to L1 cache (L1 cache is expensive in terms of money and die space), and what L1 there is suffers from cache contention because it's shared. To illustrate, say a CPU requests data from the L1 100 times. With a latency of 1 ns and a 100% hit rate, it performs the operations in 100 ns. But if the L1 has a 99% hit rate, and the data the CPU needs for its 100th access is sitting in L2 with a 10-cycle (10 ns) access latency, it takes the CPU 99 ns to perform the first 99 reads and 10 ns to perform the 100th; a 1% drop in hit rate slows the CPU down by roughly 10% (109 ns vs 100 ns). Most L1 caches actually have a 95-97% hit rate, making the delay even greater, and that's if the data is even in the L2. If the data is in main memory, it could take upwards of 100 ns to retrieve, effectively halving your performance.
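The arithmetic in that example is the standard average-memory-access-time calculation; here is a quick sketch using the post's numbers (1 ns for an L1 hit, 10 ns total for an access that falls through to L2):

```python
# Average cost per L1 reference, using the latencies from the post:
# an L1 hit costs 1 ns; a miss that is served by L2 costs 10 ns in total.

def avg_access_ns(l1_hit_rate: float, l1_ns: float = 1.0,
                  l2_ns: float = 10.0) -> float:
    return l1_hit_rate * l1_ns + (1.0 - l1_hit_rate) * l2_ns

perfect = 100 * avg_access_ns(1.00)    # 100 accesses, 100% hit rate -> 100 ns
realistic = 100 * avg_access_ns(0.99)  # 99 hits + 1 L2 access -> 109 ns
typical = 100 * avg_access_ns(0.95)    # 95% hit rate -> 145 ns

slowdown = realistic / perfect - 1.0   # ~9%, the "roughly 10%" quoted
print(perfect, realistic, typical, slowdown)
```

At the more realistic 95% hit rate the same 100 accesses take 145 ns, a 45% slowdown, which is why the post calls the delay "even greater".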

I'm not an Intel fanboy; I have a 9590 in my machine and use AMD APUs for all my HTPC builds. But AMD needs to change this. If the AMD FX series had the cache setup of the Intel i7s, they would be insane... but they don't... so they're not... but they are good, and they are at the right price.

AMD fan boy here.

Intel is faster, more enthusiast-oriented, and more professional.

I'm doing stuff in 3ds Max, and the FX would be much slower for the task... bleh.

It's simply how they are made... They are still good for many things, but Intel has the upper (more expensive) hand.

L1 is not shared; L2 is shared.

Yes, Intel CPUs have higher IPC, but not by as much as y'all think.

The problem is that Windows handicaps AMD CPUs and GPUs, and not only Windows, but also other closed-source, archaic software like Adobe Lightroom and Photoshop, and even Flash Player.

In Linux, these problems are not present. On the contrary, the full feature set of AMD chipsets, and the tight mandatory feature set of AMD chipset motherboards, provide a flawless and full-featured experience in Linux, which uses far more modern features than software consoles and the entertainment software running on them.

Serious raytracing applications in Linux, for instance, use OpenCL acceleration on AMD GPGPUs and equal load balancing over all cores (which can't be done when counting in "threads").

The fact that the AMD CPUs are a bit older makes them particularly tried and tested in terms of power management and control.

In Linux, encryption applications like LUKS (most experienced Linux users encrypt their partitions as a matter of course) do not use the RNGs or encryption microcode of Intel CPUs, because these have been shown to be untrustworthy. This is not the case for AMD CPUs, which are very efficient at decryption and also, for instance, at transcoding, two things that are very much needed. The transcoding side of Intel is a joke: it hardly works, and when it does, it seriously suffers. Intel's QuickSync video acceleration has the worst quality deterioration and the worst performance, IF it works, which most of the time is not the case. When using encryption, Intel systems, which have to encrypt and decrypt everything in application software because the hardware can't be trusted, really bottleneck transfers to and from storage. AMD, on the other hand, flies in real-time en-/decryption, because it is accelerated by the CPU and, insofar as an OpenCL-compatible GPGPU is used in Linux, also by the GPU, whereby the GPU can directly access system memory and vice versa: a trick that is simply not stable on Intel machines, most of which run on motherboards that block that feature entirely because of "gaming enhancements" that are de facto performance handicaps sold at a premium.

Intel CPUs are faster in single-threaded performance, no doubt about that, and they are built on a smaller lithography, so they are more power-efficient. But they also make customers pay extra to do their beta testing, and they've brought out some serious crap recently, like the Haswell V1 series, with the faulty TIM under the heat spreader and the badly placed power-regulation SMD components on the flip side of the die. These are the kind of obvious design faults that a customer who pays extra for the name "Intel" should be able to expect not to be present. If one pays a premium for advertised extra quality, one should not have to settle for products with obvious design faults, in my opinion. Mind you, AMD also makes similar mistakes, like with the Bulldozer FX chips, which were not properly designed and didn't perform as expected; but at least they were cheap, they worked more than decently for the price, and they can still provide a very good experience for years to come. That isn't the case with Haswell, just as it wasn't with the Pentium D, because the hardware suffers far too much from its design flaws to give the parts the long lifespan that the premium price implies.

Open-source programs like digiKam or darktable render previews and slideshows much faster than Lightroom, for instance, because Lightroom uses no acceleration technology. The same goes for GIMP versus Photoshop: Photoshop uses no acceleration technology either. If "productivity" is defined by the use of archaic closed-source entertainment plug-ins for a software console (as is the case with Adobe CS software on MS Windows), then whoever defines the term that way has obviously never tasted the huge modern performance and reliability of professional open-source applications on a bleeding-edge Linux install. Once you install Linux on a modern system and give it the same chance you give your software console, there is no going back, especially on AMD machines, which perform much closer to (and often better than) Intel machines in Linux, particularly in applications where Linux can flex its load-balancing muscles. If you've experienced both AMD and Intel on Linux with open-source software, you know that Anandtech is an ad shill with Intel and nVidia at the top of its little black book, and that the "superiority" of Intel CPUs in terms of performance, insofar as they are used for real work and not just for idling, is wildly exaggerated.

Intel

i5s do as well, not just the i7s.

I keep asking myself: "Where would AMD's high-end CPUs be right now if they stopped binning/re-branding the 8320 (marketing/production) and just put all that money and manpower into a newer platform?"

A little off-topic, but I am just wondering... Intel won't see consumer-side DDR4 on the normal platforms until Skylake, which won't come out until 2016 (I assume). So I wonder if AMD has a secret platform they will release next year that uses DDR4, beating Intel to the mainstream market.

Zoltan, that is pretty interesting stuff.

My current machine runs a dual boot, Windows/Linux: Windows for photo-editing software (Lightroom, CS, Silverfast) and Linux for everyday tasks and occasional GIMP'ing.

Would there be any benefit, for the AMD CPU, in running a virtual Windows machine inside Linux?

Are there even any benchmarks out there run on Linux?
As stated before, I haven't got the first clue how a CPU exactly works or why X is faster/better than Y, but I'm learning...


On Piledriver, Bulldozer, and Steamroller, the L1 instruction cache is shared between the two cores of a module (each core still has its own L1 data cache).