Today the Fable Legends benchmarks went up on various sites, and they show a completely different picture than the Ashes tests did.
The AotS tests kicked off the great Async Compute shitstorm, which still hasn't settled.
As I was reading the Fable benchmark reviews, I couldn't help but notice that the Fury X and 980 Ti are all over them, with the 390X even matching the non-X Fury (!) and the 980 Ti beating the hell out of the Fury X. That's surprising, because Fable is said to use DX12 to the fullest, including lots of Async Compute. So where's the catch, I wondered.
Remember how, when the AotS scandal happened, nVidia said they are forced to emulate some of the DX12 features they don't have in hardware?
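For context (a minimal sketch, not taken from Fable or AotS code): "Async Compute" at the DX12 API level just means the game creates a separate compute command queue next to the normal graphics queue and feeds both at once. Whether the two queues actually run in parallel on the GPU, or the driver has to interleave them (and maybe burn CPU time doing the scheduling), is entirely up to the hardware and driver. All names below are just illustration.

```cpp
// Minimal D3D12 sketch: one graphics (direct) queue plus one compute queue.
// Only illustrates what "Async Compute" means at the API level.
#include <d3d12.h>
#include <wrl/client.h>
#pragma comment(lib, "d3d12.lib")

using Microsoft::WRL::ComPtr;

int main()
{
    ComPtr<ID3D12Device> device;
    // Default adapter; feature level 11_0 is the D3D12 minimum.
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0,
                                 IID_PPV_ARGS(&device))))
        return 1;

    // The usual graphics queue: takes draw + compute + copy work.
    D3D12_COMMAND_QUEUE_DESC gfxDesc = {};
    gfxDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
    ComPtr<ID3D12CommandQueue> gfxQueue;
    device->CreateCommandQueue(&gfxDesc, IID_PPV_ARGS(&gfxQueue));

    // A second, compute-only queue. The engine records compute work
    // (lighting, post-processing, etc.) and submits it here while the
    // graphics queue is busy rendering.
    D3D12_COMMAND_QUEUE_DESC computeDesc = {};
    computeDesc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;
    ComPtr<ID3D12CommandQueue> computeQueue;
    device->CreateCommandQueue(&computeDesc, IID_PPV_ARGS(&computeQueue));

    // The API only expresses the parallelism; whether the GPU overlaps the
    // two queues or the driver serializes them is a hardware/driver detail.
    return 0;
}
```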
Here are two different pics from different sites:
Notice how the Fury X results are the same, but the 980 Ti results are not? Guess what - the first one was run on a 5960X, the second one on a regular i7 (presumably a 6700K).
What this tells me is that nVidia's driver is offloading work to the CPU and gaining FPS when there's a beefier CPU to lean on, while AMD is doing everything on the GPU hardware and leaving the CPU out of it entirely.
The bad part is that the average consumer just looks at who's on top, not how it got there, and I think more effort should be put into investigating this properly.
Discuss.

