OK, first off: I haven't read all the posts.
I'll explain why AMD GPUs, most of the time, don't perform on par with their hardware capabilities.
Games are built around the NVIDIA GPU model. What does that mean?
Engines are written to lean on DX/OpenGL vendor extensions as shortcuts to accomplish certain effects (lighting, for example). Basically this lets NVIDIA render some things cheaper and faster. AMD doesn't invest in those extensions anymore (they used to play that game with NVIDIA long ago); since GCN they have instead opened their architecture so programmers can create those paths themselves, and pushed for a global standard of open APIs.
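To make the "shortcut extensions" point concrete, here is a minimal sketch of how an engine might branch on a vendor extension at startup. The renderer and path names are hypothetical; `GL_NV_shader_buffer_load` is a real NVIDIA extension name used purely as an example of the kind of string an engine checks for.

```python
# Hypothetical sketch: an engine picking a lighting code path based on
# which GL extensions the driver advertises. Only the NVIDIA-tuned path
# gets the cheap shortcut; everyone else falls back to the generic one.

def pick_lighting_path(gl_extensions: str) -> str:
    """Choose a lighting code path from a space-separated extension string."""
    exts = set(gl_extensions.split())
    if "GL_NV_shader_buffer_load" in exts:
        # Vendor-specific fast path the engine was hand-tuned against.
        return "nv_fast_path"
    # Generic path every other vendor (AMD, Intel) ends up on.
    return "generic_path"

print(pick_lighting_path("GL_ARB_vertex_array_object GL_NV_shader_buffer_load"))
print(pick_lighting_path("GL_ARB_vertex_array_object GL_AMD_performance_monitor"))
```

The asymmetry in the post is exactly this: the fast branch only ever fires on one vendor's hardware.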
Hardware design of engines:
Engines designed around NVIDIA hardware assume the GPU doesn't have much memory and bandwidth, so you use as few draw commands as possible to render a scene.
On the other hand, it's possible to optimize games for high memory bandwidth, i.e. raw performance.
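A rough illustration of the "as few commands as possible" idea: instead of issuing one draw call per object (low driver overhead matters), identical meshes get grouped into a single instanced draw. The scene data and function below are a made-up sketch, not any engine's actual API.

```python
# Hypothetical sketch: collapsing many per-object draw calls into one
# instanced draw per mesh, trading draw-call count for a bigger upload
# of per-instance data (transforms), i.e. more bandwidth use.

from collections import defaultdict

def batch_draws(objects):
    """Group objects by mesh so each distinct mesh costs one instanced draw."""
    batches = defaultdict(list)
    for obj in objects:
        batches[obj["mesh"]].append(obj["transform"])
    # Each (mesh, transform list) pair corresponds to one instanced draw call.
    return [(mesh, transforms) for mesh, transforms in batches.items()]

scene = [
    {"mesh": "rock", "transform": (0, 0)},
    {"mesh": "tree", "transform": (1, 0)},
    {"mesh": "rock", "transform": (2, 0)},
]
print(len(scene), "naive draw calls ->", len(batch_draws(scene)), "batched")
```

A bandwidth-rich card is happier pushing one big instanced buffer; a draw-call-optimized engine is shaped around the other trade-off.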
AMD would need awesome hackers to hack this shit out of the APIs to play on equal terms with NVIDIA.
Though that isn't really needed, since DX12 and Vulkan are the ones that will fix the issue,
in the hope that engine developers will write their own extensions instead of using pre-written hacks for NVIDIA cards.
It's nice to use those existing hacks, but it only benefits NVIDIA; writing your own hacks for Vulkan and DX12 on an open architecture benefits open platforms independently. Going back to NVIDIA: they will then need to play around with how their GPU interprets the data it gets.
That's why NVIDIA pushes DX12.1: 12.1 is essentially a pack of their extensions for low-level access, which they've been utilizing for quite a while now within different APIs, whereas AMD hasn't...
We will most likely see some improvement in AMD performance as games finally get optimized for its raw performance rather than hacked performance. (Don't get me wrong, hacks are not bad, they are good. It's not about lying or anything; it's about how the GPU interprets the data: is there a better, more efficient way to execute the code? That's why NVIDIA wins most benchmarks, a more efficient interpretation of the code.)
Lastly, the first demo with DX12 shows a 290X (with a large OC) beating the 980 Ti.
(This may only be temporary due to driver problems,) but as it stands the 290X owns the 980 Ti in it, or at least my 290X beats it.
http://hexus.net/tech/news/graphics/85385-dx12-unreal-engine-4-elemental-demo-download-available/
// There's a big misunderstanding when you hear developers talk about AMD drivers being bad: they don't mean your graphics system driver. They mean the driver that drives the interpretation of code and the GCN engine. That's the one in bad shape; its extensions and tips are ancient, not updated since WW2.
That's the one AMD should dust off and improve. (It's being taken care of.)
:)
What else?
Look at Intel: they are making major steps in GPUs. They are gaining maturity, and it looks like they might enter the proper GPU market soon. Intel's GPU drivers are interesting because they are completely open, so many developers from Linux etc. will work on them; and Intel can simply take that work and further improve their architecture.