R9 Fury X - Anyone think shenanigans are going on against AMD?

I still can't get over the fact that the R9 Fury X can produce nearly 10 teraflops (8.6 TFLOPS) and still under-performs compared to the entire nVidia high-end range, which isn't even close on most of the specs ...

... when nVidia are using nearly obsolete technology (GDDR5 - HBM is 9 times faster).

There is an obvious bottleneck when you look at the effective memory clock speed.

But other than that, I can't really comprehend what is holding the card back.

The drivers?
Intel vs AMD?
Some sort of bug in the BIOS?
The operating system (I assume someone has thrown together a custom open source driver and used it in Linux, etc.)?
Some cluster of restrictive transistors that become active when certain CPUs or signals are detected???

Clearly the true performance of the card only shows on paper (much like AMD's CPUs and GPUs). Ever since AMD purchased ATI, I have started to think that companies have been doing back-room deals with Intel and nVidia over the last 10 years, because this is too obvious... come on, nearly 10 teraflops!?!

What someone needs to do is throw this GPU on an ARM setup and test it with Windows 10 against these other nVidia GPUs so we can rule out whether the graphics card is the issue or the OS. Then throw it on a Linux box and test the difference between driver and OS performance. It sounds extreme, but answers are needed. Either AMD has been lying to us or something is up.

2 Likes

Things aren't always as good in real life as they are on paper. Also, AMD over-hyped and delayed the Fury X, creating the inevitable disappointment.

Also, compute performance != real-life gaming performance.

Also, while HBM is faster than GDDR5, GDDR5 is fast enough for modern cards. It's not like it was holding them back.

You cannot compare flop to flop.

It's like comparing horsepower between cars: pointless, because there are way too many other variables.

Haha! HBM is in such short supply that it hardly makes GDDR5 obsolete. Yes, it's an objectively better technology, but until production ramps up to take over the market, GDDR5 will NOT be obsolete.

Also, it's not "9 times faster." Sure, it's more than an order of magnitude wider, but it runs slower. Effectively, HBM gen1 is 512 GB/s and the fastest GDDR5 memory spec is 384 GB/s. That's not even twice as fast.
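If you want a rough sanity check on those figures, the peak-bandwidth arithmetic looks like this (a minimal sketch in Python; the bus widths and per-pin data rates are my assumed figures for Fury X-class HBM1 and a hypothetical top-end 384-bit GDDR5 card, not numbers taken from either vendor's spec sheet):

```python
# Peak memory bandwidth = bus width (bits) * per-pin data rate (Gbit/s) / 8.
# Assumed figures: 4096-bit HBM1 at 1 Gbps effective vs. 384-bit GDDR5 at 8 Gbps.

def peak_bandwidth_gbs(bus_width_bits, data_rate_gbps):
    """Peak memory bandwidth in GB/s."""
    return bus_width_bits * data_rate_gbps / 8

hbm1 = peak_bandwidth_gbs(4096, 1.0)   # 512 GB/s
gddr5 = peak_bandwidth_gbs(384, 8.0)   # 384 GB/s

print(f"HBM1:  {hbm1:.0f} GB/s")
print(f"GDDR5: {gddr5:.0f} GB/s")
print(f"Ratio: {hbm1 / gddr5:.2f}x")   # ~1.33x, nowhere near 9x
```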

What it is, is a new technology that's just been brought to market. It's currently significantly faster than what we have now, but it has a long way to go.

I'm not tearing down AMD here, just breaking down your post. I own an all AMD system if you want to look at my specs.

1 Like

Compute performance (TFLOPS and GFLOPS) is directly related to gaming performance.

Other than that, you really need to explain what you mean to me in detail.

Everyone has compared 'flop to flop' over the years, though. It's nearly directly related to FPS when discussing GPUs.

Well, it is 9 times faster. HBM is in the 1.0 phase and the drivers are new. Also, I think you may have your bits and bytes mixed up. The image I provided above represents bytes, to my knowledge, and I am pretty sure you mean bits in your statement.

Kinda like MTF charts, you can't compare between manufacturers. Even then, things change over the years.

If FLOPS and FPS were directly related over the years, why do these cards perform so similarly? The GTX 680 (~3 TFLOPS) and the 7970/280X (~4 TFLOPS).

You'd think there'd be a 33% difference in gaming performance, yes? Well, that's not how lithography works. nVidia and AMD chips are designed starkly differently. Ever notice how nVidia seems to edge out in gaming benchmarks but AMD edges out in compute benchmarks? GPUs are enormously complex, and FLOPS are not an appropriate way to gauge gaming performance, because each architecture does things differently: some tasks it computes more efficiently, others more slowly.
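To put rough numbers on that, theoretical FP32 throughput is usually quoted as shader count x clock x 2 (one fused multiply-add per shader per cycle). A quick sketch, using commonly cited reference-card specs as my assumed inputs:

```python
# Theoretical FP32 throughput: shaders * clock (GHz) * 2 ops per FMA, in TFLOPS.
# The shader counts and clocks below are commonly cited reference specs (assumed, rounded).

def tflops(shaders, clock_ghz):
    return shaders * clock_ghz * 2 / 1000

cards = {
    "GTX 680":     tflops(1536, 1.006),  # ~3.1 TFLOPS
    "HD 7970 GHz": tflops(2048, 1.050),  # ~4.3 TFLOPS
    "GTX 980 Ti":  tflops(2816, 1.000),  # ~5.6 TFLOPS
    "R9 Fury X":   tflops(4096, 1.050),  # ~8.6 TFLOPS
}

for name, value in cards.items():
    print(f"{name:12s} {value:.1f} TFLOPS")

# The ratios here don't line up with actual game benchmarks,
# which is the point: FLOPS alone don't predict FPS.
```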

I'll fix my quote, but my statement stands: it's nowhere close to obsolescence yet.

No? I was reading off of the last image you linked.

That's what I am referring to. It's an indicator of raw performance. I assume you didn't read my original post entirely. The end of the post concludes that the card should be tested thoroughly to exclude any variables that have stayed the same across other tests over the years.

-Graphics involves a lot of floating point math and everything is generally a 3D vector (x, y, z) - all floats or doubles. The vectors get manipulated to draw whatever you see onscreen, so you're talking millions of vectors being added/multiplied per fraction of a second. Theoretically, the more FLOPS a GPU is capable of, the faster it can perform those operations and therefore it can render more frames quicker-
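As a toy illustration of the per-vertex float math that quote is describing (a minimal sketch with NumPy on the CPU, which is of course not how a GPU actually schedules the work):

```python
import numpy as np

# One million 3D vertices transformed by a single 3x3 rotation matrix.
# Each vertex costs roughly 9 multiplies + 6 adds = 15 floating-point ops,
# so a million vertices per frame at 60 fps is already ~0.9 GFLOPS of work.
rng = np.random.default_rng(0)
vertices = rng.random((1_000_000, 3), dtype=np.float32)

angle = np.radians(30.0)
rotation_z = np.array([
    [np.cos(angle), -np.sin(angle), 0.0],
    [np.sin(angle),  np.cos(angle), 0.0],
    [0.0,            0.0,           1.0],
], dtype=np.float32)

transformed = vertices @ rotation_z.T  # millions of multiply-adds, the bread and butter of a GPU

flops_per_frame = vertices.shape[0] * 15
print(f"~{flops_per_frame * 60 / 1e9:.2f} GFLOPS for this one transform at 60 fps")
```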

It's nearly there, and that's a matter of opinion of course. I pulled this article for you to have a look at. I had it bookmarked ages ago. It's dated 2014. Not sure if they decided to continue with this transition or not.

I don't understand why you are being snarky in your remarks. You clearly speak like you know the data about this card, then talk as if you don't, as if my image was the problem when the data seems pretty clear. ??

Remember, a couple of years ago nVidia and AMD cards were close in gaming performance, but AMD cards were the cards of choice for crypto miners. Hashing rates weren't even close between nVidia and AMD.

It's like comparing a truck and a car. The car isn't better than the truck because it's faster and more fuel efficient; you can't haul cargo or go off-road in a car.

1 Like

I am just saying there needs to be another, more thorough and up-to-date test of these cards, regardless of lithography and past results. That is what I had written in my original post. Windows 10 works on ARM, right? People have seen different results over the years when they combined different GPUs with different CPUs. ARM is neutral. The same can be said of an open source driver. We all know open source OSes work better with any GPU. So why not try to rule out an operating system like Windows as a problem for scores, and rule out Windows drivers as well as any biased CPU? It's very dismissive to not want to perform a test like this and to just accept what has always been the case over the years.

1 Like

Well, graphs of average scores from 8 games?
I think those graphs are pretty pointless; we don't know anything about which particular games were tested,
and we also don't know anything about the test system.
Those things still matter.
So the provided graph is a bit useless.

But I have to agree that, according to most benchmarks and tests that I have seen,
the 980 Ti seems to be the better card overall.

2 Likes

Whoa, lots of assumptions were jumped to in the writing of that post. That's all I have to say about this.

This is an important point... you can also look at a truck which has more horsepower in the engine, but can't go nearly as fast as a normal car can. Apples to oranges.

This is my point exactly. All the data we saw back then was on hardware specific to a certain generation of software... regardless, which games and what type of computer isn't really the point. The point is that all these tests need to be redone. Windows 10 takes precedence, along with the ARM option for the OS. There would be quite a few new tests.

Well, they were assumptions, but clearly one should take them with a grain of salt. It's not very often a GPU is released with next-generation hardware, along with a next-generation OS a month or so later that has compatibility with an ARM CPU.

OK, so first I'll state that I haven't read all the posts.

I shall explain why AMD GPUs, most of the time, do not perform on par with their hardware capabilities.

Games are built around the nVidia GPU model. What does that mean?

Engines are written in a way that leans on shortcut DX/OpenGL extensions to accomplish certain effects, like, say, lighting. Basically, it allows NV to render some things cheaper and faster. AMD does not invest in those extensions (they used to play that game with NV a long time ago); since GCN, they have instead tried to let programmers create those extensions themselves by opening up their architecture and pushing for a global, open API standard.

Hardware design's influence on engines:
The NV-oriented hardware design of games, and of some engines, assumes there isn't enough memory and bandwidth. Thus you use as few commands as possible to draw certain things.

While on the other hand, it's possible to optimize games for high memory bandwidth ~ raw performance.
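A loose analogy for that "fewer commands vs. raw bandwidth" point (a sketch, nothing GPU-specific; NumPy stands in for the GPU here): issuing lots of tiny operations is dominated by per-call overhead, while one big batched operation is limited mostly by raw throughput.

```python
import time
import numpy as np

data = np.random.rand(10_000, 1_000)

# Many small "draw calls": per-call overhead adds up.
start = time.perf_counter()
total_many = sum(row.sum() for row in data)   # 10,000 separate calls
t_many = time.perf_counter() - start

# One batched "draw call": mostly limited by raw throughput.
start = time.perf_counter()
total_one = data.sum()
t_one = time.perf_counter() - start

print(f"many small ops: {t_many * 1000:.1f} ms, one batched op: {t_one * 1000:.1f} ms")
assert np.isclose(total_many, total_one)
```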

AMD needs awesome hackers who will hack the shit out of the APIs to play on equal terms with NV.
Though it's not needed, since DX12 and Vulkan are the ones to fix the issue.

The hope is that engine developers will write their own extensions instead of using pre-written hacks for NV cards.

It's nice to use those existing hacks, but it only benefits NV, while writing your own hacks for Vulkan and DX12 on an open architecture will benefit open platforms independently ~ going back, they will need to play around with how their GPU interprets the information it gets.

That's why NV requires DX12.1 ~ 12.1 is a pack of their extensions for reaching low-level access, which they've been utilizing for quite a while now within different APIs, whereas AMD hasn't...

We will most likely see some improvements in AMD performance as games are finally optimized more for its raw performance than for hacked performance. (Don't get me wrong, hacks are not bad, they are good ~ it's not about lying or anything ~ it's about how the GPU will interpret the data: is there a better, more efficient way to execute the code? etc. That's why NV wins most benchmarks ~ it's a more efficient interpretation of the code.)

Lastly, the first demo with DX12 shows that the 290X (with a large OC) beats the 980 Ti.
(This may only be temporary, due to driver problems,) but as it stands the 290X owns the 980 Ti at it, or at least my 290X beats it.

http://hexus.net/tech/news/graphics/85385-dx12-unreal-engine-4-elemental-demo-download-available/

// There is a big misunderstanding when you hear developers speaking about AMD drivers being bad ~ they do not mean your graphics system driver. They mean the driver that drives the interpretation of code and the GCN engine. That's the one that's in bad shape. Their extensions and tips are ancient... not updated since WW2.
That's the one AMD should dust off and improve. (It's being taken care of.)
:)

What else?
Look at Intel, they are making major steps in GPUs. They are gaining maturity, and it looks like they might enter the proper GPU market soon. Intel's GPU drivers are interesting because they are completely open. Thus many developers from Linux etc. will work on them, and Intel can simply take that and further improve their architecture.

3 Likes

Fury X reminds me of the Radeon HD 4870 launch. Everything matches almost to the letter.

Radeon HD 4870

  • First GPU to launch with GDDR5
  • Supported DX10.1 and hardware accelerated tessellation
  • Increased die size to compete, but still undercut nVidia
  • Increased efficiency over previous generation by 40%+
  • Released as a single-die flagship, but performed more solidly in the mid-high range
  • Matured into a killer $/FPS powerhouse with solid drivers and compatibility

Looks all too familiar, so long as you replace DX10.1 with DX12/Vulkan...

1 Like