The VEGA 56 / 64 Cards Thread! General Discussion

Just because you can RUN games on the card doesn't make it a GAMING card. It is not designed for gaming, it is not marketed to gamers, and it is not intended to be used by gamers.

Yeah, Radeon Group's marketing is just bizarre, comparing against the Titan Xp in Maya. :slight_smile:
Since they are including it in their "Pro" section, though, they should be comparing it to Quadros instead; getting a Quadro-class product for only 1K would actually be killer. That is, assuming application-specific optimized drivers.

So what does? RGB LEDs?

This is Vega. In fact this is the big Vega. The one with all the things. The Titan Vega if you will.
If the performance isn't there on this card because of hardware, it won't be any better on any other Vega card either.


Raja has said time and time again that it is not the gaming card. The card has a gaming mode for developers, but in no way is it designed to game. This is a professional card designed for work that is NOT gaming.

The question is whether it has been engineered for gaming. A Xeon Phi supports OpenGL, so would you call that a gaming card?

The drivers aren't meant for gaming, and the air-cooled Pro cards are always a lot slower than the gaming ones.
For example, the FirePro W8100 uses the same GPU as an R9 390, but it is clocked at 810 MHz. The R9 390 is clocked at 1050 MHz (or slightly more depending on the model).
That right there is roughly a 30% clock difference, not to mention the driver differences that widen the gap even more (quick math spelled out below).
So don't pass judgement on Vega in gaming yet just because the workstation-grade card doesn't do all that well in gaming. It isn't meant to. It's meant to do OK at gaming and be great at pro uses.
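To put a number on that, here is the clock math spelled out (clocks as quoted above; just arithmetic, nothing more):

```python
# Clock gap between the FirePro W8100 and the R9 390, as quoted above.
w8100_mhz = 810    # FirePro W8100 core clock
r9_390_mhz = 1050  # R9 390 core clock
gap = (r9_390_mhz - w8100_mhz) / w8100_mhz
print(f"gaming card clocked {gap:.0%} higher")  # -> gaming card clocked 30% higher
```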


The gaming cards will have HBM. Vega in its current form doesn't have any other memory controllers on it.

Also, HBM actually uses less power than GDDR5/X.

They also need the bandwidth. They don't have the compression tech Nvidia has.
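For context, the compression being referred to is delta color compression. Here is a toy version of the principle on a simple 1D run of pixel values; real hardware DCC works on fixed pixel blocks with multiple compression modes, so treat this strictly as an illustration:

```python
# Toy delta encoding: store the first value plus small differences.
# Neighboring pixels are usually similar, so the deltas need fewer
# bits than the raw values, which is what saves bandwidth.
def delta_encode(block):
    base = block[0]
    return base, [b - a for a, b in zip(block, block[1:])]

def delta_decode(base, deltas):
    out = [base]
    for d in deltas:
        out.append(out[-1] + d)
    return out

pixels = [200, 201, 201, 203, 202]
base, deltas = delta_encode(pixels)         # deltas: [1, 0, 2, -1]
assert delta_decode(base, deltas) == pixels
```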

My guess is you will see a 4GB HBM card which uses their HBCC to stream in texture data as needed from system memory. It isn't ideal, but it's probably the best way to keep costs down.

I am not passing judgement, I am just saying how it looks right now. To think RX Vega would be any different is... I would say overly optimistic, and not based on anything other than marketing speak. I have said multiple times that I had pretty high hopes for Vega, but in reality it simply doesn't look so great.

Look, there is no way the gaming variant could be much faster than the more expensive game dev / game testing version. The backlash would be enormous.

Don't get me wrong, I don't expect it to be much faster either. Somewhere around 10% maybe.

But I can think of a few legitimate reasons why that might not actually be the case:

  • The compute card is probably tuned more for power efficiency.
  • Gaming-specific parts of the GPU may have been cut down. Think the rasterizer, their new tiled renderer, texture units, and the like.
  • Nvidia is adding specialized tensor hardware to its Volta cards. Those units are extremely useful for neural networks, but far less so for gaming. AMD may have done the same.

The card was pulling around 280 watts. Doesn't look very power efficient to me.

Again, it is marketed as a game dev and game testing card.
They can't turn off stuff that games need.

I never said it was. Just that it is more efficient.

I forgot about that. Sure doesn't sound like they are focusing on neural networks then either.

In its current form it is not a card sold for gaming either.

HBM may use less power, but placing it on the package right next to the die concentrates the heat generation in one small area of the card. Spreading the heat generation over a wider area reduces the peak thermal load the cooler has to deal with at any one spot, making the card much easier to cool.
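Back-of-envelope version of the density argument; every number below is invented purely to show the shape of the reasoning, not measured from any card:

```python
# The same hypothetical 30 W of memory power, concentrated in HBM
# stacks beside the GPU die vs. spread across GDDR5 chips on the PCB.
# All figures are made up for illustration.
hbm_watts, hbm_area_mm2 = 30.0, 200.0     # two stacks next to the die
gddr_watts, gddr_area_mm2 = 30.0, 1400.0  # eight chips around the board
print(hbm_watts / hbm_area_mm2)    # 0.15 W/mm^2 -> one concentrated hotspot
print(gddr_watts / gddr_area_mm2)  # ~0.02 W/mm^2 -> heat the cooler can spread out
```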

If you re-read what I said, you will note that I did not say that they would not use HBM at all. I speculated that they may use 4GB of HBM on the package as a near-to-die cache, with other memory behind it to feed that 4GB of HBM and the GPU's immediate needs. A GPU cannot use all 4GB of data at once, let alone 8GB, so there is no reason why a two-tiered frame buffer with intelligent switching could not work.
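A minimal sketch of what such a two-tiered frame buffer with intelligent switching could look like, assuming plain LRU paging. All names here are invented for illustration; this says nothing about how AMD's HBCC actually works:

```python
from collections import OrderedDict

class TwoTierBuffer:
    """Small fast tier (stand-in for HBM) backed by a large slow tier
    (stand-in for system memory); pages are pulled in on demand."""

    def __init__(self, fast_capacity):
        self.fast_capacity = fast_capacity  # pages the fast tier can hold
        self.fast = OrderedDict()           # page id -> data, LRU ordered
        self.slow = {}                      # backing store holds everything

    def write(self, page_id, data):
        self.slow[page_id] = data

    def read(self, page_id):
        if page_id in self.fast:            # hit: serve from the fast tier
            self.fast.move_to_end(page_id)
            return self.fast[page_id]
        data = self.slow[page_id]           # miss: fetch from the slow tier
        if len(self.fast) >= self.fast_capacity:
            self.fast.popitem(last=False)   # evict the least-recently-used page
        self.fast[page_id] = data
        return data

buf = TwoTierBuffer(fast_capacity=2)
for i in range(4):
    buf.write(i, f"texture page {i}")
buf.read(0); buf.read(1); buf.read(2)       # reading page 2 evicts page 0
```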

They have already shown that they have the technology to support that in the Polaris workstation cards they launched with built-in NAND flash. There is nothing that says a frame buffer has to be populated with memory that is all the same speed.

They made the mistake of selling Fury X cards with 4GB frame buffers when the market had already moved to 8GB. It is hurting those cards in 4K performance now, so I doubt AMD will make the same mistake twice.

The last time workstation cards came out first... It was:

....
No one would freak out either, lol. It's normal for the workstation cards to be clocked significantly lower and perform significantly worse than the gaming ones.
The Pro cards are tuned for power efficiency and pro uses, not gaming, so a 20-30% improvement for a gaming card on the same core wouldn't surprise me at all.

We just have to wait for its official launch, and then we will see benchmarks from more reputable sources to compare.


Something interesting that has come out of this is some testing I saw on the AMD subreddit that makes it appear as though Vega isn't using tile-based rendering/rasterization as they said it was going to...

Hmm. Will need more testing to see.

But honestly, this isn't impressive right now. It is worse than Fiji at the same clocks. Something isn't right.

I already explained why this does not make sense.


:open_mouth: If that is true then there is still hope. That would make a huge difference.

Meh. Tile-based rendering is not as exciting as Raja is trying to make you believe. Games already go to great lengths to ensure hidden objects are not rendered, so there's not too much left for the GPU to do here. Besides, the main improvement over immediate rasterization is reduced bandwidth, which is probably not overly helpful on a card with HBM2.

If they fucked up the driver, we should see a fix for that, or at least hear about it from AMD, in a couple of days or maybe a week. Because that is something game devs would probably be pissed about too.

Umm, it actually makes a pretty big difference...

That is where Maxwell gets a lot of its huge efficiency jump over Kepler...

Bandwidth, yes, but also fewer calls to memory in general, which saves power.
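Toy model of where the savings come from: an immediate-mode GPU writes every shaded fragment to external memory, so overdraw multiplies off-chip traffic, while a tiler shades into an on-chip tile buffer and flushes each tile once. The numbers and the simplifications are mine, purely for illustration:

```python
# Off-chip framebuffer traffic (in pixel writes) for a 1920x1080 frame
# with 3x average overdraw. Hugely simplified: ignores texture reads,
# depth traffic, compression, and everything else a real GPU does.
W, H, OVERDRAW, TILE = 1920, 1080, 3, 32

immediate = W * H * OVERDRAW    # every covered fragment goes off-chip
tiles = ((W + TILE - 1) // TILE) * ((H + TILE - 1) // TILE)
tiled = tiles * TILE * TILE     # each on-chip tile is flushed off-chip once

print(immediate)                # 6220800 pixel writes
print(tiled)                    # 2088960 pixel writes
print(immediate / tiled)        # ~3x less external traffic in this toy case
```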


But Maxwell cards have comparatively little memory bandwidth. Nvidia relies more on efficiency, while AMD appears to just try to fix the problem with faster memory.

I haven't seen any benchmarks specifically testing the impact of tile based rasterization so my knowledge here is purely theoretical. Maybe I'm underestimating the impact, but I don't think so.