
CES 2019 - Consumer Electronics Show



That too, should have been 8 GB of HBM2 and around $500-550.

That’s enough to beat the shit out of Nvidia in price/performance at least. Yeah, 8 GB isn’t a lot, but come on, there are 4 GB RX 580s, so why not an 8 GB “Radeon 7” that’s $550?


Damn, that’s an awesome system.


I assume they have two or four stacks of HBM to work around memory bottlenecks.


At least it looks better on camera than I do.


Nah dude, you clean up well. :kiss:


But couldn’t they just get HBM that clocks better? It seems like that would still cost less. Or at the very least, shave off 4 GB of HBM and cut the price by $100. You’d have a $600 GPU with 12 GB of HBM.


I posted this in the other thread, but the real story is that it’s a binned Vega 20. It was never intended for consumers/gamers, but Nvidia made it viable by not focusing on rasterization performance and jacking up prices on Turing.

I still wouldn’t buy one. Even though it matches a 1080 Ti/2080 in price/performance, it doesn’t have ray-tracing and will undoubtedly run much hotter.


The system looks nice, like really really nice.

One note: there are some jump cuts where the image zooms in and out; I would avoid those.


We’re still all new to this, and we had to cut out the part where Joe rambled on about how he’s a disappointment to his wife.


I’d have to dig through datasheets and find out what clocks they’re running.
But $100 for 4 GB of HBM is a very high estimate. $30 to $15 is the expected price cut per stack.
( We may have to move to here )


They literally announced the name, so you don’t have to speculate on what it’ll be called… Also, I assume 4 GB stacks aren’t actually much more expensive than 2 GB stacks anymore, so AMD decided to provide an actual capacity increase when moving back to a 4096-bit bus rather than stagnating and risking VRAM limitations in future applications (something Fiji got railed for).


Nope, it’s a binned Vega 20, and the Instinct MI50 has 16 GB of HBM2.


You know what else could have fixed that issue… GDDR6, or even GDDR5/X.

Vega could run with DDR4 and HBM2, so why not GDDR5/X or GDDR6? There were three ways they could have kept the GPU price down, and two of them wouldn’t sacrifice much performance.

I am hoping Navi somehow cannibalizes Vega II because the RTX will eat it alive.


It’s not so simple, they would need to completely redesign the memory controller, and Vega is notoriously memory bandwidth starved as it is. But I agree that if this GPU was intended for consumer/gaming use, they should have gone with GDDR5X or GDDR6.


Okay, fair enough, but then why release this card at all? It doesn’t have any merits over the also-overpriced RTX 2080 in the market it’s competing in, and I thought there wasn’t going to be a 7 nm Vega for gaming.

Is it a defective version of the Radeon Instinct or something? If so, how much does that GPU cost?

To AMD’s credit, it is without a doubt THEIR most powerful gaming GPU. It gets very close to GTX 1080 Ti performance and most likely surpasses the GTX 1080. The problem is that it isn’t competing with the 1080 but with the 2080.


As far as I can tell from the specs released so far, it looks to literally be a rebranded Instinct MI50. I haven’t seen prices on the Instinct series, but it’s certainly a lot more than $700.

They released it for the two reasons I stated: Nvidia didn’t focus on performance in existing games, and Nvidia jacked up pricing to a point where AMD could afford to sell it and make money.

It competes with the GTX 1080 Ti and RTX 2080, not the GTX 1080.


GDDR6 does not provide a significant bandwidth improvement over HBM2, and switching would require a substantial redesign of the core and of supporting components such as reference PCBs. Vega is extremely bandwidth starved, and by simply doubling down on HBM2 they also immediately double the bandwidth.
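Doubling the stack count is what doubles the bus width, and peak bandwidth scales directly with it. A rough sketch of the arithmetic, using publicly listed spec numbers (the per-pin data rates here are approximate assumptions, not measurements):

```python
def hbm2_bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak theoretical bandwidth in GB/s: bus width (bits) * per-pin rate / 8."""
    return bus_width_bits * data_rate_gbps / 8

# Vega 64: two HBM2 stacks, 2048-bit bus at roughly 1.89 Gbps per pin
vega64 = hbm2_bandwidth_gbs(2048, 1.89)      # ~484 GB/s

# Radeon VII: four stacks, 4096-bit bus at 2.0 Gbps per pin
radeon_vii = hbm2_bandwidth_gbs(4096, 2.0)   # 1024 GB/s, i.e. ~1 TB/s

print(round(vega64), round(radeon_vii))      # 484 1024
```

So going from two stacks to four roughly doubles the bandwidth even before any clock bump, which is exactly what a bandwidth-starved core wants.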

Vega 64 was the competition for the GTX 1080 and traded blows with it handily. Some overclocked Vega 56s can do the same. Radeon VII is going against the RTX 2080 and, by extension, the GTX 1080 Ti/Titan Xp. The original GTX 1080 shouldn’t be even remotely close to competition for this card.


Someone is a bit… salty.


Well, if they turn on DLSS I suppose that’s true, since you’re effectively rendering at half resolution and then upscaling with super magic AI. And it straight-up doesn’t support ray-tracing.

The FreeSync thing is all FUD. The Nvidia booth has a FreeSync demo showing a non-certified monitor performing like dogshit: flickering, terrible. But we all know FreeSync works fine on AMD, even on monitors with very limited variable refresh windows, so that’s an issue with Nvidia’s implementation, not the VRR standard.


Hopefully the Radeon VII actually gives the 2080 a run for its money when the AIB boards come out. Some price drops on both would be nice.