Thoughts on the Radeon VII end-of-life rumor

Rumors have been circulating that the Radeon VII is reaching its end of life, and it sounds like AMD didn’t officially deny them.

From Tom’s Hardware:

> We continue to see strong availability of Radeon VII in the channel for both gamers and creators.

I had been waiting to see whether the card would go on sale, what Navi performance would look like, and whether any conclusive tests would turn up.

My editing and color-correcting program of choice is DaVinci Resolve, so Puget Systems’ review of the 7 in Resolve was one of the best tests I’ve seen. The Standard Candle benchmarks show it to be a winner as well. In Resolve, the 7 actually seems like a value proposition next to the 2080 Ti and the Titan.

Now, with the availability of the card suspect, I’m thinking of picking one up before they’re out of stock everywhere. My local stores are pretty low or sold out, Newegg is out of stock, and I’m somewhat worried they won’t come back.

What are people’s thoughts on the future of the 7, and will there be another card with more than 8 GB of VRAM in the Radeon lineup? I wonder if the Radeon 7 will be a one-time wonder.

Glad I picked up an open-box one a couple of weeks ago. With the high cost of HBM2, I had a feeling the R7 wasn’t going to be around much longer after Navi released, and it looks like I was correct. I’m sad to see it go, personally. I think it’s a great card, unparalleled in compute workloads for the price. There’s a ton of hate surrounding the R7, typically from gamers who only care about FPS. Maybe AMD just made a mistake marketing it as a gaming card; I think it would’ve served better as a Titan-like card.

Anyway, there will be something to replace it eventually, but no timeline as of yet. The rumored replacement is big Navi; I’m certain it will give better FPS than the R7, but will it have 16 GB of VRAM? Doubtful, and it won’t have HBM2, so compute performance in some workloads is likely to suffer. Like you stated, the R7 may just be a one-time wonder… at least for a couple of graphics generations.

I’d agree that the gaming crowd put a dent in the R7, and AMD’s marketing didn’t help, as they pushed a level of hype; even so, the R7 did perform on par with, or slightly better than, a comparable GeForce. One can hope the R7 or a future replacement is re-used or tweaked for the Radeon Pro side, as it showed some interesting performance gains in certain areas.

From a consumer point of view, I don’t think you’ll see cards with more than 8 GB become mainstream, since that is considered “high-end,” and AMD has had to deal with RTX growing more popular for AI/machine learning; compute on Radeon takes a performance hit if your task or project leans CUDA-specific. What hurt AMD was trying to make one GPU that covered both consumer (gaming) and prosumer (creative/workstation) markets; that comes with its own set of pros and cons, and the power usage wasn’t exactly good PR. The Radeon 7 should have just been marketed as a Radeon Pro series product.


Not surprising. Even if they don’t officially EOL it, I’m sure you’ll see production winding down or retailers not bothering to stock it anymore.

Rumors are that initial numbers were extremely low. The Radeon VII likely only exists because Navi got delayed and AMD had to release something to keep investors and the like happy and show they had some ability to compete at the high end. It was a bodged-together card made from salvaged Instinct dies from the start. Coupled with the hugely expensive HBM (16 GB purely for bandwidth reasons), it is likely that AMD, if not losing money on every card sold, was barely scraping by. AMD honestly got lucky that Nvidia lost their mind with Turing’s pricing; there is no way they could have sold this card at all without the 2080’s pricing model to compare it to.

AMD really shot themselves in the foot marketing it as a gaming card. It most certainly is not a gaming card despite what the fanboys say. (“Muh 16GB VRAMMMMM for da 4KKKK 2080 will be obsolete in a year cus 8gb lul”- r/amd)

However, AMD really didn’t have much of a choice. It was a niche card from the beginning, and limiting it to professional use would have driven it into an even smaller niche, especially given the dominance of CUDA in most pro applications, its gimped double precision, and the massive dumpster fire that was the Vega FE; not to mention it could have impacted Instinct sales. It would have been a tough sell for the pro/workstation market too. They tried to spin it both ways, and it flopped.

If you need a GPU with 16 GB of VRAM and it is a good value for your needs, then go for it.

Most likely, yes. VRAM requirements are going to keep growing, and I’m sure in a few years 10+ GB will be the norm on consumer GPUs. But not for the foreseeable future on cheaper consumer-grade products; if you need that now, you will likely need to get a Radeon Pro or Quadro GPU and shell out the extra cash.

Likely yes, and honestly I, and you too, should hope it is. The Radeon VII was a panicked, rushed, hacked-out card. While good for a particular niche, its mere existence is a great example of how far Radeon has fallen. I really hope they can get their act together; they need a Zen moment for their GPUs. We shall see. Thus far, Navi isn’t inspiring confidence. Maybe, ironically, Intel will save the day.


AMD could always produce a new Radeon VII GPU with 4 HBM2 stacks but replace the Vega GCN core architecture with the new Navi RDNA-based architecture. It should be good for both gaming and compute. The power consumption of the new Navi RX 5700 XT GPUs is similar to the RTX 2070 Super in rasterised gaming, and an HBM2 memory controller requires less power than GDDR6.

Radeon VII is like the Fury.

I suspect it will co-exist alongside Navi cards for some time, as it offers much better performance in “some stuff” much like the Fury did vs. the R9 390X.

It also offers 16 GB of HBM2 (and more than double the bandwidth), which Navi does not. If you have a workload that needs those things, Navi is a non-starter at the moment.
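
To put rough numbers on that, here is a quick back-of-the-envelope sketch in Python, using the published memory specs (a 4096-bit HBM2 bus at 2.0 Gbps per pin on the Radeon VII vs. 256-bit GDDR6 at 14 Gbps on the RX 5700 XT):

```python
# Peak theoretical bandwidth: bus width (bits) x per-pin rate (Gbps) / 8 bits per byte
def bandwidth_gb_s(bus_width_bits, per_pin_gbps):
    return bus_width_bits * per_pin_gbps / 8

radeon_vii = bandwidth_gb_s(4096, 2.0)   # four 1024-bit HBM2 stacks at 2.0 Gbps
rx_5700_xt = bandwidth_gb_s(256, 14.0)   # 256-bit GDDR6 at 14 Gbps

print(f"Radeon VII : {radeon_vii:.0f} GB/s")           # 1024 GB/s
print(f"RX 5700 XT : {rx_5700_xt:.0f} GB/s")           #  448 GB/s
print(f"Ratio      : {radeon_vii / rx_5700_xt:.2f}x")  # ~2.29x
```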

If you’re a gamer, though, then Navi is clearly way better bang for the buck. But not everyone is interested only in gaming performance.

For what it’s worth, I have been keeping an eye on Radeon VII stock locally and haven’t seen it disappearing yet… but that’s mostly out of curiosity, as I plan to just put Vega on water instead.

Same response as for this…

Not unless they bump up the Navi CU count (which I suspect they very much will do in short order; we haven’t seen big Navi yet is my guess, and the 40-CU parts may just be fairly broken 64-CU dies they’re selling for cheap). Because right now, vs. the Radeon VII, Navi isn’t just memory limited: its raw compute throughput is only about 70% of the VII’s, and a small fraction of it in double precision.

It may be almost competitive with the Radeon VII in gaming in some instances, but in compute it gets crushed.
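
Rough peak-throughput numbers back that up; the sketch below assumes the usual two-ops-per-clock FMA counting and the shipped FP64 ratios (1:4 on Vega 20, 1:16 on Navi 10):

```python
# Peak throughput in TFLOPS = 2 ops (FMA) * shader count * clock (GHz) / 1000
def tflops(shaders, ghz, fp64_ratio):
    fp32 = 2 * shaders * ghz / 1000
    return fp32, fp32 * fp64_ratio

for name, shaders, ghz, ratio in [
    ("Radeon VII (Vega 20)", 3840, 1.750, 1 / 4),
    ("RX 5700 XT (Navi 10)", 2560, 1.905, 1 / 16),
]:
    fp32, fp64 = tflops(shaders, ghz, ratio)
    print(f"{name}: {fp32:5.2f} TFLOPS FP32, {fp64:4.2f} TFLOPS FP64")

# Radeon VII : 13.44 FP32, 3.36 FP64
# RX 5700 XT :  9.75 FP32, 0.61 FP64
```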

Just stopped by to say this.


Maybe, but Navi’s power consumption is already pretty bad. I don’t see how they could bump up the CU count and keep power within reason.

They could, but it would still be stupidly expensive and largely pointless for most people. HBM is pricey. Really, the only reason AMD needed to use it over GDDR5 was power: AMD’s GPUs are so hilariously power hungry that they kind of had to use the much more efficient HBM to keep total board power under control. GCN needs memory bandwidth, and the bus and clocks required to feed something like Vega 64 (which is still memory limited) would be massive and suck up all the electricity.
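
For a sense of scale, a minimal sketch of the arithmetic, assuming an 8 Gbps GDDR5 pin rate (about the fastest of that era) against Vega 64’s published 2048-bit HBM2 setup:

```python
# What GDDR5 bus width would Vega 64's HBM2 bandwidth require?
vega64_bw_gb_s = 2048 * 1.89 / 8   # 2048-bit HBM2 at ~1.89 Gbps -> ~484 GB/s
gddr5_pin_gbps = 8.0               # assumed fast-GDDR5 per-pin rate

bus_bits_needed = vega64_bw_gb_s * 8 / gddr5_pin_gbps
print(f"Vega 64 bandwidth : {vega64_bw_gb_s:.0f} GB/s")   # ~484 GB/s
print(f"GDDR5 bus needed  : {bus_bits_needed:.0f} bits")  # ~484 -> a 512-bit bus in practice
```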

Yeah… on a significantly more power-efficient node, and it is still slower… that isn’t impressive. Shrink a 2070 Super to 7 nm and watch it use half the power of Navi and beat it by even more in performance. Its performance per watt is better than Vega’s, for sure, but not that amazing compared to Nvidia’s.

I keep seeing this, but it is actually pretty comparable to Nvidia at an equivalent (rendering) performance level. It has dual 8-pin connectors, but it isn’t consuming nearly as much as people are whining about.

Maybe not directly compared to Nvidia, but for the performance, die size, and superior manufacturing process, it is power hungry, yes. Nvidia would use significantly less power on 7 nm.

My point was more that:
300-ish watts is about the top-end limit for a reasonable GPU, and Navi currently sits around 225-250 W. I don’t know how many more CUs, how much more memory bus, etc. they could cram in and keep it at 300 watts.

Sure.

But as a buyable product, Nvidia is not on 7 nm today, whilst AMD is. For the customer, 7/12/14/16 nm is irrelevant; what matters is what I can buy.

300 watts for a high-end GPU is totally reasonable. Go look up how much power the Nvidia high end uses when pushed, because that’s the sort of processing you’d be getting out of a 64-CU Navi, at least.

And they could fit 64 CUs in 150 watts if they lowered the clock enough. How do I know this? Because I’ve had a Vega 64 running at 130 watts whilst mining pretty effectively… consumption goes up drastically with clock speed as you get close to the edge, so fitting more CUs in there wouldn’t be too hard if they shaved the clock by 10% (which, again, Nvidia does at the high end… Titan clocks aren’t as high as, say, the 1080’s), and that would give a lot more than 10% headroom in terms of watts for more CUs.

Sure, you wouldn’t (normally) run a 64-CU card at speeds low enough for 150 watts (because 64-CU card buyers want max performance within, say, 300 watts), but you COULD. Which is my point: fitting the CUs in a given power budget is merely a function of clock speed, and power consumption rises steeply with clock (roughly with its cube, since voltage has to rise with frequency), whereas it scales no worse than linearly with CU count; less than linearly, really, since fixed costs like RAM and board power are already paid.
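
Here is a minimal sketch of that trade-off, assuming the usual first-order dynamic-power model (P ∝ CUs × V² × f, with voltage roughly tracking frequency near the top of the V/f curve, so power grows roughly with the cube of clock); the figures are illustrative, not measured:

```python
# Rough dynamic-power model: P ~ CUs * V^2 * f, and with V rising roughly
# linearly with f near the top of the curve, P ~ CUs * f^3 (an approximation).
def relative_power(cus, clock, base_cus=40, base_clock=1.0):
    """Power relative to a 40-CU part at its stock clock."""
    return (cus / base_cus) * (clock / base_clock) ** 3

print(relative_power(40, 1.00))  # 1.00 -- baseline (roughly RX 5700 XT class)
print(relative_power(40, 0.90))  # 0.73 -- a 10% clock cut frees ~27% power
print(relative_power(64, 0.90))  # 1.17 -- 60% more CUs for ~17% more power
print(relative_power(64, 0.85))  # 0.98 -- a ~15% clock cut fits 64 CUs in budget
```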


I am actually considering selling my 2080 Ti and getting a Radeon VII. On Linux, Nvidia-rendered textures exhibit artifacts; you see this especially in Shadow of the Tomb Raider under Steam Proton and in other games running through WINE and DXVK, and in some games, like Shadow of the Tomb Raider, it is more pronounced than in others. I don’t see this on AMD GPUs. I am only having doubts because I use VFIO/GPU passthrough on my systems and am not sure if Vega 20 and Vega 10 will be recognized as different GPUs. I guess I could wait to see what AMD announce next.
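
For what it’s worth, Vega 10 and Vega 20 carry different PCI device IDs (a Vega 64 reports 1002:687f and a Radeon VII 1002:66af, if I have those right), so VFIO should see them as distinct devices. A quick sketch for listing AMD GPU IDs on a Linux host:

```python
# List AMD (vendor 0x1002) display-class PCI devices with their device IDs,
# to confirm Vega 10 and Vega 20 enumerate as distinct GPUs for VFIO.
from pathlib import Path

for dev in sorted(Path("/sys/bus/pci/devices").iterdir()):
    vendor = (dev / "vendor").read_text().strip()
    pci_class = (dev / "class").read_text().strip()
    if vendor == "0x1002" and pci_class.startswith("0x03"):  # AMD, display controller
        device = (dev / "device").read_text().strip()
        print(f"{dev.name} -> {vendor}:{device}")
```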

I also find that Nvidia’s pro drivers, while easier to install on Linux than AMD’s pro drivers, are generally older than the drivers you need for games. With the Radeon VII, and AMD more generally, you can use the kernel driver and install Mesa and the ROCm components on top of it.

As for what AMD will replace the Radeon VII with, I agree with @colesdav that it will likely be a similarly priced Navi 20 part with HBM2 as well. If the RX 5700 XT is a mid-tier GPU at ~$400 to $450 (Anniversary Edition), then we can expect at least one or two more SKUs above it, and the top one will likely have 12 to 16 GB (I am thinking along the lines of Vega 56, Vega 64, Vega 64 LC, and Vega Frontier sitting above Polaris).

Also keep in mind, @DerKrieger, that AMD is producing Radeon Pro Vega II and Duo GPUs for Apple, and 16 GB Vega 56- or Vega Frontier Edition-like GPUs for Google Stadia. Stadia will use a lot of GPUs, which means HBM2 prices will probably drop. HBM is where GPUs, and even 3D-stacked APUs, are headed, so prices may not be as bad as past experience would suggest.

By the time all is said and done, you may be looking at AMD releasing another $700 GPU targeted at the same market the Vega Frontier Edition was. AMD will definitely release a GPU that’s faster than the RTX 2080 Ti (possibly even the Titan RTX) this generation.

The more I have learned about GPU physics these past few years, the less convinced I am that Nvidia’s 7 nm parts will be as powerful as people believe. They changed the fan design for a reason, and going down a node brings quantum tunnelling among other problems. There will be performance gains for sure, but I am not convinced they will be groundbreaking. We also have to remember it took Nvidia almost a year to get reliable yields on the RTX 2000 series, and those cards didn’t sell well; now, with AMD releasing the RX 5700 and out-manoeuvring Nvidia on pricing, even the RTX Super cards aren’t going to sell as well. The RTX 2080 Super makes no sense at its price point unless it comes very close to the RTX 2080 Ti, and even then AMD will respond with the RX 5800 XT.

By going down to 7 nm EUV, Nvidia are also going to face higher costs, which means they will still be more expensive, and yields will not be as high the first time round. AMD by then will be on their second generation of RDNA, their third generation of 7 nm GPUs, and pushing out APUs that will probably make discrete GPUs below an RX 580 pointless. We’ll see just how powerful AMD can make their APUs when we start seeing the PS5 and the new Xbox.

Basically, AMD with Navi are squeezing Nvidia from both ends. The faster AMD can make their APUs, especially with PCIe Gen 4, the less room there will be at the low end of the PC and OEM market. Nvidia will have to focus on the mid-tier to high end, and they will probably charge more as they continue to use large dies on their GPUs. Nvidia will also need to rethink DLSS and re-engineer their RT and Tensor Cores once consoles start shipping ray-traced games, if those games don’t use Nvidia’s ray-tracing approach (and I’m thinking GameWorks, even G-Sync, here).


I agree with the points you make, and I am aware that one reason AMD went with HBM in the first place was to reduce power consumption in the memory controller.

Sure, a new Navi-based Radeon VII would need more CUs.

Note that 2 GB HBM2 stacks are now in the latest spec, and I think they are being manufactured now. So one option for a gaming GPU would be four 2 GB HBM2 stacks, giving 8 GB of capacity to reduce cost while keeping memory bandwidth as high as on the Radeon 7: keep the 4-stack HBM2 bus width for bandwidth but reduce the “depth” of each stack. I do not know if HBCC is still present on the RX 5700/XT, but if it can work with a high-speed NVMe SSD over a PCIe 4.0 bus, that might compensate for the reduced VRAM?
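
A quick sketch of that arithmetic, assuming the same four 1024-bit, 2.0 Gbps stacks the Radeon VII uses, just at half the per-stack density:

```python
# Four HBM2 stacks: bus width and bandwidth depend on the stack count,
# capacity on the per-stack density, so halving density keeps bandwidth.
stacks, bits_per_stack, gbps_per_pin = 4, 1024, 2.0

for gb_per_stack in (4, 2):  # Radeon VII uses 4 GB stacks; 2 GB is the proposal
    capacity = stacks * gb_per_stack
    bandwidth = stacks * bits_per_stack * gbps_per_pin / 8
    print(f"{gb_per_stack} GB stacks -> {capacity:2d} GB total, {bandwidth:.0f} GB/s")

# 4 GB stacks -> 16 GB total, 1024 GB/s
# 2 GB stacks ->  8 GB total, 1024 GB/s
```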
