NVIDIA or AMD for Gaming and AI?

Here’s my conundrum. I have a PC with a Ryzen 5700g and I’m looking for a GPU to build a hypervisor rig.
I was initially going with something like a 6700 XT for the all-AMD performance boost, but I recently learned that those cards absolutely suck at AI. The 7000 series is supposedly somewhat better due to the AI accelerators, but the documentation can't agree with itself on whether cards like the 7600 and 7700 have ROCm support. Not to mention the price difference between the 6700 and the 7700.
I would rather not get an NVIDIA card for multiple reasons, but it seems like way more bang for your buck once you factor AI in. Running a hypervisor means their drivers would be confined to specific VMs and shouldn't be too much of a downside, though I'm still worried about Linux support and potential problems from pairing it with an AMD CPU.
With that in mind, for something in the price range of a 6600 XT to a 7700 XT on sale, which of the two would be better in my case?

RDNA3 is a generation ahead, but do the RDNA2 cards really suck that much in comparison? I mean… you're already at a disadvantage from the whole "it's not CUDA" thing, but I don't remember hearing about that many upgrades in RDNA3.

Overall there's very little difference in every other area, but RDNA3 is many times faster at AI tasks specifically, thanks to the AI accelerators.
Though it's still not as fast as the low end of the RTX series, going by the tests from Tom's Hardware.

AMD Radeons are not going to get better at AI anytime soon.

Performance-wise, the spec is there. But ecosystem-wise, Nvidia has already cornered the market. AMD is currently five times the trouble for half the performance. This is not due to the hardware, but due to everything currently being optimised for Nvidia. The fact that most models run faster through ZLUDA, a CUDA translation layer for AMD hardware, than they do natively tells you a bit about how bad it is on the AMD side.

That said, AMD is not completely worthless. If you are just going to dabble a bit in AI it can do the job, and a Radeon card will help you get your feet wet at the very least. But if you want to get serious, Nvidia is currently the only serious option in town, just like Adobe Photoshop has been for three decades.

Can you point me to those benchmarks? Last time I tried ZLUDA on llama.cpp, the performance was similar to the Vulkan backend, which is not as fast as ROCm.

For raw benchmarks, ROCm will be faster than ZLUDA. However, most applications are based on CUDA and are optimized for CUDA, and this is what makes them run faster on ZLUDA than their ROCm counterparts.

The problem is getting developers to optimise for the ROCm codepath, which they will not do because no one seriously uses ROCm, so people will not use ROCm because it sucks, so developers will not develop for it because no one uses it.

ROCm could be miles ahead of CUDA at this point and it still wouldn't matter much, because everyone will see the benchmarks of tools like Blender and go "Oh, a 66% penalty for this unoptimized, barely understood codepath. Well, ROCm just sucks I guess."
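
To see how blurry the "CUDA vs ROCm" line already is in practice: the ROCm builds of PyTorch expose AMD GPUs through the very same torch.cuda API that the CUDA builds use, so a lot of code written "for CUDA" runs unmodified on a Radeon, just with whatever performance the ROCm backend actually delivers underneath. A quick sanity check, assuming only that a CUDA or ROCm build of PyTorch is installed:

```python
# Quick check of which GPU stack PyTorch is actually running on.
# ROCm builds of PyTorch reuse the torch.cuda API for AMD GPUs.
import torch

print("GPU available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("Device:", torch.cuda.get_device_name(0))

print("CUDA build:", torch.version.cuda)  # None on ROCm builds
print("HIP build: ", torch.version.hip)   # None on CUDA builds
```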

@FOBEye I found a YouTuber named Chris Titus who has found green team support on Linux (meaning less fiddling once you have your team green graphics card set up) less of a headache if you set up any Linux distro without a full desktop environment (KDE or GNOME) and just have a window manager installed.

Chris Titus's YouTube channel is called Chris Titus Tech; you should check him out.

But again, do you have any examples of ZLUDA running faster than native ROCm? I'm genuinely interested in this.

This good enough?

That's vs OpenCL, so not exactly what I had in mind. When you mentioned "most models" I was expecting a comparison of tokens/s on SD or llama.cpp between ROCm (HIP) and CUDA through ZLUDA.

Like I said, last time I tried it, it was not even close to what the native ROCm backend could achieve, but more at the level of the Vulkan backend (which is faster than OpenCL for that project, since the OpenCL backend only offloads the matrix multiplications).
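
For anyone who wants to run that kind of comparison themselves, here's a rough sketch using llama-cpp-python (not a rigorous benchmark; the model path is a placeholder, and you'd install the package once per backend you want to test, e.g. a HIP/ROCm build and a Vulkan build, then run the same script in each environment):

```python
# Rough tokens/s measurement sketch, not a rigorous benchmark.
# Assumes llama-cpp-python was compiled against the backend under test
# and that a GGUF model exists at the (placeholder) path below.
import time
from llama_cpp import Llama

MODEL_PATH = "models/some-7b-model.Q4_K_M.gguf"  # placeholder path

llm = Llama(model_path=MODEL_PATH, n_gpu_layers=-1, verbose=False)  # offload all layers to the GPU

prompt = "Explain the difference between RDNA2 and RDNA3 in one paragraph."
start = time.perf_counter()
out = llm(prompt, max_tokens=256)
elapsed = time.perf_counter() - start

generated = out["usage"]["completion_tokens"]
print(f"{generated} tokens in {elapsed:.1f}s -> {generated / elapsed:.1f} tok/s")
```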

Thanks anyway.

There isn't much you can do; Nvidia's ecosystem is way ahead in this scenario.

I've had two Nvidia GPUs running with an AMD CPU on Linux for years now, with no issues so far.