To be honest, I'd prefer an H200 card, but it seems life has decided not to grant me such benefits.
The R9700 looks like a really nice card: 32GB of VRAM allows running some larger models locally that won't fit on 16GB cards. I have a 7800 XT and would add the R9700 to the setup so I can run AI on that card only while the 7800 XT drives the screens.
The prime function would be to educate myself more on AI and what it can do. As an older systems engineer, tech is evolving faster than I can keep up. I still love tinkering and learning new things, and LLMs + Python create endless possibilities for fun.
I'd personally want more VRAM (48GB or more) plus FP4/FP6 support, but the R9700 is a pretty good GPU for the money.
32GB does allow you to run some great models locally. 64GB (or two GPUs) gives decent acceleration for MoE models, but still can't fit gpt-oss:120B or MiniMax M2 entirely in VRAM.
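A quick back-of-envelope way to sanity-check these VRAM claims: weight footprint is roughly parameter count times bytes per parameter for the quantization, plus some runtime overhead. The numbers below (bytes per param, the 2GB overhead) are rough assumptions for illustration, not vendor specs; real usage depends on the runtime, context length, and KV cache.

```python
# Rough VRAM estimate: weights + a small fixed overhead for KV cache and
# runtime buffers. All figures are back-of-envelope assumptions.

BYTES_PER_PARAM = {"fp16": 2.0, "q8": 1.0, "q4": 0.5}

def vram_gb(params_b: float, quant: str, overhead_gb: float = 2.0) -> float:
    """Approximate GB needed to hold a params_b-billion-parameter model."""
    return params_b * BYTES_PER_PARAM[quant] + overhead_gb

# A ~120B-parameter model at 4-bit: about 62 GB before the KV cache grows
# with context, which is why it won't sit comfortably even in 64 GB.
print(round(vram_gb(120, "q4"), 1))

# A ~20B model at 4-bit lands around 12 GB, within reach of a 16 GB card.
print(round(vram_gb(20, "q4"), 1))
```

It's crude, but it explains why 32GB opens up a real tier of models that 16GB can't touch, while the biggest MoE models still spill out of 64GB.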
The 7800 XT should already let you run gpt-oss:20B and maybe Devstral/Mistral, so you won't notice major improvements there.
You might get some improvement from adding the R9700, but shortlist the LLMs you want to run before you rush out and buy the GPU (you should probably still get the R9700 regardless, given the shortage of PC parts).
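The shortlisting step above can be as simple as a table of candidate models and their rough quantized footprints, filtered by the VRAM you'd have. The model list and GB figures here are illustrative guesses; check the actual quantized file sizes before deciding, and note that splitting a model across two cards (16GB + 32GB) has its own overhead.

```python
# Hypothetical shortlist: model -> rough 4-bit VRAM footprint in GB.
# Illustrative numbers only; verify real quantized sizes before buying.
shortlist = {
    "gpt-oss:20B": 13,
    "Devstral (24B)": 15,
    "gpt-oss:120B": 62,
}

def fits(vram_budget_gb: int) -> list[str]:
    """Models whose rough footprint fits within the given VRAM budget."""
    return [name for name, need in shortlist.items() if need <= vram_budget_gb]

print(fits(16))  # what the 7800 XT alone can already hold
print(fits(48))  # 7800 XT + R9700 combined, ignoring split overhead
```

If the models you actually care about all land in the first list, the new card buys you speed and headroom rather than new capability, which is worth knowing before spending the money.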