Since getting my degree in Applied Data Science, I have collected quite a lot of data from ides. I want to combine that data with the local weather API to build and train a model from scratch to predict the weather more accurately. I also want to benchmark the R9700 against the Intel Arc A770 I'm currently using to see the difference, as well as test different ML frameworks and benchmark them.
I also want to play around with larger LLMs and measure the differences in accuracy and performance between them.
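On the model-from-scratch plan above: a minimal sketch of what a first baseline could look like, assuming tabular sensor features. The feature set, synthetic data, and least-squares approach here are illustrative assumptions, not details from the original post.

```python
import numpy as np

# Hypothetical setup: predict temperature from a few sensor features
# (e.g. humidity, pressure, current temp). Synthetic data stands in for
# the collected readings mentioned in the post.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))            # 200 samples, 3 features
true_w = np.array([0.5, -1.2, 2.0])
y = X @ true_w + 0.1 * rng.normal(size=200)

# "From scratch": ordinary least squares with a bias term.
Xb = np.hstack([X, np.ones((200, 1))])   # append bias column
w = np.linalg.lstsq(Xb, y, rcond=None)[0]

pred = Xb @ w
rmse = float(np.sqrt(np.mean((pred - y) ** 2)))
print(f"learned weights: {w[:3].round(2)}, RMSE: {rmse:.3f}")
```

A baseline like this also gives something concrete to time when benchmarking the same workload across frameworks and GPUs.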
Awesome that you're giving away those cards to those who could use them! Personally, I'd forgo the GPUs and would rather have the rest of that case. At least that would be something I could use, since I have neither the plans nor the hardware to make (proper) use of even one of those GPUs.
Apologies for the noise, and I wish you strength in picking the destinations of those cards! That isn't going to be easy from the looks of it. And of course, good luck to all those aiming to get their hands on one.
I'm not really sure whether the 9700 would end up in my main desktop alongside my 3090 (mixing CUDA and ROCm sounds a bit messy), or whether I'd move it into my second desktop solely for inference, but I guess I'd need to try it out to find out.
One thing I'm certain of is that the extra VRAM would be really welcome.
As a long-time forum member, I decided to take part in this contest, and I have published my idea of what I would do with the R9700 in the following blog post …
I started building my homelab this year with TrueNAS and an OpenWrt Flint 2 router.
It's an ASRock X570 Pro4 with a Ryzen 5 PRO 4650G and 16 GB of ECC memory. I would use the GPU for some local LLMs, and once I've saved up for another machine, I'd like to learn Proxmox and run the local AI there instead.
I’ve created the required project post outlining what I’m building with ARIA OS at ResilientMind AI and how the same hardware lane supports Help-Veterans.org.
The XFX 9700 would allow me to run heavier local AI workloads, expand resiliency and fault-injection testing, and validate real performance in fully self-hosted, offline-first environments without relying on cloud compute.
Appreciate the opportunity and the community here. Hopefully I can also find the time to lend a hand around the forums or contribute in similar ways.
I’d be incredibly grateful for this opportunity. I’m currently running a local AI stack with Ollama (gpt-oss:20b, qwen3:8b) and ComfyUI on a single RTX 5060 Ti 16GB, which has become severely limiting for my recent projects.
My current step is figuring out how to combine n8n, Ollama, and the other services.
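For the n8n + Ollama step: n8n's HTTP Request node can POST directly to Ollama's REST API, so a workflow node and a script hit the same endpoint. A minimal Python sketch of that call, assuming the default Ollama endpoint and one of the models named in this thread:

```python
import json
import urllib.request

# Default Ollama endpoint; adjust host/port for your setup.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_generate_request(model: str, prompt: str) -> dict:
    """Build the JSON body for Ollama's /api/generate endpoint."""
    return {"model": model, "prompt": prompt, "stream": False}

def send(payload: dict) -> str:
    """POST the payload and return the model's response text."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["response"]

# An n8n HTTP Request node would POST this same JSON body.
payload = build_generate_request("qwen3:8b", "Summarize today's factory log.")
```

The same body works from an n8n HTTP Request node, which keeps the workflow side free of custom glue code.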
But the final goal is to be able to call it through a bot, and to design and simulate fans for my family's factory business.
The memory constraint forces constant model swapping; I can barely run inference on a 20B-parameter model while keeping ComfyUI loaded, and running larger models is essentially impossible. I've been researching the R9700 extensively for the past few weeks, specifically drawn to its 32GB of VRAM per card (128GB across four), which would enable me to:
Run multiple large LLMs concurrently without memory constraints
Serve inference requests for fan design and simulation to improve development speed (a long-term infrastructure goal)
Drastically improve throughput on AI-driven workloads
Test and develop with state-of-the-art models that currently aren’t viable on my setup
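On the "multiple large LLMs concurrently" point above: Ollama can keep more than one model resident when the host has the VRAM for it. A hedged sketch of the relevant server settings (the environment variable names come from Ollama's server configuration; the values are illustrative, not a recommendation):

```shell
# Keep up to 3 models loaded at once instead of swapping them in and out
export OLLAMA_MAX_LOADED_MODELS=3
# Serve a couple of requests per model in parallel
export OLLAMA_NUM_PARALLEL=2
ollama serve
```

With 16GB, settings like these mostly cause eviction churn; with 32GB per card they become practical.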
With 4 of these cards, I could build a truly powerful local AI cluster for research and development—something that would have been financially out of reach otherwise. This isn’t just an upgrade; it’s the difference between “what’s theoretically possible” and “what I can actually build.”