The 4x AMD Radeon AI PRO R9700 128GB Giveaway

Hi, I work on engineering software. I would use larger graphics cards for training physics-informed models.

I’ve been a software developer for 25 years with a strong focus on games. My current day job is using LangGraph to build AI agents for patient analysis in my medical sales company’s enterprise monolith. These range from generating policy rule code for OPA servers to chatbots that staff use to sell our products and validate orders.

That’s not my project, however. I’m working on a space-economy game with Dwarf Fortress-like world generation using LLM tech on the user’s machine. It uses Ollama to host a local Qwen 3 4B Instruct model and, using native tool calls (and as big a context window as the hardware can handle), generates alien species, their backstories, relationships, factions, companies, and everything else that would be considered “flavor.” This is then fed into a traditional ECS game engine in Bevy (Rust) and combined with classic procedural generation techniques like blue noise generation and Poisson discs for laying out the map. Once the game is running, it will generate long-term strategies that companies use to compete against you and each other, updated in real time. This will also include some statistical modelling that leverages the GPU and PyTorch neural-net goodness.
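For anyone curious what Poisson disc layout looks like in practice, here is a minimal sketch of Bridson’s sampling algorithm, which produces the blue-noise-style point distributions mentioned above. This is a generic Python illustration, not code from the game (which is in Rust/Bevy); every name here is invented for the example.

```python
import math
import random

def poisson_disc(width, height, r, k=30, seed=0):
    """Bridson's algorithm: random points at least r apart (blue-noise layout)."""
    rng = random.Random(seed)
    cell = r / math.sqrt(2)                      # grid cell holds at most one point
    cols = int(math.ceil(width / cell))
    rows = int(math.ceil(height / cell))
    grid = [[None] * cols for _ in range(rows)]

    def grid_pos(p):
        return int(p[1] // cell), int(p[0] // cell)

    def fits(p):
        gy, gx = grid_pos(p)
        # points within r can only sit in the 5x5 cell neighborhood
        for y in range(max(0, gy - 2), min(rows, gy + 3)):
            for x in range(max(0, gx - 2), min(cols, gx + 3)):
                q = grid[y][x]
                if q is not None and (q[0]-p[0])**2 + (q[1]-p[1])**2 < r*r:
                    return False
        return True

    first = (rng.uniform(0, width), rng.uniform(0, height))
    gy, gx = grid_pos(first)
    grid[gy][gx] = first
    points, active = [first], [first]

    while active:
        base = rng.choice(active)
        for _ in range(k):                       # k candidate points per active point
            ang = rng.uniform(0, 2 * math.pi)
            rad = rng.uniform(r, 2 * r)
            p = (base[0] + rad * math.cos(ang), base[1] + rad * math.sin(ang))
            if 0 <= p[0] < width and 0 <= p[1] < height and fits(p):
                gy, gx = grid_pos(p)
                grid[gy][gx] = p
                points.append(p)
                active.append(p)
                break
        else:                                    # no candidate fit: retire this point
            active.remove(base)
    return points
```

The spatial grid is what keeps it fast: instead of comparing every candidate against every placed point, only the 5x5 cell neighborhood needs checking.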

I’m currently a couple months into the project and have the map, the UI, and a few other pieces built, but the main pull has been getting the inference infrastructure packaged in and the galaxy generation engine built. These are all done at this point. I want this to run on any commercial GPU with roughly 6GB+ of VRAM (that’s the model’s minimum footprint when fully optimized). I would like to get an AMD card, preferably this one, to test the full-context-length model against at some point. While Ollama supports ROCm, this testing has full-spectrum implications for the design of the game, so having the ability to test ROCm support early is crucial.

Which brings me to where I can help a bit, having worked with Qwen 3 and run smack into KV cache size as a result. The spillover to the third card you ran into was (and this is still a guess, as I didn’t zoom in to look at the settings) because the KV activations, and thus the cache, for Qwen 3 series models are enormous. Even with the 4B model (which is far smaller) and 8-bit KV caching, weights account for less than 20% of VRAM at a full context window. I’m using long contexts to generate a novella of data about a given species and feed it into the next species, so they can be related to each other historically, for example. So when you load the 80B model you can expect ~45GB of weights at Q4. Then add roughly 1MB per token of maximum context: at 8,192 tokens that’s about 8GB, and at 32,000 tokens you easily end up at 32GB and spill over to the next card immediately. You would struggle to get more than about 64k-80k max context on that model with an FP16 KV cache even with four cards. Most turnkey solutions can’t do 8-bit KV cache, much less 4-bit, so that would be the “likely culprit” from my standpoint.
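The back-of-envelope arithmetic above can be made concrete with the standard KV cache formula: per token, the cache stores K and V for every layer and KV head. This is a generic estimator; the layer/head/dimension numbers in the example are assumed illustrative values, not published Qwen 3 specs, and real per-token cost varies by architecture and quantization.

```python
def kv_cache_gib(n_layers, n_kv_heads, head_dim, context_len, bytes_per_elem=2):
    """Estimate KV cache size in GiB.

    Per token: 2 (K and V) * layers * kv_heads * head_dim * element size.
    bytes_per_elem: 2 for FP16, 1 for 8-bit KV cache.
    """
    per_token = 2 * n_layers * n_kv_heads * head_dim * bytes_per_elem
    return per_token * context_len / 1024**3

# Illustrative GQA config (assumed: 36 layers, 8 KV heads, head_dim 128):
fp16_32k = kv_cache_gib(36, 8, 128, 32768)                    # FP16 at 32k context
int8_32k = kv_cache_gib(36, 8, 128, 32768, bytes_per_elem=1)  # 8-bit halves it
```

Plugging in whatever a given model’s real config is shows quickly why long contexts, not weights, dominate VRAM on multi-card spillover.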


I run a YouTube channel (GreenMinusBlue) with 700+ Linux gaming benchmark videos, all recorded at 6K resolution on CachyOS, Fedora, Kubuntu, etc., using an RX 6900 XT, a 7900 XT, and an Apple Pro Display XDR. It’s one of the only resources for high-resolution Linux gaming performance data at this scale.

An R9700 would let me:

  • Provide day-one Linux benchmarks for the new RDNA 4 architecture
  • Run direct 7900 XT vs R9700 comparisons at 4K, 6K, 8K, 12K and beyond
  • Test Mesa driver maturity on new AMD silicon
  • Expand the only large-scale 6K Linux benchmark dataset that exists

Nobody else is doing this. The Linux gaming community has almost no high-resolution benchmark data — most content stops at 1080p/1440p or 4K. The few Linux gaming channels that exist don’t test at 6K and beyond.

I’ll put it to work immediately and publicly.


https://forum.level1techs.com/t/dwarf-fortress-in-space-with-ai/243245

I made a post because I wanted to yap a bit more about the actual game part and because them’s the rules.

Here’s my contribution. I had a lot of fun writing this up and looking back over my work. Hopefully it interests a couple folks :laughing:

Lots more fun pictures in the blog posts. Hope you enjoy.

Hello, people on the internet! It’s a bit long and unorganized, but here is my submission. Thank you.


TL;DR: Why This Library Needs an R9700 (I Promise Not to Use It for Overdue Fines) – feel free to read the full write-up

Hey Level1Techs community!

I’m the IT Director at the Rochester Hills Public Library in Michigan. Yes, a public library is entering this giveaway. No, I’m not using AI to build some annoying recommendation chatbot promoting novels you don’t want to read…

A Bit About Us:
∙ We are a team of two supporting 120+ employees (and what feels like endless volunteers) for the largest service population in the second-largest county in Michigan.
∙ Ditched Microsoft on-premise 2 years ago (best breakup ever). Staff all run Chrome OS Flex on refurbished Dells that couldn’t handle Windows 11. SUSTAINABILITY!
∙ Currently have my personal franken-GPU (7900 XTX with a manual fan controller) literally under my desk, and I’m not confident leaving it on 24/7

What We Are Actually Doing:
∙ Built a local AI assistant so librarians can write SQL queries in plain English (turns out “show me books we should weed” is easier than SELECT statements)
∙ Using Qwen to troubleshoot and write security detection rules
∙ Developed a comprehensive AI policy with our library director and legal team BEFORE deploying anything (apparently that’s not the norm)
∙ Planning bibliographic data cleanup for our item collection
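As a sketch of how a plain-English-to-SQL assistant like the one described might frame its requests to a local model: the table names, schema, and prompt template below are entirely hypothetical, invented for illustration, and not the library’s actual implementation.

```python
# Hypothetical catalog schema injected into every prompt so the model
# grounds its SQL in real tables instead of hallucinating names.
SCHEMA = """items(item_id, title, author, last_checkout_date, checkout_count)
patrons(patron_id, joined_date)"""

def build_sql_prompt(question: str, schema: str = SCHEMA) -> str:
    """Wrap a librarian's plain-English question in a read-only SQL prompt."""
    return (
        "You are a SQLite assistant for a library catalog.\n"
        f"Schema:\n{schema}\n"
        "Write one SELECT statement that answers the question. "
        "Never modify data.\n"
        f"Question: {question}\n"
        "SQL:"
    )
```

The resulting string would then be sent to whatever local model is hosting inference; keeping the instruction read-only (“one SELECT statement”) is a cheap guardrail before any query ever touches the catalog.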

Why We Need the R9700:
Because the “IT Director’s personal GPU with sketchy fan curves” sitting on an egg carton crate isn’t really production-grade infrastructure for a public library. Also, 32GB of VRAM means we can potentially run the models we want without playing “will it fit?”

Let us know if you are doing any projects like this as well! We’d love to see how other public institutions are leveraging local models for their workflows.


This is a project to create a micro OS kernel that scales horizontally as a compute function and integrates with CUDA to create simulation software/environments for modelling scientific experiments at extreme scale.

https://forum.level1techs.com/t/custom-os/243265?u=guywithtek

I’m excited to see what use cases the 2 R9700s will get, but I’m throwing my hat in the ring for a rig that will be used to analyze radio astronomy data, and potentially for some radio astronomy toolchain development if ROCm is suitable for this kind of data processing!

I made a small blog regardless of the outcome, and I will try to post updates when funding and time allow, here:

Take a look! Let me know what you think. Any comments/feedback on the idea is welcome. Best of luck to everyone!

My submission, AI Medical Records Processing.


I don’t have a project but have a friend who might.

AMD 9700 Giveaway - Communities / Blog - Level1Techs Forums

Whoopsie-daisy. I forgot to make the project description in the blog section and then link here. Local AI-enabled handwriting transcription/search/annotation system for tablets.

Wendell, will you allow people from countries other than America to participate? I am Dutch and I would really like to throw my hat in the ring, especially because I can’t continue my research project without them. I think my project is really interesting, otherwise I wouldn’t have spent so much time on it. I really hope you will allow me to participate!

Hi Wendell

Firstly, have been following your channel for several years now - keep up the great work.
BTW I am in Australia, so not sure if this is USA only or not.

Anyway, my project is: to find a way of generating a retirement income using AI. The long-term goal is to provide a service that will be of benefit to mankind.

Overview: I turn 60 next year, and I need to find a way of earning income that is less stressful and more inspiring than my current dead-end job. I have been working with ComfyUI, InvokeAI, KoboldAI, LM Studio and SillyTavern, learning how to push the models, the prompts, and LLMs until they break. My problem is that my RX9700XT, as good as it is, is hitting VRAM issues (and yes, I have been optimising for minimal VRAM usage).

You might be thinking, “this guy doesn’t seem like he has the experience to make anything of this giveaway.” However, I had a front seat in the OS wars, pushing OS/2 / eComStation / Linux against Windows. I’m currently running CachyOS. I wrote about these systems for PC User Magazine in Australia.

I have average programming skills in several languages, and have in the past run my own PC consulting business, etc. Basically, you would be giving me a chance to push what I know further and put me back on the path of a skill set I have always been passionate about.

I know I am up against other worthy projects in this list, and as I mentioned I am in Australia, so I might not even qualify. I will leave it to you and your fair judgement. Whoever you pick will, I am sure, be well deserving.

Cheers

This is the story of what led me here: I really enjoyed AI back when I was still in college, and this giveaway reminded me of that time.


My thread is here:

I would very much like to be able to run some bigger models locally, and would love the chance at winning.

Project - Getting students in STEM (in particular Engineering and Physics) using AI, to improve student retention and success. So many students drop out of engineering or STEM subjects in their first year of university, and many more never select STEM even though they have aptitude and talent. AI may be part of the solution to this problem: using AI as a tutor, AI-created learning resources, using AI to analyse student data, etc.

It’s a small part of my PhD, which I do part time. I used to teach Engineering at a university; now, after being made redundant, I teach junior science and Physics at a public high school in western Sydney.
https://www.linkedin.com/in/ben-kelley-b19bb927/

Main inferencing machine (currently being built; I am waiting for a CPU to arrive). I built this out of my redundancy pay.

  • 8 x MI50 32GB
  • EPYC 7532
  • 512GB RAM

Secondary inferencing/training machine: 3 x 5060 Tis in a Lenovo P920. I want to move this machine on and build another EPYC, but this time with 4 x 32GB 9700 Pros. Once these are set up I can look at making them accessible to other researchers and students.

I have a wide range of interests, and now I am experimenting with using multiple AIs for arbitration, continuous state, and ongoing real-time learning: basically using LLMs as a component of an overall system of intelligence rather than just a chatbot. For that I need multiple boxes running multiple models. I code (badly?!) and try to make the path easier for those who follow behind.
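A minimal sketch of the “multiple models for arbitration” idea is a majority vote across independent model outputs: ask several models the same question and only accept an answer enough of them agree on. This is purely illustrative under that assumption; the real system described is presumably far more involved.

```python
from collections import Counter

def arbitrate(answers, min_agreement=2):
    """Majority-vote arbitration over model outputs.

    answers: raw text answers from several independent models.
    Returns the normalized winning answer, or None if no answer
    reaches min_agreement votes (i.e. the models disagree).
    """
    tally = Counter(a.strip().lower() for a in answers)
    best, count = tally.most_common(1)[0]
    return best if count >= min_agreement else None
```

A "no consensus" result (None) is itself useful: it can route the question to a bigger model, a retry with more context, or a human.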

If anyone from AMD (or elsewhere) out there has old 7003-era EPYC CPUs, throw them my way! It’s been great to see how AMD has grown significantly into the AI space, and cards like the MI50 are now heroes to the low-cost AI researcher. It’s through products like that that AMD gains huge community support and shifts entire industries.

It’s been great to read everyone’s projects. It’s a real shame that, with the memory crisis and GPU crisis, researchers, students, and enthusiasts who struggle to get even a single old GPU will find projects impossible going forward, and that the entire field may be starved of young researchers.


Yeah, international shipping is fine; however, if the shipping is restricted or problematic we reserve the right to just send the cash equivalent instead.


I’m working on a long-running personal project focused on AI-driven market analysis and prediction, where models continuously evolve through automated feature discovery, selection, and evaluation. The system is designed to test large numbers of combinations and adapt based on real-world performance rather than static accuracy metrics.

An R9700 would let me push much further into GPU-accelerated training, parallel model evaluation, and larger search spaces that aren’t practical on my current hardware. It would significantly reduce iteration time and make it possible to run more complex experiments that are currently limited by compute and memory constraints.

Project details are intentionally kept high-level here, but this is an active, hands-on build using real data, real constraints, and a strong focus on efficiency, automation, and long-term experimentation.