Small scale LLM/AI system recommendations

I am thinking of spinning up an LLM for my own personal use at home, to make better use of my gaming PC. Should I go with Llama or DeepSeek (before it gets banned), or something else?
Here is my current system; I'm thinking of upgrading to a 3090, a 6900 XT, or a 7900 XT for the extra VRAM:

Ryzen 7 5700X3D
48 GB RAM @ 3000 MHz
1 TB Samsung 980 Pro
2 TB Samsung 970 EVO
RTX 3080 Ti

Any resources for getting started and building your own personal AI would be greatly appreciated, and any feedback on what to do with my system would be great too. Thank you.

I would recommend trying out Ollama and running a small-parameter model like DeepSeek (all of this is free). You can run Open WebUI in a Docker container for a solid front-end experience. Later on, for development, you can also access a local API for the model, if I remember correctly.
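If you go the Docker route for Open WebUI, the quick-start command looks roughly like this (ports, volume name, and flags are the defaults from Open WebUI's README at the time of writing; double-check there, since they change):

```shell
# Run Open WebUI in Docker, pointing it at an Ollama instance on the host.
# Flags follow Open WebUI's documented quick start; verify before use.
docker run -d \
  -p 3000:8080 \
  --add-host=host.docker.internal:host-gateway \
  -v open-webui:/app/backend/data \
  --name open-webui \
  --restart always \
  ghcr.io/open-webui/open-webui:main
# Then open http://localhost:3000 in your browser
```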

https://ollama.com/ (install this software)
https://ollama.com/library (models are here)
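Once Ollama is installed, grabbing a model from the library is a couple of commands. The model tag below is just an example; pick whatever fits your VRAM:

```shell
# Download and chat with a small model from the Ollama library
ollama pull deepseek-r1:8b
ollama run deepseek-r1:8b

# Ollama also serves a local REST API on port 11434, handy for development
curl http://localhost:11434/api/generate \
  -d '{"model": "deepseek-r1:8b", "prompt": "Hello", "stream": false}'
```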

I’m thinking of picking up a powerful Mac mini soon to run more Docker containers; you can later expand to multiple Mac minis and connect them over Thunderbolt ($599 is a nice entry point).

As Sean said, just download Ollama and Open WebUI and go crazy with it. Try all the models you want and don’t overthink it. The worst that can happen is you don’t have enough VRAM and the model won’t run.
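On the "enough VRAM" point, here's a back-of-envelope sketch. The rule of thumb (an assumption, not a benchmark) is weights ≈ parameter count × bytes per weight, plus some overhead for the KV cache and activations:

```python
# Rough VRAM estimate for a quantized model. The 20% overhead figure is
# an assumption covering KV cache and activations, not a measured value.

def estimate_vram_gb(params_billion, bytes_per_weight, overhead=0.2):
    """Back-of-envelope VRAM estimate in GB."""
    return params_billion * bytes_per_weight * (1 + overhead)

# An 8B model at 4-bit quantization (~0.5 bytes/weight):
print(round(estimate_vram_gb(8, 0.5), 1))  # ~4.8 GB, fine on a 12 GB card
```

By that sketch, a 3080 Ti (12 GB) comfortably runs 7B–8B models at 4-bit, while a 24 GB 3090 opens up the ~30B class.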

I suggest the NetworkChuck tutorials on YouTube. They're easy to follow and they work.
