Need advice on a home server for AI + Docker, running AMD + a good GPU

First post, hopefully this lands well.

I want to build a machine that will let me do some, if not all, of the following using an AMD Ryzen chip (12 to 32 cores / 24 to 64 threads, maybe…) and a good GPU. The GPU could be Nvidia (better for AI training at the moment) or Radeon. I expect the board to support 128 GB of RAM or more.

The machine should be able to:

  • Must have - Run Docker (local AWS Lambda development; a rough sketch of what I mean is below this list).

  • Must have - Train some of the open source models.

  • Run some Jenkins workloads for Android builds or what have you…

  • Like to have - Possibly run TrueNAS (I have a QNAP box now; it's OK).
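
To be clear about the Docker bullet, this is roughly the local Lambda workflow I have in mind - a minimal sketch assuming the public AWS Lambda Python base image (file names and the function itself are just placeholders):

# app.py - toy handler; the official Lambda base images let you run this
# locally in Docker before deploying anything to AWS.
import json

def handler(event, context):
    # Echo the event back so a local curl invocation has something to show.
    return {"statusCode": 200, "body": json.dumps({"received": event})}

# Illustrative local workflow (image tag and names are placeholders):
#   Dockerfile:
#     FROM public.ecr.aws/lambda/python:3.12
#     COPY app.py ${LAMBDA_TASK_ROOT}
#     CMD ["app.handler"]
#   Build and run:
#     docker build -t my-fn .
#     docker run -p 9000:8080 my-fn
#   Invoke through the runtime interface emulator built into the base image:
#     curl -XPOST "http://localhost:9000/2015-03-31/functions/function/invocations" -d '{}'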

For cooling, I don’t want to maintain anything with water if I can help it.

The case is up in the air until I can locate all the parts needed, I think.

Let me know your thoughts, suggestions, or comments, or whether this is the right spot for questions on potential builds.

Not trying to go broke but I have a couple coins.

Just as an FYI, depending on the QNAP box you already have, you can sideload TrueNAS on it.

Hey. Welcome to the forums!

The moment you said "train some models" the ceiling on how much you could spend blew off. So can you give us a rough idea of what sort of budget you're talking about? Also, what sort of storage did you have in mind - both speed and capacity? The reason I ask is that consumer Ryzen CPUs are a little limited on PCIe lanes (but otherwise great). If you hate money, there's a new Threadripper around the corner, I hear.

(It's a little unfair of me to make that comment about the price of Threadripper; it'll likely be reasonable for what it is, but it's obviously a whole different ballpark and likely overkill.)

Sorry for the super late reply. I did a little research (watched some YouTube… GN and Level1). I may not need a Threadripper (a TR board alone will probably run >$1K), but we will see. Budget is $5K right now ($2K on the GPU at most). I have been reading up on how to get the most with the minimum. While I wait on what I think will be the processor, I'm getting up to speed on which open source AI models to use. Moving slow and in no rush.
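
For anyone curious what "getting up to speed on the models" looks like, the kind of training workload I'm picturing is roughly this - a minimal fine-tuning sketch with Hugging Face transformers, where the model and dataset are just stand-ins, not what I'll actually run:

# Minimal fine-tuning sketch: a small causal LM on a tiny slice of a public
# dataset. Model name, dataset, and hyperparameters are placeholders.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer, Trainer,
                          TrainingArguments, DataCollatorForLanguageModeling)

model_name = "gpt2"  # stand-in; swap for whichever open model fits in VRAM
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

# Tiny public corpus as a placeholder training set.
dataset = load_dataset("wikitext", "wikitext-2-raw-v1", split="train[:1%]")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True, remove_columns=["text"])

args = TrainingArguments(
    output_dir="out",
    per_device_train_batch_size=2,
    gradient_accumulation_steps=8,  # trades speed for VRAM headroom
    num_train_epochs=1,
    fp16=True,                      # assumes an NVIDIA GPU with mixed-precision support
    logging_steps=50,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()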