Best approach for mid-tier AI workstation?

What is the best choice for a mid-tier AI workstation? The 3090 and 4090 are definitely out of reach. My current PC has a 970 and a 4th generation i5, so my thought is to do a pretty comprehensive upgrade, aiming for about $1000.

Coming out of an internship and looking to get into AI so I can contribute to projects like George Hotz’s tinygrad.

Should I avoid AMD? Are there niche cards I should know about?

It seems like the GPU is what really matters here. Let me know if that’s incorrect.

Why do you think you should avoid AMD? Zen 4 implements AVX-512, which I know has AI uses. I don’t know exactly what those are, but it’s more than what Intel is offering on its consumer platforms.

@John_Wyatt I think he was talking about GPUs.

@n8r If you are going to develop, a decent CPU is helpful, as you’ll be compiling a lot. Which components will you need for the $1000? Is second-hand an option? For this budget, AM4 might be a good fit: a motherboard with two PCIe 4.0 x8 slots so you could add a second GPU later, a 5900X or 5950X, 64 GB of RAM, and the rest on the biggest second-hand GPU you can afford. On a budget, a used NVIDIA 3060 may be a good choice; it has 12 GB of VRAM, which is a lot for the price.


VRAM is one of the most important hardware specs for AI: the more VRAM you have, the bigger the neural networks you can fit. The other important factor is how many compute cores the chip has.
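To make the VRAM point concrete, here is a rough back-of-the-envelope sketch for estimating how much memory a model’s weights alone need. This is an illustration, not a precise rule: real usage adds activations, KV cache, and framework overhead on top.

```python
# Rough VRAM needed just to hold a model's weights.
# Real-world usage is higher (activations, KV cache, framework overhead).

def vram_gb(n_params_billion: float, bytes_per_param: float = 2) -> float:
    """Approximate GB of VRAM for a model's weights.

    bytes_per_param: 4 for fp32, 2 for fp16/bf16, 1 for int8, 0.5 for 4-bit.
    """
    return n_params_billion * 1e9 * bytes_per_param / 1024**3

# A 13B-parameter model in fp16 needs roughly 24 GB for weights alone,
# which is why the 24 GB cards keep coming up in this thread:
print(round(vram_gb(13, 2), 1))   # ~24.2
```

So a 24 GB card fits a 13B model in fp16 with almost nothing to spare, while a 16 GB card would need quantization to 8-bit or lower for the same model.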

For that reason, a 7900 XTX at $950 is not a bad consideration for a budget AI workstation; it’s the most capable card in the price range… but not the fastest, by a wide margin!

Of course, you can also invest in a 4070 Ti, which sports only 12 GB of VRAM. If you can stretch to $1100, the 4080 is also an option, but it still has only 16 GB of VRAM.

The last option is a used 3090, which might dip below $1000 on eBay.

If you can find a used 3090, I would go with that, followed by the 7900 XTX: being able to run the same workloads as a 4090, only slower, is probably better than only being able to run some workloads really fast. Otherwise a 4080 is not a bad idea either, as long as you understand that 16 GB will limit you in this field.

I’d avoid AMD, since ROCm still gives you headaches getting things up and running (unless all you want to do is play with Stable Diffusion and the LLMs people have managed to get working on top of ROCm).

Since a 3090 is out of your budget, a 12 GB GPU like the 3060, 3080, or 3080 Ti might be a good option.

I wouldn’t really recommend AMD to anyone trying to do ML work, unless your actual plan is to tinker with ROCm support itself. It’s still not on par with Nvidia when it comes to software support and ease of use.

You get what you pay for, but if you want to run 24 GB neural networks for under $1k, the XTX is pretty much the only option you’ve got, unless you can find a used 3090 or 4090 at that price.

Of course, if 16 GB is all you need, a 4080 is a no-brainer at $1.1k.
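Putting rough numbers on that trade-off: here’s dollars per GB of VRAM, using only the prices thrown around in this thread (these are the thread’s estimates, not current market data):

```python
# $/GB of VRAM for the cards discussed above; prices are the rough
# figures quoted in this thread, so treat them as illustrative only.
cards = {
    "7900 XTX (24 GB)": (950, 24),
    "RTX 4080 (16 GB)": (1100, 16),
    "used RTX 3090 (24 GB)": (1000, 24),  # "might dip below $1000 on eBay"
}
for name, (price, vram) in cards.items():
    print(f"{name}: ${price / vram:.1f} per GB of VRAM")
```

By this one metric the 24 GB cards come out well ahead of the 4080, which is the whole argument for hunting down a used 3090 or an XTX; it obviously ignores speed and software support, which cut the other way.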

Very much agreeing with some of the points above. Going with AMD will likely have you spending more time troubleshooting how to make it work than actually using it.

To be frank, $1000 to upgrade your existing parts into a mid-range AI workstation won’t get you what you’re looking for, depending on what you’re planning to do. By and large the most cost-effective option is the 3090, clocking in at 10,496 CUDA cores and 24 GB of VRAM. The advantage it has over the 4000 series is that, if you decide to do so in the future, you can pick up a second one and link the two with NVLink, as long as your board supports it.

Otherwise, with your budget, I would suggest looking into subscription services that rent GPU/TPU time, like Google Colab, instead.

This topic was automatically closed 273 days after the last reply. New replies are no longer allowed.