Budget GPU server ideas

I’m looking to get a few GPU servers into a colocation space - machines that will house at least two 4090s each. The GPUs will be power-limited to 250 or 300 watts. Second-generation Epyc and DDR4 are quite attractively priced on the used market, so I’m thinking Epyc 7302P and 128 GB of DDR4 on Supermicro H12SSL-i or ASRock Rack ROMED8-2T motherboards. “Budget” can mean different things in different situations of course, but nodes like this could come in around 6000 USD with two 4090s plus the case and PSU mentioned in the next paragraph, which is a comfortable target for this project (though lower cost is obviously better).
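For reference, power-limiting the cards is a one-liner per GPU with nvidia-smi (a sketch assuming the NVIDIA driver is installed and you have root; the 250 W figure is the value from this post, and the accepted range depends on the card’s vBIOS):

```shell
# Enable persistence mode so the setting holds between CUDA jobs
sudo nvidia-smi -pm 1
# Cap each GPU at 250 W (one -i index per card)
sudo nvidia-smi -i 0 -pl 250
sudo nvidia-smi -i 1 -pl 250
```

Note the limit does not survive a reboot, so in a colo you’d want to run this from a systemd unit or similar at boot.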

My main question is what to house these in. I do want redundant power. My first thought was a 4U Sliger case and an FSP Group twin power supply (Wendell used one for the GKH build), but I have a couple of concerns about going this route:

  • 4090s want at least three connected 8-pin PCIe power connectors, and the FSP Group PSU only has a total of four. I could adapt the second CPU power connector into 2x PCIe connectors and split that across both cards, but is that safe? The sketchiness factor is uncertain.
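The connector arithmetic above can be sketched quickly (a rough sketch: 150 W per 8-pin and 75 W from the slot are the PCIe CEM ratings, and the 8-pin EPS/CPU connector is rated at least as high, so the adapted pair shouldn’t be the weak link on paper):

```python
# Worst-case per-connector draw for a power-limited 4090 fed by three 8-pin cables.
PCIE_8PIN_W = 150      # rated limit per 8-pin PCIe connector (CEM spec)
SLOT_W = 75            # additionally available through the PCIe slot itself
CARD_LIMIT_W = 300     # planned nvidia-smi power cap per card

# Assume the card ignores the slot and pulls everything through the cables.
draw_per_connector = CARD_LIMIT_W / 3
print(draw_per_connector)   # 100.0 W per connector

# Headroom against the connector rating, even before counting slot power
headroom = PCIE_8PIN_W - draw_per_connector
print(headroom)             # 50.0 W
```

So at a 300 W cap each connector sits well under its rating; the open question is really the quality of the adapter itself, not the power budget.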

  • The Sliger case and FSP Group power supply together cost about $970, which is perhaps enough to justify a proper server chassis from someone like Supermicro or Gigabyte. I’ve looked around a bit but can’t find anything competitive. Does anyone know where I might look?

  • Are there other systems I don’t know about that might offer competitive performance at a similar cost (or house more GPUs at a proportionally higher cost)? There are some LGA 3647 (and LGA 2011) servers on the used market that were designed to house large numbers of passively cooled GPUs, but I’m unsure it’s worth going with LGA 3647 when Epyc Rome is so affordable.

Used GPU servers usually go cheap if you’re OK with that. I have seen Epyc 7001-based 8x GPU servers for as little as $500.

The problem is that the application we have in mind really benefits from the Ada Lovelace architecture (or at least Ampere), and you can’t get 4090s into those Gigabyte 2U servers. If I could use P40s, this would be an amazing option, but given the need for bfloat16 tensor cores, there just isn’t a datacenter card priced in any way that makes sense…

The 4U vertical-style ones pop up occasionally too, if you’re not in a hurry. If you are time-constrained, then your planned build is a good option. 4090s in a rack design follow this standard:
[image: rack layout for vertically mounted 4090s]
