I’m looking to put a few GPU servers into a colocation space - machines that will house at least two 4090s each. The GPUs will be power-limited to 250 or 300 W. Second-generation EPYC and DDR4 are attractively priced on the used market, so I’m thinking an EPYC 7302P and 128 GB of DDR4 on a Supermicro H12SSL-i or ASRock Rack ROMED8-2T motherboard. “Budget” can mean different things in different situations, of course, but nodes like this could come in around 6,000 USD with two 4090s plus the case and PSU mentioned in the next paragraph, which is a comfortable target for this project (though lower cost is obviously better).
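For PSU sizing, here’s a rough per-node steady-state budget under the numbers above. The 155 W CPU TDP is the 7302P’s rated figure; the platform overhead and headroom factor are my own guesses, so treat this as a sketch rather than a sizing rule:

```python
# Rough per-node power budget. GPU limit and CPU TDP are from the build
# plan above; platform overhead and headroom are assumed values.
GPU_LIMIT_W = 300   # per-card software power limit
NUM_GPUS = 2
CPU_TDP_W = 155     # EPYC 7302P rated TDP
PLATFORM_W = 120    # guess: board, 128 GB DDR4, fans, drives, NIC

total_w = NUM_GPUS * GPU_LIMIT_W + CPU_TDP_W + PLATFORM_W
print(f"Steady-state estimate: {total_w} W")  # 875 W

# Leave ~20% headroom for transients and PSU efficiency sweet spot.
recommended_psu_w = round(total_w * 1.2, -1)
print(f"Suggested capacity: {recommended_psu_w:.0f} W per PSU module")  # 1050 W
```

With redundant power, each module of the pair would ideally carry that full load alone.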
My main question is what to house these in. I do want redundant power. My first thought was a 4U Sliger case with an FSP Group Twin power supply (Wendell used one for the GKH build), but I have a couple of concerns about going this route:
- 4090s want at least three 8-pin PCIe power connectors each, and the FSP Group Twin only has four total. I could adapt the second CPU power connector into 2x PCIe connectors and split those across both cards, but… is that safe? Sketchiness factor uncertain.
- The Sliger case and FSP Group Twin together cost about $970, which is perhaps enough to justify a proper server chassis from someone like Supermicro or Gigabyte. I’ve looked around a bit but can’t find anything competitive. Does anyone know where I might look?
- Are there other systems I don’t know about that might offer competitive performance at a similar cost (or house more GPUs at a proportionally higher cost)? There are some LGA 3647 (and LGA 2011) servers on the used market that were designed to hold large numbers of passive GPUs, but I’m unsure whether LGA 3647 is worth it when EPYC Rome is so affordable.
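On the 8-pin question in the first bullet, the back-of-envelope electrical math looks fine on paper: the PCIe CEM spec rates each 8-pin connector for 150 W continuous and the slot for 75 W, and the 300 W figure is the proposed power limit, not the card’s stock 450 W:

```python
# Can two 8-pin PCIe connectors feed a power-limited 4090?
# Connector/slot ratings per the PCIe CEM spec; the limit is assumed.
EIGHT_PIN_W = 150        # rated continuous power per 8-pin connector
SLOT_W = 75              # power deliverable through the PCIe slot
connectors_per_card = 2
gpu_power_limit_w = 300  # proposed software power limit

available_w = connectors_per_card * EIGHT_PIN_W + SLOT_W
print(f"Available: {available_w} W vs limit {gpu_power_limit_w} W")  # 375 vs 300
assert available_w >= gpu_power_limit_w
```

Two caveats that the steady-state math doesn’t capture: I believe the stock 12VHPWR adapter signals a lower power ceiling when fewer of its inputs are populated, and Ada cards can transiently spike well above their sustained limit, so the real headroom is thinner than 375 vs 300 suggests.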
