Threadripper EATX in a PowerEdge T320 case?

Howdy guys. I recently decided to go with the Threadripper Pro platform for a server build, and I'm running into a bit of a conundrum: I haven't really found a server chassis that I like. The Supermicro tower/rack cases end up around $500-$700, the Sliger rackmount case is currently way behind on production and only comes with a SATA backplane anyway, and I believe a good number of the cheaper Rosewill rebrand cases don't come with backplanes at all.

My question is this: used Dell PowerEdge T320 systems are relatively cheap, they come in a good form factor, they have a SAS backplane (I have eight 3.5" SAS drives to connect), they can come with dual redundant power supplies, and the rear I/O layout looks like it would fit a standard motherboard. I know that Dell uses proprietary motherboard sizes. How hard would it be to fit an EATX motherboard? Is it as simple as threading in some inserts for motherboard standoffs? The T320 typically comes with 500 W power supplies, but Dell made up to 1100 W units for this chassis. Would it be as simple as swapping in the higher-power units, or would I need the power distribution PCB from one of the higher-end SKUs?

Additionally, I'd kind of like to air cool this PC. I know most of the options are AIOs, so I was wondering if you guys had any recommendations for air cooling. I'm using the ASUS Sage Threadripper board with the 12-core Threadripper Pro 3945WX. I won't be stressing the CPU much, as this is more of a server-ish build.

Given what I'm going after, what would your recommendations be? Am I crazy for thinking about taking a used server case and modifying it for an EATX motherboard? It just seems crazy to spend high three figures on a brand-new case (most of which don't come with redundant PSUs) when you can get a used server with a SAS backplane and redundant power supplies for under $300. Would cooling be a significant issue space-wise? I noticed that the larger Noctua Threadripper coolers are only a single-tower design; wouldn't a dual-tower design be more efficient? I'm not super concerned about noise, but I don't want data-center-level jet-engine speeds and sound, and I assume the 92 mm Noctua coolers are very loud.

Please let me know what you guys think, what you'd recommend, and what issues you think I might run into.

Thanks. - Justin Utherdude

Dell chassis are great when used with official Dell hardware. Outside of that tight pairing, it's hit or miss; the answer is, it depends. Two PowerEdge servers with identical model numbers can have different internal hardware layouts, and some PowerEdge models of that era used daughterboards for card connections that an EATX board would have trouble accommodating. Those cases are also loud under load.

If you're not afraid of a Dremel and enjoy improvising on the fly with millimeters of clearance, it should be fine. If you're not into DIY, a commercial case would be the better choice.

Do you need it rack mounted? If not, get yourself a decent-size EATX case. I got a Cooler Master C700 Black Edition for my TR Pro build; with be quiet! fans and a Cooler Master AIO it is whisper quiet. Tons of HDD space and completely configurable internally. Add a SAS controller card and you're away. There are plenty of other options too.

I don't mind liberal use of a Dremel, and I was eventually planning to rack mount it. One of the big things a used server has going for it is a SAS backplane for hot-swapping drives, right? Wouldn't I need a SAS expander or multiple HBAs for eight SAS drives in a standard case?

I've never done a custom eight-drive build, but I've dealt with at least eight different models of PowerEdge servers professionally over the years. You should be able to get a SAS controller card (a Dell PERC if you keep the Dell case) and hook the drives up to the original backplane easily.

Here’s a comprehensive list of PERC controllers.

The issue is that when you add a non-Dell board, you lose the designed interconnects that usually run through the daughterboard (or PCIe riser). Any PCIe cards you add at the rear would need to be tied back into the new motherboard. Not impossible, but it could be a pain depending on clearances and cable routing. Also consider how the new additions will impact airflow.

This is an old HP server, but it shows the riser and the associated rear slot cards as an example. Replace the motherboard and you'd either need a replacement with the riser slot in exactly the same place (presuming it fits and isn't a proprietary keyed slot), or you'd have to rig up correctly grounded connections back to the motherboard some other way.

Good luck.

[image: riser and rear slot cards in an old HP server]

I believe the T320 uses mostly stock PCIe components; I think the daughterboards are for power distribution and the SAS backplane, so I should be able to connect to the backplane with an HBA. The power distribution is something I may have to get creative with. I'm also going to go with a forbidden router. I need to figure out whether XCP-ng can run something like TNSR to get 100 GbE routing working. Where's Wendell when you need him? Running both a NAS and a forbidden router on the same server with 100 GbE networking sounds like a Wendell problem.

Flying to Paris for lunch on the banks of the Seine with a supermodel, I expect.

Those tall, handsome YouTube sensations get all the breaks. :+1:
