Toying with the idea of a new home server

Hey all,

Looking for some advice. I’m considering changing up my homelab. Currently I’ve got a stack of small NUCs operating as a NAS and a separate Kubernetes cluster. I’d like to switch to a rackmount server or full-size system. However, it needs to be quiet and somewhat power efficient; I’m looking for <90W idle, ~750W full tilt. The biggest issue is noise: it has to be quiet because it will be housed in the laundry room attached to the master suite of the house.

Now for my system requirements:

  • 256GB memory
  • 16-32 cores at 3GHz+ (probably closer to 16 is good here, but I’m still thinking on this)
  • 8+ 3.5in disk bays; ideally 16 3.5in bays.
  • 2x Thunderbolt/USB4
  • 10GbE (or a PCIe slot for it)
  • PCIe capacity for at least 2 dual-slot GPUs.

*I have no need for IPMI, I’m rocking a PiKVM

This will be a general-purpose homelab machine. I’m definitely going to be running a bunch of homelab software, Kubernetes, a couple of game servers, possibly a Windows VM (for said game servers), and machine learning experiments (both training and inference).

I currently have all the storage I’ll need for this project, so I’m hoping to just get something I can transplant my NAS into. (NAS is just a Rocky Linux ZFS box)

I’m a bit out of the loop when it comes to what modern hardware is available, so I’m looking for advice on a high level.

Questions I have:
Is rackmount the way to go?
Is there a server out there that’s quiet from the OEM, or is there a sku that has a relatively easy path to quiet-modding the fans?
Should I be looking at current-gen desktop hardware?

P.S. Oh, should probably mention this; I’m just in exploratory phases right now, so I have no budget at the moment. Let’s consider cost as a secondary factor, but no hard limits.


If you want my suggestion, I’d build a single server with an EPYC CPU. It will meet all those requirements and come in well under your power budget (minus the GPUs).

1 Like

In retrospect, Sarge, I would have gone for something like this: it’s spacious enough to allow the installation of 120mm or maybe even 140mm Noctua fans.

rather than Sliger. Sliger is OK, it just has a few QC issues that I wish they would fix.

1 Like

They had QC issues? Damn, my SM580 was perfect up until I had to drill a hole in it :rofl:

I appreciate that; I’m considering something like that. Rackmount is just going to sit on a counter, or maybe I’ll make a bracket to mount it on the wall, not sure…

I’ll see if Silverstone offers something with a bunch of 3.5in disk sleds. Or maybe I’ll have to get a disk shelf, not sure yet… :thinking:

Any advice on the hardware platform? I think with the memory requirement I’ll need to go with Threadripper or EPYC, right? I know you can get 64GB DIMMs of DDR5, but… that’d be a lot of money.

This is where I’m a bit out of my element. Do we go EPYC or Threadripper? Intel is a hard no from me; there’s just not a compelling argument for it.

1 Like

See my previous statement. In my mind, if you want a server, get a server CPU; if you want a workstation, get a workstation CPU. As for the memory, that really isn’t much for such a platform, and honestly, with all the virtualization you’re planning, you might need more.

1 Like

I’ve just done a rack build, so here is what I have with some comments (I know it is not quite your spec, but…):

Rosewill 4500U case
AMD 7900X with 32GB RAM
Asus ROG Strix B650E-F mobo
4x Seagate Exos 14TB SAS (ST14000NM0081)
9500-16i HBA
Nvidia Quadro P400
Thermalright Peerless Assassin
Aquantia dual 10Gb NIC

The case comes with lots of fans, and I have them all connected. It is very quiet, even at full tilt. Total power consumption is around 200W, so a win on the noise issue! Compared to an old Supermicro it is clunky and the documentation is poor (for example, fitting the rails), but it works and it’s cheap!

Finding a CPU heatsink was a pain; the stated specs are hard to compare, as all manufacturers seem to do it differently. My CPU tops out around 80C at 100% load (running Folding@home and NAS duties) with PBO and EXPO enabled.

Each disk “bay” holds 5 disks with a 120mm fan at the front. TrueNAS Scale reports temps of 30-60C, but as they are all in the same bay I’m not sure how much I trust that. They are refurbished drives, so that may have an impact as well. Room temp is currently 22.9C at 43% humidity. The case is in an open rack.

Hope that provides some info to help with your decision.

1 Like

Go rackmount if you have enough gear to fill a rack, e.g. a central switch, your NAS, a backup NAS, PiKVM, PiHole, etc.
If you don’t plan to have at least that much gear and you’re happy with the current location of your gear, don’t bother.

I second the Rosewill 4500U case. I use it with an X99 mobo, 15x 8TB SAS drives, an HBA + expander, an AIO, and a Mellanox 40Gb card. As much airflow as you like, and consequently quiet.

Current-gen desktop hardware is severely PCIe-expansion limited. You can make it work as a NAS, but it likely won’t be able to hold enough gear to support your requirements as a Kubernetes host, game server, and machine learning lab.

Upgrading to a current-gen workstation (HEDT) platform (AMD or Intel based) will require more $$$, and I have my doubts that a machine with your requirements will run <90W idle.

Last-gen workstation will not make fitting into your power requirements any easier.

Maybe the Siena platform (AMD EPYC 8004 series) offers enough compute, expansion, and power efficiency for your use case.


Yeah, I think I was considering desktop for the cost savings and power envelope, but looking into it, I don’t know of a non-HEDT platform on the market that supports 256GB of RAM, so I think I’ll be SOL unless I go that route.

Ohh, that looks excellent. It’d be ideal if the disk bays were hot-swap, but that’s a sacrifice I’m willing to accept because everything else is pretty much perfect.

That’s an excellent result. Thanks for the review!

As long as it’ll fit an EATX board, I’m happy. (Which the marketing info says is true.)

Yeah, reading this back (I wrote the OP after a long day at work, dreading dealing with my current infra), I think I’ll need more than 20 lanes of PCIe. I suppose I could get away with 24, but that’s pushing it: 2x GPUs puts me at 16 alone; if we pretend the HBA can run at x4 and the 10GbE at x4, I might be able to work with that, but it’d be tricky.
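To put numbers on that, here’s a quick lane-budget sketch for the desktop scenario (assuming an AM5-style platform with ~24 usable CPU lanes and the two GPUs bifurcated to x8/x8; the figures are illustrative, not from any particular board manual):

```python
# Rough PCIe lane budget for the desktop-platform scenario above.
# Assumption: ~24 usable CPU lanes (AM5-style), both GPUs dropped to x8.
devices = {
    "GPU 1": 8,      # x16 slot running at x8 with two GPUs installed
    "GPU 2": 8,
    "HBA": 4,        # pretending it tolerates x4, as discussed
    "10GbE NIC": 4,
}

usable_cpu_lanes = 24
total = sum(devices.values())

print(f"lanes needed: {total} / usable: {usable_cpu_lanes}")  # 24 / 24
# Every CPU lane is spoken for -- zero headroom for NVMe, USB4 cards, etc.
```

By contrast, Threadripper and EPYC platforms expose on the order of 48-128 usable lanes, which is why the thread keeps steering that way.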

Yeah, I think the power efficiency is just going to be a pipe dream at this point.

1 Like

If you can live with 12 bays instead of 15, Rosewill also has the L4412U, which is a hot-swap version of that same case. The three brands I was looking at for cases were Rosewill, Silverstone, and Supermicro. The Supermicro cases are tanks, but they’re like $800 at retail. Silverstone is kind of in the middle.

I’ve heard the rails for the Rosewill case can be a bit problematic, but it’s going to be hard to find something better in that price range without taking a chance on used eBay stuff.

1 Like

I think I could definitely live with that. I was hoping to be able to double my current 8x8TB array one of these days, but I suppose I could find a way to work around it.

My thought on rackmount stuff is to just stack it on a shelf rather than actually deal with rails. I’ve got 2 rackmount bits right now: an R510, which is just a big ol’ paperweight until I can get around to the fan mod, and an 8-port 10GbE dumb switch (it passes VLAN tags, so that’s nice at least).

Maybe I should just do the mod on the R510 and call it a day with that. :thinking:

1 Like

If you can use the R510, do the fan mod and replace the R510 motherboard with either an AMD workstation motherboard or an AMD server motherboard (depending on your needs). I don’t know if such a drastic modification can be done, but that is what I would do in your shoes.

Not a terrible idea, but that’s a lot of work haha. Pretty sure the R510 is doing proprietary board form factor here as well, so I’m not sure it’s helpful.

I’m thinking Threadripper is unfortunately the best way forward. The EPYC servers are clocked so low and cost so much, I have a hard time justifying that. But I’ve also seen a lot of nightmares with Threadripper being buggy lately, at least the 7000 series. :confused:

Teaching sand to think was a mistake.

My thought process was that you already have a case with the R510, so why recycle it and purchase a new one? If I had an R510, I would see if the mod I suggested could be done. I am saving for a second- or maybe third-generation used AMD EPYC system and a rackmount case for it. If I went with a Threadripper, I would purchase it from Puget Systems instead of buying the parts and putting them together myself. I don’t think you would save much with a new system.

So it looks like the power supply is a standard ATX 24-pin, with a supplementary 12V 8-pin for the CPU power.

This might actually be doable. I’ve got an old Xeon motherboard I could test this with. Would probably need custom heatsinks though. :confused:

While it does look like it might be a somewhat standard screw-post pinout, it’s not an ATX standard.

I’m thinking I miiight be able to fit an mATX board in here?

The issue is the power supply connections. I’d need to get extensions to run them where they’re needed. But at least I wouldn’t have to solder the fan connectors onto a standard motherboard for the Noctuas.

That 24pin ATX pinout doesn’t look right to me

1 Like

Dammit, you’re right.

Way too much 12V and it’s completely pinned wrong.

Of course, I’m making the assumption that they actually color-coded the wires to their voltages.

1 Like

Yellow is +12V

Orange is +3.3V

Can’t read the others.

Here we are…
Dell pinout:

ATX Pinout:

So it looks like… we’re missing -12V and possibly the green power-on signal. Maybe pins 4, 5, or 16 are the “turn on” switch, or maybe server supplies are just always on? I’m really not sure.

So the question is: are -12V and the power-on signal actually needed?

It goes without saying, there’s a lot of +12V and not enough +5V.

To be clear, I’m in the process of considering a pinout conversion of sorts, where I take a custom 24-pin extension kit, re-pin it so that it converts from this proprietary pinout to ATX, and see if an old motherboard boots or if I’ve just made a smoke machine.
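For reference while re-pinning, here’s the standard ATX 24-pin assignment as I remember it (double-check against the actual ATX spec before wiring anything; pin 20 is NC on modern supplies, -5V on ancient ones). On ATX, the board turns the PSU on by pulling PS_ON# (pin 16) to ground, and -12V is mostly a legacy rail that many modern boards never actually use, though that’s worth verifying per board:

```python
from collections import Counter

# Standard ATX 24-pin pinout (from memory of the ATX spec -- verify before wiring!).
atx_24pin = {
     1: "+3.3V",  2: "+3.3V",  3: "GND",    4: "+5V",
     5: "GND",    6: "+5V",    7: "GND",    8: "PWR_OK",
     9: "+5VSB", 10: "+12V",  11: "+12V",  12: "+3.3V",
    13: "+3.3V", 14: "-12V",  15: "GND",   16: "PS_ON#",  # pull to GND to switch the PSU on
    17: "GND",   18: "GND",   19: "GND",   20: "NC",      # -5V on very old supplies
    21: "+5V",   22: "+5V",   23: "+5V",   24: "GND",
}

print(Counter(atx_24pin.values()))
# Only two +12V pins but five +5V pins -- the opposite balance of a server
# supply, which is why the Dell harness looks so 12V-heavy by comparison.
```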



So that would make sense if I wanted to spend $7k on a system. Puget is insanely expensive, and there’s a good reason for it, but I just can’t justify the expense. My budget is closer to half that.

That’s why I am looking at a second- or maybe third-generation used AMD EPYC system and a rackmount case.

1 Like