Case Suggestions/Guidance for NAS/Hypervisor (New or Old?)

Hey Everyone, new to the forum!

I am looking to build a NAS using Unraid, and also run a few VMs simultaneously. I could use some case suggestions!

Some background: I got my hands on 64GB of DDR4-2133 ECC RAM. I was hoping to get a PowerEdge T430 that my job was liquidating, but that seems to be a fantasy now. I’ve been playing around with a lot of servers lately and I’m falling in love with the idea of hot-swap drive bays. I currently own a PowerEdge R710 and a T420 but cannot keep both of them running due to electricity costs. Also, I cannot add the 64GB of DDR4 memory to those systems since they run on DDR3. I also run a Win10 VM as a game server (currently running Valheim) and I’d like to upgrade that to Windows 11 at some point.

So in light of all of this, I’m looking for the closest thing to a PowerEdge T430 holding 3.5" drives. My first thought was to just buy one of these 8-bay front-loading cases, but they’ve been getting mixed reviews, and half of them only support 2.5" drives anyway. Then I thought maybe I should look into cases with 5.25" bays and add Icy Dock-style hot-swap cages, but those 5.25"-bay cases are a thing of the past.

I’ve been looking at a lot of Wendell’s content and I want to get my feet wet with EPYC. I also want to buy a few drives to start and supplement them with several of my older drives until I replace them all. Looking forward to the replies!

Do you currently have a rack, or how are they mounted?

There are still some good tower options from Lenovo, Dell and HP (or even the HPE MicroServer).

But there is always the option of building your own.

I’m using a SilverStone CS381, which is a good chassis in a small form factor while still having 8x 3.5" hot-swap drive bays on an SFF backplane. I’m really happy with it.

You can get a standard PC chassis with lots of 5.25" slots and fill it with Icy Dock cages, but you end up paying the same or more. Icy Dock products are great, but they’re not cheap.

If you need more than 8x 3.5", rackmount chassis are the way to go.

In terms of the power bill, a consumer board + CPU (e.g. Ryzen) is much less demanding. I have a fully kitted-out Ryzen 5900X server with 10x drives, 4x DIMMs, 10Gbit networking and 8 NICs running at 75W idle and 130W under low to medium load.
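To put that in money terms, here’s a quick back-of-the-envelope sketch in Python. The electricity rate is an assumption (roughly what I pay); plug in your own tariff:

```python
# Back-of-the-envelope: what 24/7 idle draw costs per year.
# The tariff is an assumption -- substitute your own rate.

IDLE_WATTS = 75          # measured idle draw of the server
RATE_EUR_PER_KWH = 0.30  # assumed electricity price

HOURS_PER_YEAR = 24 * 365

kwh_per_year = IDLE_WATTS / 1000 * HOURS_PER_YEAR
cost_per_year = kwh_per_year * RATE_EUR_PER_KWH

print(f"{kwh_per_year:.0f} kWh/year -> {cost_per_year:.0f} EUR/year")
# 657 kWh/year -> 197 EUR/year, at idle alone
```

Run the same numbers against an older dual-socket server idling at 150W+ and the difference pays for a lot of hardware over a few years.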


I may be inheriting a ~20U half rack in the near future. I figured if I get that, the case wouldn’t matter so much since I’d only really need about 10U. (I’ve got a rackmount firewall and two Cisco gigabit switches on top of the R710 and T420.)

I do want to go with the build your own route. Just feels more fun :slight_smile:

I do like this case, but I was kinda sad it’s a micro-ATX case. Your power consumption is very appealing and is making me want to change my mind.

I wanted to go with EPYC mainly for the PCIe lanes. I wanted an ATX case and board with more PCIe slots in case I want to add more things to it in the future. Are you using a SAS/RAID card for your 10 drives?

That’s really the power of Ryzen. Better per-core performance than EPYC just because of the higher clock speeds, and you can tweak power consumption down with Eco Mode or conventional undervolting. With EPYC Milan, the lowest TDP is 180W, you can’t really tweak much, and you end up with lower clocks than Ryzen.

PCIe lanes and/or memory capacity are really the selling points of EPYC. If you need them, you really need them, and nothing beats a server platform then.

I’ve got 8x SATA ports, IPMI and on-board Intel 10GbE LAN on my board, so I didn’t have the need for an HBA or a 10GbE NIC. I only have a quad-port Gbit NIC (pfSense VM) in one slot and, temporarily, a GPU in the other slot (gaming VM).
6x HDD + 2x SATA SSD (boot drive for Proxmox) + 2x NVMe = 10. The case still has 2x 2.5" internal spots left in case I want some U.2 drives or more SATA SSDs.

I’ll probably take the GPU out once I get myself a new gaming PC, and then the slot is ready for an HBA or NVMe connectivity in case I need more storage.

Slots on consumer boards usually only matter if you don’t have IPMI and need a GPU, or if the board doesn’t come with 10Gbit networking and you need an extra NIC for that. But it’s the same with EPYC boards, just on a larger scale. I’m much more fond of the boards that come with lots of SlimSAS/OCuLink so you don’t need 2-3 extra cards for HBAs and converter AICs.

I want to implement 10Gb too. I’m using some older Cisco Catalyst 3750G switches with 4x 10Gb SFP ports. I wanted to buy one or two SFP+ NICs and connect them to the network that way. I don’t have a 10GbE switch.

I’m also looking to add a GPU at some point, most likely for Plex transcoding and such.

You can always get some SFP+ to RJ45 transceivers. That saves a slot, is cheaper than an SFP+ card, and you can keep your infrastructure while still using 10GBase-T.
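Once it’s cabled up, it’s worth confirming the link actually negotiated at 10G, since multi-rate SFP+ to RJ45 modules can sometimes fall back to lower speeds over marginal cabling. A minimal sketch for Linux, reading the speed from sysfs (the interface name is just a placeholder):

```python
# Check that a NIC negotiated the expected link speed (Linux only).
# /sys/class/net/<iface>/speed reports the current speed in Mb/s
# (the link must be up, or the read may fail).
# "enp1s0" is a placeholder -- substitute your interface name.

from pathlib import Path

def link_speed_mbps(iface: str) -> int:
    return int(Path(f"/sys/class/net/{iface}/speed").read_text().strip())

iface = "enp1s0"
speed = link_speed_mbps(iface)
if speed < 10_000:
    print(f"{iface} only negotiated {speed} Mb/s -- check transceiver/cable")
else:
    print(f"{iface} is up at {speed} Mb/s")
```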

The CPU can do transcoding too, as long as you don’t use an old quad-core. I don’t see a problem unless you’re transcoding more than 2-3 streams at the same time or you went really cheap on the CPU in the first place. That not only saves a slot, but also a GPU, and GPU idle power 24/7 adds up too.
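If you want to see how far CPU-only transcoding gets you before committing to a GPU, you can fire off a few parallel software (libx264) encodes with ffmpeg and watch the load. A rough sketch with placeholder file names:

```python
# Rough capacity test for CPU-only transcoding: run N software
# (libx264) encodes in parallel and watch CPU usage while they run.
# Input/output file names are placeholders.

import subprocess

def cpu_transcode(src: str, dst: str) -> subprocess.Popen:
    # libx264 is a software H.264 encoder: pure CPU, no GPU involved
    return subprocess.Popen([
        "ffmpeg", "-y", "-i", src,
        "-c:v", "libx264", "-preset", "veryfast",
        "-c:a", "aac",
        dst,
    ])

jobs = [cpu_transcode(f"in{i}.mkv", f"out{i}.mp4") for i in range(3)]
for job in jobs:
    job.wait()
```

If all three finish faster than realtime, that CPU can handle three simultaneous streams without breaking a sweat.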

I did not know those existed. I might go with a simpler build after all then. Definitely going to look into AM4 boards in this space. I was originally looking into this style of motherboard, but dropping the SFP cards does reduce what I need. Your suggestions are amazing.

When I was building my home server, I wanted low power (€0.30/kWh here) and low noise, and I didn’t want to waste space (it disturbs my feng-shui-aligned girlfriend) or spend money on a rack + rack equipment. I went with the X570D4U-2L2T, which costs as much as some cheaper EPYC boards but got me everything I wanted in a low-power micro-ATX package.

For the CPU cooler on there, are you using an AIO or a low-profile cooler?

A 240mm be quiet! AIO. I also wanted less heat in the case itself, which was another factor. A 40mm Noctua on the chipset and 120mm fans in the side panel keep everything on the board well ventilated at very low RPM.

I asked support for the height of the pump and it does fit, but you won’t get a finger between it and the drive bays :slight_smile:

Great news guys, I got hold of a 24U rack with about 47 inches of depth!

My options for cases just opened up a bit, so I may go with a rackmount option here.

Just purchased 3x 8TB drives, so the build is on the way!
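Under Unraid with single parity, the largest drive gets dedicated to parity and the rest is usable space, so 3x 8TB nets 16TB usable. Quick sketch of the math:

```python
# Unraid single-parity capacity: the largest drive is dedicated to
# parity (it must be at least as big as any data drive), and the
# remaining drives count as usable data space.

drives_tb = [8, 8, 8]  # the 3x 8TB drives

parity_tb = max(drives_tb)
usable_tb = sum(drives_tb) - parity_tb

print(f"parity: {parity_tb} TB, usable: {usable_tb} TB")
# parity: 8 TB, usable: 16 TB
```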

I like my Sliger and it has 3x 3.5" drive mounts.

So I’ve completed my build and I’m ready to have some fun with it!

Went rackmount with the Rosewill RSV-L4412U 4U server chassis.
AMD EPYC 7282
64GB ECC DDR4
ASRock Rack EPYCD8-2T

Bought a shelf to sit the server on, as I hear the rails for this case kind of suck. I also wasn’t able to close the rack with the handles attached to the chassis.

Ordered some SFP+ to RJ45 transceivers for my 10GbE NICs.

Thanks for all the help and suggestions!


For next time: the Fractal Design Node 804. mATX, 5 expansion slots, 8x 3.5", 2x 3.5"/2.5", 2x 2.5", 5x 140mm and 5x 120mm fan mounts (10 total), comes with 3x fans, max CPU cooler height 160mm. Worst case, you can probably stack them on top of each other :slight_smile:

I looked at that case, but for some reason I was super obsessed with having hot-swap drive cages.


Aaah… but then you should of course have it. (Now I had to see if I could find a hot swap solution for my case. Hmm… ) :smiley:
