I am looking to build a NAS using Unraid, and also run a few VMs simultaneously. I could use some case suggestions!
Some background: I got my hands on 64GB of DDR4 2100MHz ECC RAM. I was hoping to get a PowerEdge T430 that my job was liquidating, but that seems to be a fantasy now. I’ve been playing around with a lot of servers lately and I’m falling in love with the idea of hot-swap drive bays. I currently own a PowerEdge R710 and a T420 but cannot keep both of them running due to electricity costs. Also, I cannot add the 64GB of DDR4 memory to those systems since they run on DDR3. I also run a Win10 VM as a game server (currently running Valheim) and I’d like to upgrade that to Windows 11 at some point.
So in light of all of this, I’m looking to get the closest thing to a PowerEdge T430 that holds 3.5" drives. My first thought was to just buy one of these 8-bay front-loading cases, but they’ve been getting mixed reviews. Also, half of them only support 2.5" drives as it is. Then I thought maybe I should look into cases with 5.25" bays and install Icy Dock-style drive cages, but those 5.25"-bay cases are a thing of the past.
I’ve been looking at a lot of Wendell’s content and I want to get my feet wet with EPYC. I also want to buy a few drives to start, and keep using several of my older drives until I replace them all. Looking forward to the replies!
I’m using a Silverstone CS381 which is a good chassis in a small form factor while still having 8x 3.5" hotswap drive bays and SFF backplane. I’m really happy with it.
You can get a standard PC chassis with lots of 5.25" slots and fill it with Icy Dock cages, but you end up paying the same or more. Icy Dock products are great, but they’re not cheap.
If you need more than 8x 3.5", rackmount chassis are the way to go.
In terms of the power bill, a consumer board and CPU (e.g. Ryzen) are much less demanding. I have a fully kitted-out Ryzen 5900X server with 10x drives, 4x DIMMs, 10Gbit networking and 8 NICs running at 75W idle and 130W under low to medium load.
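To put those wattage figures into money terms, here’s a quick back-of-the-envelope calculation. The €0.30/kWh price is an assumption (it’s the rate mentioned elsewhere in this thread); plug in your own tariff:

```python
# Rough yearly electricity cost for a server running 24/7.
# Uses the figures from the post above: 75W idle, 130W under light load.
# 0.30 EUR/kWh is an assumed tariff; adjust for your own rate.

def yearly_cost_eur(watts: float, price_per_kwh: float = 0.30) -> float:
    """Cost of running a constant load of `watts` for one year."""
    kwh_per_year = watts * 24 * 365 / 1000  # W -> kWh over a year
    return kwh_per_year * price_per_kwh

print(f"75W idle:  ~{yearly_cost_eur(75):.0f} EUR/year")   # ~197 EUR
print(f"130W load: ~{yearly_cost_eur(130):.0f} EUR/year")  # ~342 EUR
```

An idle R710 typically draws well over double that, so the difference between keeping old dual-socket iron running and a modern consumer platform shows up on the bill fast.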
I may be inheriting a ~20U half rack in the near future. I figured if I get that, the case wouldn’t matter so much since I’d only really need about 10U. (I’ve got a rack-mount firewall and two Cisco gigabit switches on top of the R710 and T420.)
I do want to go the build-your-own route. It just feels more fun.
I do like this case, but I was kinda sad it’s a micro-ATX case. Your power consumption is very appealing and is making me want to change my mind.
I wanted to go with EPYC mainly for the PCIe lanes. I wanted an ATX case so I can fit a board with more PCIe slots, in case I want to add more things in the future. Are you using a SAS/RAID card for your 10 drives?
That’s really the power of Ryzen: better per-core performance than EPYC just because of the higher clock speeds, and you can tweak down power consumption with Eco Mode or conventional undervolting. With EPYC Milan, the lowest TDP is 180W, you can’t really tweak much, and you end up with lower clocks than Ryzen.
PCIe lanes and/or memory capacity is really the selling point of EPYC. If you need it, you really need it, and nothing beats a server platform then.
I’ve got 8x SATA ports, IPMI and on-board Intel 10GbE LAN on my board, so I didn’t need an HBA or a 10GbE NIC. I only have a quad-port Gbit NIC (pfSense VM) in one slot and, temporarily, a GPU in the other slot (gaming VM).
6x HDD + 2x SATA SSD (boot drives for Proxmox) + 2x NVMe = 10 drives. The case still has 2x 2.5" internal spots left in case I want some U.2 drives or more SATA SSDs.
I’ll probably take the GPU out once I’ve got myself a new gaming PC, which frees up the slot for an HBA or NVMe connectivity in case I need more storage.
Slots on consumer boards usually only matter if you don’t have IPMI and need a GPU, or if the board doesn’t come with 10Gbit networking and you need an extra NIC for that. But that’s the same with EPYC boards, just on a larger scale. I’m much more fond of the boards that come with lots of SlimSAS/OCuLink, so you don’t need 2-3 extra cards for HBAs and converter AICs.
I want to implement 10Gb too. I’m using some older Cisco Catalyst 3750G switches with 4x 10Gb SFP ports. I wanted to buy one or two SFP NICs and connect them to the network that way. I don’t have a 10GbE switch.
Also looking to add a GPU at some point, most likely for Plex transcoding and such.
You can always get some SFP+ to RJ45 transceivers. That saves a slot, is cheaper than an SFP card, and you can keep your existing infrastructure while still using 10GBase-T.
The CPU can do transcoding too, as long as you’re not using an old quad core. I don’t see a problem unless you’re transcoding more than 2-3 streams at the same time, or you went really cheap on the CPU in the first place. Not only does it save a slot, it saves a GPU too, and GPU idle power 24/7 adds up.
Did not know those existed. Might go with a simpler build after all then. Definitely going to look into AM4 boards in this space. I was originally looking into this style of motherboard, but dropping the SFP cards does cut down on what I need. Your suggestions are amazing!
When I was building my home server, I wanted low power (€0.30/kWh here), low noise, no wasted space (it disturbs my feng-shui-aligned girlfriend), and no money spent on a rack plus rack equipment. I went with the X570D4U-2L2T, which costs as much as some cheaper EPYC boards but got me everything I wanted in a low-power micro-ATX package.
A 240mm AIO from be quiet!. I also wanted less heat in the case itself, which was another factor. A 40mm Noctua for the chipset and 120mm fans in the side keep everything on the board well ventilated at very low RPM.
I asked support for the height of the pump and it does fit, but you won’t get a finger between it and the drive bays.
For next time: Fractal Design Node 804. mATX, 5 expansion slots, 8x 3.5", 2x 3.5"/2.5", 2x 2.5", 5x 140mm fan mounts, 5x 120mm fan mounts (10 fan mounts total), comes with 3x fans, max CPU cooler height 160mm. Worst case, you can probably stack them on top of each other.