Building my first home server + NAS (Giveaway)

Hello!

I am building my first dedicated rackmount home server/NAS. I currently host a handful of webpages/services, around 30 Docker containers, an assortment of VMs, a pfSense router, a Plex server (going to try Jellyfin as well), a FreePBX server for our business, and 4 dedicated game servers. This is all spread out over 4 physical machines. I am planning to consolidate everything (except pfSense) onto this new server. For spare parts I could dedicate to it, I currently have a 5950X. I don’t have to use it in the build if you don’t think it is ideal, but it is available if it helps.

I was hoping you guys could give me a parts list/suggestions for filling out a $6,000 USD budget (the $6,000 is not a hard stop; it can flex if we go slightly over) as efficiently as possible while maximizing the entire budget (the 5950X is not included in the budget if you think I should use it). Some points I can list are as follows:

  • IT is my passion and I am finally looking to become gainfully employed in the field (so expandability matters more to me than any concerns about overkill)

  • I would like to include a switch with at least 24 ports (I assume I should shoot for a managed switch to practice with, but if you think unmanaged is better, or that I should just virtualize a managed switch for practice, the call is yours)

  • It will need a patch panel (cat6/6a)

  • It will need a rack of at least 12u (more if you think that is too small considering reasonable expandability)

  • It will need a UPS

  • Noise is something I want to reduce as much as possible. I considered a 4U chassis for this reason so I could use 120mm fans. I hoped something would exist in 4-5U that supported 140mm chassis fans, but I found nothing.

  • I want to create a ZFS pool with at least 60TB and 1-2 drives of redundancy (nothing in here will be critical so far, so I assume 1 drive of redundancy is fine, but you guys know better)

That is all I can think of so far but if you need any further clarification, please let me know and I am more than happy to help. Again, thank you so much for your expertise, I just don’t want to screw this up as it is my first go of it.

I know everyone’s time is valuable, so I will be giving away a $25 Steam gift card to the most helpful response that I end up going with. [Deadline is 6/18/22, 10pm CT]

===== UPDATE =====

I definitely have gone over budget, but I believe it is worth it. Here is my updated tentative parts list (Imgur link).

Let me know what you guys think, or if you have any suggestions.

Again, thank you everyone so much for all your help!


Noise in rackmount components is not usually a factor considered by vendors; you will have an easier time silencing the servers than finding relatively quiet UPSes (fanless UPSes stop at 1500VA) and switches.
I would suggest thinking about relocating the rack to a garage/basement/attic area where noise will not be a factor …

I’ve got a few questions first of all:

  • Speed Requirements for the Switch? How many 10Gig or 25Gig Ports?

  • Do you want to buy everything brand new? There’s lots of money to be saved if you go with a used Rack and UPS (but replace the batteries)

  • Is it an option for you to keep the router separate? It can be a pain to have a virtualized router/firewall, and at this kind of budget I’d never recommend going that route

I was thinking dual 10-gig SFP+ on the switch would be sufficient, which I can route to the server. Unless you think one 25G would be ideal; I just thought switches with 25G ports would raise the price quite a bit.

I would like to buy everything brand new.

I will be keeping the pfSense router separate.

I would love to but I am not sure there is anywhere else the wife would be happy with.

Then I am afraid most of your time will be spent sourcing low-noise components and compromising on expandability and performance.
Also, to give you a meaningful assessment, you should state what power draw (idle and max) you would expect/be able to economically sustain … I would assume something drawing 2kW when idle and 3-4kW under load may not be what you originally intended :) … keep in mind that a set of components drawing 0.5kW at idle will consume 12kWh/day, ~4,400kWh/year, and depending on where you live that may not be something you are willing to pay …
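To make that concrete, here is a minimal back-of-the-envelope sketch; the idle draw and electricity rate below are placeholder assumptions, so plug in your own numbers:

```python
# Rough yearly energy/cost estimate for a rack that idles most of the day.
# The 0.5 kW idle figure and $0.06/kWh rate are placeholder assumptions,
# not measurements of any specific build.

IDLE_KW = 0.5          # average draw in kilowatts
RATE_PER_KWH = 0.06    # electricity price in $/kWh

kwh_per_day = IDLE_KW * 24
kwh_per_year = kwh_per_day * 365

print(f"{kwh_per_day:.0f} kWh/day, {kwh_per_year:.0f} kWh/year")
print(f"~${kwh_per_year * RATE_PER_KWH:.0f}/year at ${RATE_PER_KWH}/kWh")
# -> 12 kWh/day, 4380 kWh/year, ~$263/year at this example rate
```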

Our rate is currently $0.06/kWh, so it shouldn’t be too much of an issue. I am not sure how to go about calculating the idle and max draw. If I go with the 5950X, I believe it draws 15-25W idle and ~105W max from what I can find.


Ok, do you want/need GPU passthrough (Plex/Jellyfin transcoding) and/or do you need to have hardware dedicated to specific VMs? That is what will drive the choice between desktop components and server/workstation ones …

If we stick with the 5950X then I will need GPU passthrough for transcoding. I can’t think of any other hardware I would need to specifically dedicate to VMs other than some NIC ports at some point.

If you use the 5950X you will be limited in PCI Express lanes, so the ‘at some point’ part will soon become ‘at no point’ :) because with one GPU (x8) and one dual 10Gbit card (x8) you will be using pretty much all the lanes of your motherboard, so there will not be any wiggle room for additional hardware to be passed through.
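A minimal sketch of that lane budget, assuming the usual AM4 allocation and using the example devices above:

```python
# Rough AM4 lane budget. Ryzen 5000 exposes 24 PCIe 4.0 lanes from the
# CPU: 16 for slots (PEG), 4 for an NVMe drive, 4 for the chipset link.
# The device list below is just the example from the post above.

peg_lanes = 16
devices = {
    "GPU (x8)": 8,
    "dual 10GbE NIC (x8)": 8,
}

used = sum(devices.values())
print(f"{used}/{peg_lanes} slot lanes used, {peg_lanes - used} left for passthrough")
# -> 16/16 used, 0 left; anything else has to share the x4 chipset link
```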

An alternative would be to go EPYC/Threadripper, but we’re talking 150W idle power draw and upwards, ECC memory, and in general higher component prices, so your budget may not be enough for the full rack/network/server/switching/UPS.

Another important question … one single non-redundant server, right? no cluster or shared storage, right?


Yes, it will be a single non-redundant server. I was hoping to use ECC memory for this build as there will be a NAS/file-sharing function. Do you think ECC is unnecessary? The motherboard I had originally looked at was the ASRock Rack X570D4U (or one of its variants).

Welcome aboard!

At some point all of us with a home server said: enough is enough, time to get this mess in order. We have affordable and easy-to-manage technology now to do this at home.

We need some more data on how demanding all these applications are. A 5950X is probably faster on a per-core basis than your old legacy zoo of hardware, and 16 cores isn’t trivial, but neither is your demand in terms of services.

Ryzen is a great platform for a home server, because it’s both power- and cost-efficient. But it also has limitations in terms of memory (bandwidth) and PCIe lanes for expansion and connectivity. I went with an ASRock Rack X570D4U-2L2T, which is one of the best Ryzen server boards out there.
But there is always the option to go full server platform, with ECC memory and all the capacity and bandwidth you will ever need. Obvious drawbacks are form factor, power draw (aka cooling, aka noise), and price. EPYC is in a good spot: you can get rather good and cheap boards, last-gen EPYC Rome CPUs are reasonably power efficient, and you have options up to 64 cores if need be.

4U chassis have the height to fit 140mm fans, but the width on the standardized 19" racks prevents using 3x 140mm fans in the front. That’s why you see “only” triple 120mm fans on selected 4U chassis.
And having triple fans in the front leaves you without any hotswap drive bays. This can be compensated by having an external SAS enclosure for HDDs, but adds cost for another 2-4U component.
Example 3x120mm fans: CX4170a | Sliger
Example classic 4U storage server front: 4U 4736 - Inter-Tech GmbH

No one ever got fired for buying APC. But there are smaller competitors out there for a fraction of the price. Determine your power draw and size the UPS so it sits at 60-70% load. And plan for future rack expansion so you don’t suddenly need a bigger UPS (pricey).
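A minimal sizing sketch along those lines; the load figure and power factor are illustrative assumptions, not measurements of any particular rack:

```python
# Minimal UPS sizing sketch: pick a unit so your steady-state draw sits
# around 60-70% of its rated capacity. Load and power factor below are
# illustrative assumptions.

load_watts = 600         # expected steady-state draw of the rack (assumption)
target_utilisation = 0.65
power_factor = 0.9       # typical ratio between W and VA ratings on a UPS

required_watts = load_watts / target_utilisation
required_va = required_watts / power_factor

print(f"Look for a UPS rated around {required_watts:.0f} W / {required_va:.0f} VA")
# -> about 923 W / 1026 VA for a 600 W load
```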

With ZFS and 6+ drives, you usually go for RAIDZ. And when talking 60TB+, a scrub or resilver will take quite some time (days to weeks depending on how much data has to be shuffled around). So people go for 2 disks’ worth of parity, because days of stressful resilvering might knock out another drive.

I would recommend going 6-wide RAIDZ2 for 64TB of usable storage with 16TB drives, or +16TB for each additional drive, up to no more than 10 disks total. 16TB is the sweet spot right now for new drives in terms of $/TB.
The choice between cheaper enterprise 24/7 drives (Toshiba MG08/09, Seagate Exos) and “for NAS” drives (WD Red Plus) is a matter of taste really. I personally went with enterprise drives and bought spares with the money I saved.
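A quick sanity check on the usable-capacity math, ignoring ZFS overhead and the usual advice to keep pools below roughly 80% full:

```python
# Raw usable capacity for RAIDZ layouts: data drives times drive size.
# This ignores ZFS overhead (metadata, padding) and the ~80% fill
# guideline, so treat the numbers as upper bounds.

def raidz_usable_tb(drives: int, drive_tb: float, parity: int) -> float:
    """Raw usable capacity in TB."""
    return (drives - parity) * drive_tb

print(raidz_usable_tb(6, 16, parity=2))   # 6-wide RAIDZ2 of 16TB -> 64.0 TB
print(raidz_usable_tb(7, 16, parity=2))   # each extra drive adds 16 TB -> 80.0 TB
```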

Also plan for backups.
Things can always go wrong and despite ZFS being very resilient, human failure or force majeure can never be ruled out.
I have 4 drives in a striped mirror configuration and have 2 drives as spares that also function as backup once a week. Incremental replication is really fast and can be automated with like 10 clicks.

I don’t know how much I/O you expect and how demanding all the services and containers are in terms of storage. In any case, reserve 32GB or more of RAM for the ZFS ARC and one or two NVMe SSDs for caching purposes. The read cache (L2ARC, the “cache” vdev) NVMe can be a cheap consumer drive.
If you expect lots of writes that would bring HDDs to their knees (especially in a parity RAID configuration like RAIDZ), you can use a RAID10 (striped mirror) configuration and/or a SLOG (an NVMe with high endurance, typically enterprise-grade SSDs like the Micron MAX series or Intel Optane P4800X/P5800X).

The more tiering and thus caching your pool has, the more fluid and performant it will be.

Networking speed and infrastructure are important. With 60TB+ and your budget, I expect at least the NAS to be 10Gbit. But if 1GB/s isn’t enough and you want some more bandwidth to your main workstation, people like to buy 25 or 40GbE NICs (like Mellanox ConnectX-3) off eBay to directly connect the storage to their workstation without needing costly 25/40Gbit switches. As long as you get a cache hit, ZFS can deliver this bandwidth, although the HDDs by themselves are too slow to do it.
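A rough sketch of why cache hits matter for fast NICs; the per-drive throughput figure is an assumed typical sequential number, not a benchmark:

```python
# Back-of-the-envelope: can the HDD pool itself feed a fast NIC, or do
# you need ARC/L2ARC hits? Per-drive throughput is an assumed typical
# sequential figure for large 7200rpm drives.

link_gbit = 25                      # e.g. a used ConnectX-class NIC
link_gb_per_s = link_gbit / 8       # GB/s, ignoring protocol overhead

data_drives = 4                     # 6-wide RAIDZ2 -> 4 data drives
per_drive_gb_s = 0.25               # ~250 MB/s sequential (assumption)

pool_sequential = data_drives * per_drive_gb_s
print(f"Link: ~{link_gb_per_s:.2f} GB/s, pool sequential best case: ~{pool_sequential:.1f} GB/s")
# -> Link: ~3.12 GB/s, pool: ~1.0 GB/s, so sustained 25/40GbE needs cache hits
```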

If you want 10Gbit for your house with that 24 port switch, you have to decide if you want to go RJ45 via copper (Cat6a or higher) or going SFP and fiber. 24x10GBase-T switches ain’t cheap and are more on the power hungry and noisy side.
Or maybe 1Gbit will do for most things, but you want higher bandwidth for a limited amount of devices. This will be much easier and cheaper.


Thank you for the detailed and informative reply, Exard3k. Do you think it may be worth waiting for the Genoa family of EPYC around the corner? Can we expect better thermals, lower power usage, and some decent performance gains at a reasonable price (relative to Milan)?

Zen4 is going the more power-hungry route. We heard rumors of up to 400W TDP for the 96-core flagship SKU, compared to 280W max on Milan. I expect more performance/watt as well as more raw power, but everything at a higher base level. While there are Rome SKUs at 150W, there probably won’t be a Genoa SKU under ~225W. For price/performance and power/noise, people will go for last-gen, because Rome and Milan still have 128 lanes and 8x DDR4 memory channels, which is beyond overkill for a home server even if you fully kit it out. A 16-24 core Rome SKU on a Milan-ready board with 4x32GB RDIMMs will get you a lot.

It’s easy to oversize and overbuy your homeserver. I always advocate to buy cheaper and have room for expansion of drives and slots for future upgrades.

Typically a home server with 8 cores and 32GB of memory for a ZFS NAS and a couple of VMs and containers gets all the jobs done. pfSense router, Plex, Jellyfin, Nextcloud, etc. are usually idle most of the time, and ZFS doesn’t compress 100s of GB of data all the time. So the CPU in most home servers sits idle or at low load most of the time. While desktops mainly benefit from a beefier CPU and GPU, servers like memory, because you can cache more and start and run more VMs and containers.
That Ryzen 5950X you have can be set to a 65W TDP via ECO mode and runs very cool while only sacrificing 10-15% on all-core load. I’m using ECO myself on my 5900X because I squeezed everything into a small form factor where heat is really a concern and I wanted it to be low-power anyway. I had to work hard to bring my 12 power-limited Ryzen cores to their limit without adding artificial workloads like benchmarks, render or scientific compute tasks.

Ryzen 5900X and 5950X (both 105W TDP) have a stock PPT of 142W, so expect this value under the heaviest circumstances. TDP is a more or less useful guideline, but doesn’t really tell you what max consumption is.
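For reference, the 142W figure falls out of AMD’s stock AM4 power limits, where PPT is roughly 1.35x the advertised TDP:

```python
# AM4 stock power limits: PPT (package power tracking) is roughly
# 1.35x the advertised TDP, which is where the 142 W figure comes from.
TDP_W = 105
PPT_W = TDP_W * 1.35
print(f"{PPT_W:.0f} W")   # -> 142 W
```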


I am another vote for an EPYC server (it is what my home lab consists of), and your build sounds like you need to end up in the same place as the system I have. I will go over some pointers and a couple of things I learned along the way.

Don’t virtualize the ZFS host. Yes, it is possible; I actually wrote one of the guides on virtualizing FreeNAS. But dealing with it long term is a huge time sink, as updates on the VM host or the ZFS host can seriously change the performance and reliability of ZFS.

Build a 2U EPYC server for your VM host.
Build a 2U or 4U storage server with old Opteron or Xeon parts that support ECC.

EPYC Milan is fine for a home lab, though you might wait till the next release so you can get a better deal on Milan.

You MIGHT decide that you want to run a GPU for transcoding or desktop passthrough. Decide this NOW; changing a case later on because you want to add it is a pain.

Pass through real NICs to any server that is a domain controller or router.

USE PROXMOX


Exard3k, Thank you so much for your help. I would like to send you the gift card. For some reason I cannot message individuals or find the button, so if you could, send me a PM at your convenience so I can send you the card. Thank you everyone for your help and wish me luck.


I realize this is old so I may be too late here, but on the off chance you haven’t purchased yet: that RAM will only work for the Ryzen board, not the EPYC board. For EPYC, you need registered memory, so for Kingston look for the KSM32Rxx SKUs. (You can also go to Kingston’s memory page and search for “h12ssl”, it will list the SKUs compatible with the board.)

Also, the 7443P is 8-channel, so from a performance (memory bandwidth) perspective more DIMMs is better: consider 4x 16GB instead of 2x 32GB.
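A small sketch of why populated channels matter, assuming DDR4-3200 and one DIMM per channel; actual numbers depend on the DIMMs in the parts list:

```python
# Why more populated channels matter on an 8-channel EPYC board:
# theoretical DDR4 bandwidth scales with the number of channels in use.
def ddr4_bandwidth_gb_s(channels: int, mt_per_s: int = 3200) -> float:
    return channels * mt_per_s * 8 / 1000   # 8 bytes per transfer per channel

print(ddr4_bandwidth_gb_s(2))   # 2 DIMMs ->  51.2 GB/s
print(ddr4_bandwidth_gb_s(4))   # 4 DIMMs -> 102.4 GB/s
print(ddr4_bandwidth_gb_s(8))   # 8 DIMMs -> 204.8 GB/s
```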
