First time server, long time storage, halp?


Been a long time since I last posted on here, though the topic is basically a rehash with (hopefully) some improvements made that I’d like to get some opinions on.
You’ve probably seen this topic a lot, but I’m looking to build a home media/Plex server with decent storage potential and upgradability should it be needed. (Side note: apologies in advance for the wall of text below)
Few things to run down on this build, so might as well start from the top:

  • So far Ryzen is my CPU of choice, as it looks to offer a good upgrade path and is powerful enough for some transcoding and general use.
  • Motherboard was picked because it supports ECC and has an Intel NIC, along with enough SATA ports for expansion later on
  • ECC RAM, while maybe not necessary, is something I would prefer if possible (and with current prices, doesn’t seem that much more expensive)
  • SSD chosen as the main OS drive and L2ARC for speed (could split that, but not concerned for the moment, and trying to minimise cost)
  • IronWolf drives seem somewhat reliable and the capacity would hopefully suit my needs for a while to come (thinking of just adding another VDEV in the future if needed)
  • GPU is just one I have already, doing nothing in my spare build
  • Currently rocking a Define R5 and would very much like to have the successor for acoustics and ease of use
  • Power supply was the cheapest fully modular 80 Plus-rated unit from a brand I trust
  • 10G NIC will be a future expansion, not entirely necessary but nice to have should I set up a Steam cache or be moving huge amounts of data

For this, I plan to run a RAIDZ array using ZFS on Ubuntu, as I’m barely familiar with Linux at this time and would prefer an easy-to-use OS for now.
I’d like to hear any ideas and suggestions you might have on this build, though I do have some questions as well:

  • Does the rule of “1GB RAM per 1TB storage” actually apply, and if so, does that mean “usable space” or “raw disk space”?
  • Are features like dedup and compression necessary within ZFS, and how much can they impact performance/usability?
  • Could an add-in NIC be used as the main connection, and would this have any impact on use/performance?
  • Would running an NVMe drive allow for use of all SATA ports, or still disable some?

It’ll probably be a few months before this is built and set up (possibly done gradually), but I’d still like to get some external thoughts to hopefully refine the build before I start on it.


Edit: Just finished watching Wendel’s server update, and it seems like my first question may have been answered, but I’d still like some thoughts on it.

I’d go with FreeNAS.

The rule is for usable space, but for heavy sequential stuff like media files, you don’t necessarily need 1:1. However, L2ARC weirdly requires some extra memory to be effective (the headers for every cached block live in RAM). I have been told not to bother with L2ARC unless you have 64GB of RAM. That was from the FreeNAS forum though, so grain of salt there.
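If you want to see how the rule plays out on a live box, ZFS on Linux exposes the ARC counters through kstats in /proc; a quick sketch (assuming the standard ZoL kstat interface — the pretty-printer tool name varies a bit between package versions):

```shell
# Current ARC size and its ceiling, in bytes, straight from the kstats.
awk '$1 == "size" || $1 == "c_max" { print $1, $3 }' /proc/spl/kstat/zfs/arcstats

# Most ZoL packages also ship arc_summary for a readable report, which
# includes L2ARC header overhead once a cache device is attached.
arc_summary | head -n 40
```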

I’d get either a very inexpensive small SSD for the OS (16GB should be enough) or just use a USB stick. It’s not 100% required, but it’s a little nicer to have them separate, and it will be less complicated to configure.

You should use LZ4 compression. It will have no meaningful performance impact. Definitely do not enable dedup.

It shouldn’t; just make sure the NIC is well supported by the OS.

I recommend using LZ4 compression and not using deduplication. Dedup hurts throughput and only provides a benefit if you’re likely to have a lot of identical data being written to the pool. Compression (LZ4 especially) can actually increase your throughput, because fewer bytes hit the disks, which effectively increases the amount of bandwidth the pool has available.
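For what it’s worth, applying it is a one-liner once the pool exists (the pool name “tank” here is just a placeholder):

```shell
# Enable LZ4 at the pool root; child datasets inherit the setting.
zfs set compression=lz4 tank

# Only data written from now on is compressed; check how well it's
# working later via the compressratio property.
zfs get compression,compressratio tank

# Dedup is off by default -- shown only so you know where the knob is.
zfs get dedup tank
```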

Yes, it can. No problem with that, in fact, I recommend it! As long as it’s on the PCIe bus, you don’t need to worry about throughput. It’s USB that has performance issues.

This rule is a bit confusing. It’s a good rule of thumb, but in my implementations, I’ve never seen it get even close, except in instances where you’re using deduplication. I used to run ZFS on my 32TB NAS with 16GB of RAM. When I migrated (to BTRFS), I had something like 4TB available on it. I was using about 12GB of RAM, running about 10 Docker containers, which accounted for approximately 40% of that memory usage.
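The reason dedup is the exception: ZFS keeps a dedup-table entry for every unique block, commonly estimated at around 320 bytes each. A back-of-envelope sketch, with hypothetical numbers (4 TiB of unique data at the default 128 KiB recordsize):

```shell
data_bytes=$((4 * 1024 * 1024 * 1024 * 1024))  # 4 TiB of unique data (hypothetical)
record_bytes=$((128 * 1024))                   # default ZFS recordsize
entries=$((data_bytes / record_bytes))         # one DDT entry per block
ddt_mib=$((entries * 320 / 1024 / 1024))       # ~320 bytes per entry (rule of thumb)
echo "DDT entries: $entries"                   # 33554432
echo "DDT RAM: ${ddt_mib} MiB"                 # 10240 MiB, i.e. ~10 GiB just for the table
```

Once that table no longer fits in RAM, every write has to go look it up on disk, which is where the throughput falls off a cliff.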

ECC RAM can’t hurt. I’ve never encountered a problem without it, so don’t go around assuming you need ECC, but it’s definitely a good safety net.

One thing I noticed about L2ARC on an SSD is that it’s going to absolutely TRASH the SSD. Wendell touched on it in his recent video. You should really go back and watch that part. Basically, you want an enterprise-grade SSD that’s rated for hundreds of TB of writes per year (maybe even PB per year), because the L2ARC is rebuilt every time you reboot, and on top of that, it gets writes all the time. I would strongly recommend against sharing your OS drive with your L2ARC. I made that mistake on an 850 Pro and it was topping out at 80MB/s write (with dd) by the end of the year.

You might want to look at OpenMediaVault. It’s basically a web-based frontend for Debian that you can use to do pretty much everything. The v4.0 beta supports ZFSOnLinux through a plugin (it’s pretty seamless). I find the functionality in FreeNAS to be lacking for my needs, but OMV is much more fully featured.

I recall him mentioning that, but I figured that applied more to his kind of heavy workload. I’ll rewatch it for good measure.
If that’s the case, then would something like this be useful as a dedicated L2ARC drive?
I’d hoped to leave as many SATA ports as possible for the HDDs, but then I can always add in a PCIe expansion card at some point.

OMV looks pretty good, and a web interface would certainly be very useful, thanks for the suggestion.

Good to hear, I was thinking of saving some money with a B350 motherboard and an Intel PCIe NIC, but would probably sacrifice some ports to do so. (The expansion card comes into play again, though.)

Excellent, I hopefully shouldn’t have many duplicates anyway (and they would be pretty small to begin with). I’ll look into LZ4 and see how to apply it.

That’s reassuring, it cuts a significant amount off the starting cost if that’s the case. I’ll probably just use 8GB for now unless it starts having issues.

Other than that, do you think the hardware selection should be good? I’ve seen a lot of small Ryzen servers on other forums and wanted to be sure this one will work fine.

If you’re planning to run Plex on it, I would recommend the 1600/1600X. You’ll get a lot of use out of the additional threads. I’ve got an E3-1230v2 and it’s not quite enough for high-bitrate transcodes.


Are you planning RAIDZ-1? I don’t recommend RAIDZ-1 on anything 4TB or larger because of the increased chance of losing a second disk during the rebuild.

If you can find a PCIe SSD, you might benefit from using that. Won’t take up a SATA port, side benefit of having more bandwidth.

I know at certain points FreeNAS specifically has not loved AMD, but I’m not sure of the current state, so if somebody else wants to clarify this, that’d be great.

Those hard drives are good and have been well received, but I’d recommend not going with RAIDZ for drives that large. When a drive fails out of a RAIDZ, the remaining drives are heavily stressed during the rebuild, and with drives that large, the rebuild is prolonged by the sheer amount of data. That extra time raises the chances that a second drive dies during the rebuild, making the array unrecoverable.
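To put a rough number on that risk: with the commonly quoted consumer URE rate of 1 error per 10^14 bits read, you can estimate the expected read errors during a resilver. Hypothetical numbers below (two surviving 4 TB disks read in full):

```shell
awk 'BEGIN {
  bytes = 2 * 4e12      # data read from the two surviving disks during rebuild
  ure   = 1e-14         # consumer-class URE rate, errors per bit read
  printf "expected UREs during rebuild: %.2f\n", bytes * 8 * ure
}'
# prints 0.64 -- a very real chance of hitting at least one error mid-rebuild
```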

To avoid this, I’d recommend adding a 4th drive and opting for a striped mirrored VDEVs configuration, which is just ZFS speak for RAID 10. This allows for a maximum of 2 drives to fail, and offers higher throughput than configurations such as RAIDZ or RAIDZ2.
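For reference, the layout being described would be created like this (pool and device names are placeholders; real setups should use stable /dev/disk/by-id paths):

```shell
# Two mirrored pairs striped together -- ZFS's equivalent of RAID 10.
zpool create tank \
  mirror ata-disk1 ata-disk2 \
  mirror ata-disk3 ata-disk4

# Verify the vdev layout: two mirror vdevs, reads/writes striped across both.
zpool status tank
```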

Using a striped mirrored configuration would also allow you to potentially ditch the M.2, as your throughput would likely be higher than the 3-drive RAIDZ. I would personally recommend skipping the M.2 and installing FreeNAS to a good USB 3.0 flash drive, which would keep your SATA ports free while also giving you the option to add an M.2 cache down the line. For a gigabit connection, I think you’ll be hard pressed to really feel a tangible increase with the M.2 cache unless you’re, say, running VMs and using the array for VM storage.

You can look at the motherboard manual to find out more about how PCIe is split on this board, but if you don’t want to mess with it, just get a PCIe x4 to M.2 adapter and throw the M.2 in the second PCIe x16 slot on that motherboard. The upper two x16 slots on your board share PCIe lanes, and running both of them at x8 provides more than enough bandwidth for both your graphics card and that M.2, if you decide to keep it in there.

It has Ryzen support. Shouldn’t be a problem.

It allows for 1 drive per mirror to fail. Important distinction.

I will counter your recommendation by saying that RAID is not a substitute for backups, but if you don’t want to restore from backups, you can use RAIDZ-2 or RAIDZ-3.

Typically, the NVMe slots don’t disable the SATA ports anymore. That was a Skylake-era thing that’s gone away for the most part. PCPP should have a warning if the board has that “feature”.

One more thing: I’m not experienced with Ryzen and ECC. There are warnings on PCPP about the 1200 not supporting ECC. Can someone confirm that the 1200 supports ECC?

Certain Ryzen boards and Coffee Lake / Kaby Lake boards do actually still disable some SATA ports depending on your M.2 arrangement, but it depends on the board. Typically this is only an issue on boards with two M.2 slots, where one of them partially shares bandwidth with SATA. But it’s not unheard of for this to be an issue on single-M.2 boards as well.

For the X370 Pro the OP selected, there’s no mention of SATA being disabled because of M.2 usage.

Although depending on temps, the OP may still want to move the M.2 down to that second PCIe slot to avoid heat being radiated off the back of the GPU, which would typically be sitting directly below the M.2 on the X370 Pro. Alternatively, he could move the GPU to the second slot to avoid this issue.

All of Ryzen has ECC support unofficially; as in, the entire lineup physically has the hardware on package, but AMD does not validate compatibility, because it’s a mainstream-targeted product and that’s a prosumer+ feature.


Makes sense. I know I’ve seen it working, but you can never be too sure. I wouldn’t put it past certain companies to deliberately disable ECC in their UEFI on lower-tier boards.

That’s a big “depending”, IMO. I’m using M.2 drives under and above my GPU. The one that’s under the heatsink gets hot air blown on it, and when they’re working hard, it actually stays cooler than the one that doesn’t have hot air blowing on it, because of the improved airflow over it.


Yup, it just depends on the case setup and components, I guess. I had my Plextor M.2 under my 980 Ti for a while and it was roasting, because that card uses too much power :sweat_smile:


Yeah, it really depends on a lot. I’ve got mine under a 580 in a Meshify C. Having a lower-powered GPU and a lot of airflow can make all the difference in the world.
