Starting My First FreeNAS Build, Thinking Ryzen. Any Advice?

I have been wanting to build my own NAS for a long time and recently spotted a sale on discounted refurbished hardware. I got my feet wet and bought 8x 4TB 2.5" HDDs. Now when I sleep I dream of electric small-form-factor Mini-ITX builds.

Since the hard drives are refurbished, I plan to set up the ZFS pool to tolerate 2 disk failures, leaving 24TB of usable storage on the NAS.

My overall goal is a small system that can support a few Plex users, store my Linux and Windows home directories, handle media backup/storage, run jails, Docker, and VMs, and have room for future expansion.


Overall build components (please mention the section you are giving advice on in replies, thanks)


Hard Drives

My main concern is early failure with these refurbished drives, so I want to do a proper burn-in test to weed out any that will die young. So far I've started a smartctl long test, and once that is done I will run badblocks on each drive. I also used DBAN to wipe the drives as a stress test, but writing all 4TB takes a while. I'm doing the tests from a Linux USB dongle or boot CD because all 8 SATA ports on my personal rig are occupied by the drives under test. Suggestions for other tools/tests I can run on Linux would be helpful.
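The burn-in steps above can be sketched as one small script. This is a rough outline, not a tested procedure: the drive list is an assumption (list only the drives under test!), and the DRY_RUN guard makes it print the destructive commands instead of running them until you flip it off.

```shell
#!/bin/sh
# Refurb-drive burn-in sketch. DESTRUCTIVE to drive contents when DRY_RUN=0!
DRIVES="/dev/sda /dev/sdb"         # assumption: edit to match YOUR test drives
DRY_RUN="${DRY_RUN:-1}"            # default: only print what would be run

run() {
  if [ "$DRY_RUN" = "1" ]; then echo "$@"; else "$@"; fi
}

for d in $DRIVES; do
  run smartctl -t long "$d"        # start SMART extended self-test (runs on-drive)
done
for d in $DRIVES; do
  run badblocks -wsv -b 4096 "$d"  # destructive 4-pattern write/verify test
done
```

After badblocks finishes, re-check `smartctl -a` on each drive and compare the reallocated/pending sector counts against the pre-test values; growth during burn-in is the red flag.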


Case/Form factor

I was initially inspired by SilverStone's CS280 case because of the small form factor and hot-swappable bays. I care little about how cramped it would be to build in, but I am worried about this case's thermals. It only supports Mini-ITX, and there are not many Mini-ITX options for Ryzen at the moment. With that in mind I am also considering the SilverStone SG11 and SG12, since Micro-ATX would give me some PCIe expansion capability.

A Mini-ITX build would be easier for me if I could use PCIe bifurcation or a SATA port multiplier. It seems bifurcation can work on some boards, but manufacturers don't really support it. A port multiplier sounds great, but the cheap ones are SATA II only and split one port 5 ways, which would bottleneck drives capable of ~130MB/s down to about 50MB/s each. The cheapest SATA III model on Amazon looks like a good price and would leave ~100MB/s of bandwidth per drive with all drives running. (I chose to move in a different direction.)
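As a sanity check on the port-multiplier math above, here is a quick sketch. The 130 MB/s drive figure comes from the post; real-world throughput will land a bit below these ideal numbers because of protocol overhead, which is roughly where the quoted ~50 and ~100 MB/s figures sit.

```python
# Per-drive throughput when N drives share one host port through a multiplier.
SATA2_MBPS = 300  # ~3 Gb/s line rate after 8b/10b encoding
SATA3_MBPS = 600  # ~6 Gb/s line rate after 8b/10b encoding

def per_drive_mbps(link_mbps, n_drives, drive_max=130):
    """A drive gets at most an equal share of the shared link."""
    return min(drive_max, link_mbps / n_drives)

print(per_drive_mbps(SATA2_MBPS, 5))  # 60.0 -> overhead drags this to the ~50 quoted
print(per_drive_mbps(SATA3_MBPS, 5))  # 120.0 -> overhead drags this to the ~100 quoted
```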


Motherboard

This is the hardest choice for me personally, because I would really like a Mini-ITX board but there are very few choices, and finding one with 8 SATA ports just won't happen. I plan to run this as a headless system, so no GPU needed. The problem is that if I grab a Mini-ITX board I'd probably have to use the only PCIe slot for SATA expansion when I might want it for something else later.

I've heard ASRock and Gigabyte X370 boards can do PCIe bifurcation, and I would love to set up something cool with a riser card. A SATA port multiplier might solve this for me, but the AM4 Mini-ITX boards are all out of stock at Newegg and I'd like to see what other companies are planning to release. (Links to unreleased boards appreciated.)

If I step into the world of Micro-ATX, the MSI B350M Mortar seems to check all the boxes for what I need. It would still need a SATA PCIe expansion card or a port multiplier.


CPU


Memory

Note: I'm no longer looking for ECC RAM; I have some quality laptop RAM that should work great in the board I am now looking at.

I was looking to just get the cheapest ECC DDR4 kit on Newegg. I'm planning on using ZFS's deduplication feature, and that requires a whole lotta RAM. My concern is that with 24TB of data I'd need about 28GB of RAM to support it.

First option: grab as much RAM as possible and max out the system at 32GB of ECC. I'm very tentative about that because of how much it hurts my bank account. Second option: get 8GB of ECC and use my Samsung 950 PRO M.2 as a swap drive to prevent a system lockup from thrashing. On my personal rig that has been a great solution for programming with large data sets: with 16GB of RAM and 16GB of extra swap on the NVMe drive, I can work on 32GB of data fully utilizing my 4 cores without the system locking up and crawling to a halt.

If anyone knows whether this would work I'd be really glad to hear it. I was initially planning to use the NVMe drive as a cache, but I'm worried there would be an issue if the cache drive can't also be partitioned to hold swap for when RAM overflows into the page file.


PSU/UPS

Thinking 80+ Gold or better for a modular PSU. If I could grab something cheap from a retired Ethereum miner, that might be the best value. For a UPS I have no clue what works with FreeNAS to shut down or suspend during a power outage. The system does not need to be up 24/7, so something low-power that integrates with the NAS and allows a graceful shutdown is all I need.
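On the UPS question: FreeNAS's built-in UPS service is Network UPS Tools (NUT) under the hood, and most consumer USB units (APC, CyberPower, etc.) work with the generic usbhid-ups driver. As a sketch, the equivalent ups.conf entry looks roughly like this; the name "myups" is made up, and in practice you configure it through the GUI rather than editing this file:

```
# ups.conf sketch for a generic USB UPS (NUT)
[myups]
    driver = usbhid-ups
    port = auto
```

In the FreeNAS GUI this lives under Services > UPS: pick the driver for your model, and the shutdown-mode setting (shut down when the UPS reaches low battery) should give you exactly the graceful shutdown you're after.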


I'm probably speccing a system that will be overkill for my use case. If possible I'd go for something like an ASRock J3455M, then upgrade in the future when I know I need a more powerful system and make an HTPC from the old parts. I'm just worried that system would be underpowered for the here and now.

If you have comments, advice, or warnings that I'm french frying when I should be pizzaing, then thanks for leaving a comment and helping prevent a bad time.

Edits: Added new hardware based on responses.

I would personally set a drive or two aside, or get some spares, in case one of the drives does fail. Rebuilding a drive takes a while, and during the rebuild you're open to further failure and potential loss of data.


Any case should work as long as there is ample airflow over the drives; keeping them cool will help prolong their life. You should not run into many thermal issues with the rest of the system as long as you don't do any extreme overclocking.


Make sure your motherboard supports ECC. ECC is not officially supported on Ryzen, and some boards support ECC memory while others don't. You might have to contact the motherboard manufacturer for more information on their ECC support.


A 1700 will be more than overkill for almost any kind of home NAS, even without any overclocking.


You will need A LOT of RAM for ZFS. 8GB plus 1GB for each TB of storage is the general rule of thumb. I personally wouldn't recommend going for the cheapest kit you can find, because of the whole "you get what you pay for" thing.
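That rule of thumb is easy to sanity-check against the OP's numbers. A quick sketch (it's a guideline for comfortable ARC sizing, not a hard requirement):

```python
def zfs_ram_gb(usable_tb, base_gb=8):
    """8 GB base + 1 GB per TB of usable storage -- the rule of thumb above."""
    return base_gb + usable_tb

print(zfs_ram_gb(24))  # 32 -- the planned 24 TB pool lands right at 32 GB
```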


Any PSU will work as long as it's got enough SATA connectors.

Good luck on the NAS


I'd make multiple alterations.


CPU - Instead of a 1700, I'd snag a 1400. The 1700 is beyond massive overkill. I keep my pfSense instance and FreeNAS setup housed on one box under ESXi, running a Sandy Bridge quad-core Xeon with Hyper-Threading, and it's massively underutilized. If you plan on doing a lot, and I mean a LOT, of simultaneous Plex transcoding, then maybe there's a case for a 1600. But a 1700 is like killing an ant with a handgun.


Memory - ECC is nice to have with FreeNAS, but it's not really do-or-die necessary. Here's a video explaining why, in better terms than I can:

I'd recommend 2x 16GB sticks of memory, and if ECC is massively more expensive, then skip it.


Case - I don't really like the SilverStone NAS-oriented cases. Some of them may be good, but every build I've seen with them has resulted in really hot-running hard drives. Not my favorite. I like the Fractal Design Node 804, since it spaces out the drives a ton more and will result in much lower drive temperatures.


For connecting hard drives, don't get a SATA port multiplier; get an HBA card. Everyone and their mother will tell you to avoid any RAID/HBA card for ZFS at all costs, and in a lot of instances they're right, but there are definite exceptions where it's fine. Personally, I use and recommend the LSI 9211-8i flashed to IT (HBA) mode. It's highly recommended and validated with FreeNAS, and it's fine. You can find one here:

And you can find cables for it here:

I use that exact combination of 9211-8i + SAS breakout cables + FreeNAS and it runs great.


A few Tips from a long time FreeNAS owner/builder:

  • I'd go with Intel, and scale down the horsepower on the CPU. You don't need a 1700. You don't even really need 8 threads most of the time.

  • It'll be running headless 99% of the time, but having an iGPU for when shit really hits the fan is nice. FreeNAS tends not to play nice with AMD GPUs of any kind.

  • While having ECC RAM is nice, the "scrub of death" scenario proposed by armchair sysadmins IS A MYTH. Defective RAM will not write defective data to disk on a scrub; that's not how ZFS or its scrubs work. It's either misguided or FUD, not sure which, but it's not something you need to worry about if ECC is entirely out of your budget (though the money you save by not buying that overpowered 1700 should leave enough on the table for it). I'll say it again just to make sure: ECC is not a prerequisite for ZFS, and ZFS is no less reliable than any other filesystem in the absence of ECC. The scrub of death is an old wives' tale.

  • Get a small SSD for an L2ARC. It takes like 2 commands to set up and gives you a nice boost in performance and data ingestion/serving. You can even partition the install drive if you can't pony up the extra 40 bucks. Adequate RAM + L2ARC is, in a lot of cases, better to have than swap; I'd use the 950 M.2 at least partially for the L2ARC.

  • The ODM matters a lot more than the rating or brand name of the PSU you choose. Check JonnyGuru or another PSU specialist's review before committing to a purchase. Seasonic and Super Flower are generally considered the ODMs to beat.

  • Have a gig of RAM per TB of storage, up to 100GB of L2ARC, and a gig or two on top for the base system.

  • For the love of god, just get an Intel-based NIC if you need more throughput than what comes onboard. It's not worth the trouble to cheap out, and you can pick them up used at a deep discount.

  • Refurb HDDs are a non-starter for me. Not saying they can't be good value, but a lot of online sellers will falsify SMART data and sell drives on their last legs to unwary buyers. Be careful, and expect to lose data if you buy them in volume.

  • Don't use Molex-to-SATA adapters unless you absolutely have to, and even then, only use the hard plastic "clip-on" style ones with thick-gauge wiring.

  • Cheap HBAs can work if you can't afford a proper SAS/SATA controller card; it's a bit luck-of-the-draw on quality control, though. ALSO: DO NOT USE HARDWARE RAID WITH FreeNAS. You want ZFS to have direct access to your drives for redundancy reasons.

  • I know it's bigger, but my favorite (cheap) NAS case to build in is hands down the In-Win Stealth Bomber B2: 7 sanely laid-out hard drive bays with room for another 5-8 via hot-swap adapters, no RGB nonsense, and it's only 40 bucks.
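On the L2ARC bullet above, the "2 commands" are roughly the following. This is a sketch only: the pool name "tank" and the device path are placeholders (FreeBSD/FreeNAS device names differ from Linux), so the commands are left commented out rather than run as-is.

```shell
# Attach a cache (L2ARC) vdev to an existing pool, then verify it shows up.
# Substitute your own pool name and device before running anything.
# zpool add tank cache /dev/nvd0p2   # e.g. second partition of the 950 PRO
# zpool iostat -v tank               # the device should appear under "cache"
```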


Thanks for all the great advice.

I hear everyone loud and clear on the CPU being overkill, on not needing ECC memory (shout out to Mark Fur., he does some great videos), and on the cases I mentioned needing better HDD ventilation.


  • CPU Overkill

Based on the examples given of how stressed the system would have to be to make Ryzen worth it, even the 1400 would be overkill for my needs. I have taken some time to rethink the build, and I'm going to go with the ASRock J4205-ITX.

Right now I only have maybe a terabyte to store, so I think setting up RAID 10 with 4 drives, with 1 or 2 more on hand as failovers, would be best. It gives the CPU less work, since it is no longer computing parity, and should hopefully cut down on drive rebuild time. I'll have 8TB of space to slowly fill, and I can easily max out the board's RAM at 16GB.
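For reference, the capacity math behind that switch can be sketched like this, using the drive counts from the thread:

```python
DRIVE_TB = 4  # all drives in this build are 4 TB

def mirror_usable_tb(n_drives, size_tb=DRIVE_TB):
    """Striped mirrors ("RAID 10"): half the drives hold copies."""
    return (n_drives // 2) * size_tb

def raidz2_usable_tb(n_drives, size_tb=DRIVE_TB):
    """RAID-Z2: two drives' worth of space goes to parity."""
    return (n_drives - 2) * size_tb

print(mirror_usable_tb(4))   # 8  -- the new 4-drive plan
print(raidz2_usable_tb(8))   # 24 -- the original 8-drive plan
```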

I'd like to get an adapter that turns the board's M.2 E-key socket into a PCIe slot, since I don't need the Wi-Fi. The Googles tell me it is possible, but I have not found a direct M.2 E-key > PCIe x2 adapter, only chains like M.2 E-key > mPCIe > PCIe x2.


Thanks for the case input, but I am really set on it being very compact. This case might be smaller than all the SilverStone cases I had looked at, but it is much better at keeping a NAS cool, with a 140mm fan blowing on the drives.

@tkoham recommended not using Molex adapters or anything with small-gauge wire. The hot-swap bay PCB seems to just use a trace from a Molex connector to the SATA drives. It would probably be more reliable to unscrew that board and plug each drive in directly.


  • Drives

After testing the drives for a few days, I found more flaky drives than I had hoped. If the retailer runs out of replacement drives, I'm hoping to get reimbursed rather than deal with a restocking fee or get sent more bad drives. If I mirror the refurbished drives with other 4TB drives when setting up the RAID 10 config, that should give me some protection.


This has been really valuable, if only to help me get over the sentiment that I want to build something cool. The system should now be more in line with the power consumption, necessary performance, and overall price for what I actually need, especially since I have some unexpected car maintenance this week.


Note: the 1GB-per-TB rule of thumb refers to effective (usable) storage, not raw.

e.g. an 8TB pool built from 16TB of raw drives (the rest going to redundancy) only needs 8GB of RAM.

Also, as long as the hot-swap board isn't dead cheap, it's probably fine. The reason people recommend against Molex-to-SATA adapters isn't that they're fundamentally flawed in some way; it's that there are race-to-the-bottom versions that use overmoulding processes that melt the insulation, and wire too thin for anything but data.

The power planes on that PCB have a much lower risk of shorting to each other or catching fire.

Well aware, but that would leave no RAM for the OS. In a FreeNAS video Wendell suggested keeping 4GB available for the OS, and that is the rule of thumb I was going by.

Cool. I know some people get confused by that definition, just thought I'd throw it out there.
