128GB ECC build, TR upgrade, and HDD/ZFS wonderings :/

Hello everyone,

Some years ago, I bought a Synology with five 2TB WD Green HDDs.
The purpose was to store personal files and movies.

Then time passed, and things became more serious.
I needed a webserver providing services to the outside world: community forums/webservers/audio/Nextcloud…
I felt the need for something more powerful, but I also wanted the fun of building/managing my own hardware/system.

In 2016 I bought a PC, planning to use FreeNAS 10 as the system.
I followed the recommended specs to build the PC.
One of them was 64GB of ECC memory, since it would be running ZFS.

At that time FreeNAS 10 did not come out on schedule, and rather than going with the aging version 9, I decided to run Ubuntu LTS with OpenZFS.

Current hardware configuration:
CPU: E3-1275 v6, 3.80 GHz
MB: Gigabyte GA-X170-WS ECC
RAM: 64GB Crucial ECC
System SSD: Samsung 950 Pro
External JBOD enclosure: Areca ARC-4038-8
HDD: 8x HDD (mix of WD 4TB and Toshiba 6TB)

1st point, which is purely a want and not a need:
I'd like to renew my config a bit…
I'm tempted by a Threadripper, and this means renewing the MB and RAM as well.
I don't want things to get overly expensive.

2nd point, HDD wondering:
Since the beginning I've had a bad feeling about WD hard drives…
I replaced all the Green ones with Reds (more suitable for NAS), but they still die one after the other…
Meaning in 4+ years, I've replaced about 8 hard drives!!
Now I'm replacing them with Toshiba 6TB drives and have had no issues yet.
Did anyone have the same bad experience with WD SATA HDDs??

3rd point, ZFS impact/needs:
One of the reasons I also want to upgrade my config is that I'm thinking of getting more serious about cloud things… maybe trying a local Cloud Foundry… and/or simply running more VMs to try different things…
That being said… since the last recommendation I read from FreeNAS was to use a minimum of 64GB because of ZFS itself…
I SUPPOSE I will need to go well beyond that, and reach at least 128GB…
What's your opinion on this?

The last point is maybe linked to the 2nd one…
I was wondering whether the shortened life of my WD HDDs was caused by ZFS constantly committing to them…
Is there a way to ask ZFS to put the HDDs to sleep when not in use? I don't see anything like this in the ZFS properties…
Then again, maybe it would be useless, since my servers/server logs are stored on zpools… I guess they can never really be in sleep mode…

Here we are :)
So, if I want to run 128GB ECC, is there a proper, cheap TR configuration you can recommend??
A 1st-gen TR 1900X (8 cores) is very cheap, and I'd double my core/thread count this way.
Can it run 128GB ECC, and if yes, which memory should I take? (Speed doesn't matter; I'm for stability first.)
And which motherboard?
I thought about the ASRock X399 Taichi, just because I like these Taichi boards in general… but if you have better recommendations…

Thanks a lot for your help !
Best Regards
Chun.

By posting here, I was trying to find ZFS recommendations… and couldn't find that 64GB minimum spec again…
Here I can read that the rule is 1GB per TB of storage.

So I might still wait a bit before upgrading… but let's see how cheap an upgrade can be, following your advice…

Does anyone have a TR1 config running 128GB of ECC memory?

FreeNAS 11+ requires 8GB to install (checked during installation). The 1GB-per-1TB-of-HDD rule only applies if you're enabling deduplication (btw, don't. Just don't; there are way too many issues with it. LZ4 works well enough). The other RAM requirement you'll run into is for an L2ARC cache, and if you're thinking of implementing that, max out your RAM first.
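For reference, setting that up is a one-liner; a minimal sketch, assuming a hypothetical pool named tank:

    zfs set compression=lz4 tank      # enable LZ4; child datasets inherit it
    zfs get compression,dedup tank    # confirm compression is on and dedup stayed off (the default)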


Also, for hosting forums/webservers, if that's what you really want, I'd recommend two different pools. Web/forum services, depending on traffic, should probably run on a RAID 0/1/10 array to better serve requests, while the cloud storage can sit on spinning rust.
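A sketch of what that two-pool layout might look like, with hypothetical pool and device names:

    # Fast pool: striped mirrors (RAID 10 equivalent) for the web/forum services
    zpool create fastpool mirror sda sdb mirror sdc sdd

    # Bulk pool: RAIDZ2 across the spinning rust for the cloud storage
    zpool create bulkpool raidz2 sde sdf sdg sdh sdi sdj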

I’d start with 32 or 16 gigs, and buy more if you need more, unless it’s cheap where you are.

From iXsystems: in addition to 8GB of RAM for the system, add another 4GB for VMs:

“8GB of RAM will get you through the 24TB range. Beyond that 16GB is a safer minimum, and once you get past 100TB of storage, 32GB is recommended. However, that’s just to satisfy the stability side of things.”

Check out this page from iXsystems, about four paragraphs down:

https://www.ixsystems.com/blog/a-complete-guide-to-freenas-hardware-design-part-i-purpose-and-best-practices/


There’s a lot of FUDD floating around when it comes to ZFS, much of which is only correct in very special circumstances. This is due to difference in needs between some guy with a desktop and enterprise level workloads with unique use cases.

I’ve seen ZFS seemingly happy down to 2gb of ram, though I’ve also seen reported issues though the circumstances in them are not clear or something I remember. I’d personally consider 4gb the “minimum” for having it run without problems with a small pool, but don’t expect much performance benefit from ARC. I ran a ~20TB NAS with ZFS on only 8gb for years, though I wasn’t doing VM’s.
DON’T EVER enable deduplication. Enable compression (lz4)

As far as how much you'll actually need for a good time with lots of VMs, I couldn't tell you, as my use case (storing data which doesn't change much) is very different. You'll have to search around for homelabber posts to see how much they consume doing similar things. I personally wouldn't go under 64GB for this plan, and that's because of the VMs, not ZFS, which probably wouldn't benefit from more than 16GB; you can also cap the ARC explicitly, as shown after the list below. Some people say that "unused RAM is wasted RAM", but some people are also full of it, so give yourself plenty of leeway, otherwise you could burn money pointlessly. It's probably a good idea to only use the highest-capacity sticks available; even if you aren't populating all the channels initially, the performance loss isn't a big deal for non-gaming systems.
It’s probably a matter of:

  1. Do some testing to figure out what you need
  2. Fucking double it.
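A sketch of how to check what the ARC is actually using and how to cap it, assuming OpenZFS on Linux (paths and tool names may differ on FreeBSD/FreeNAS, where older releases ship this as arc_summary.py):

    # Inspect current ARC size, target max, and hit/miss counters
    arc_summary | head -n 40
    grep -E '^(size|c_max|hits|misses) ' /proc/spl/kstat/zfs/arcstats

    # Cap the ARC at 16 GiB (value in bytes) so the rest of the RAM stays free for VMs
    echo 17179869184 > /sys/module/zfs/parameters/zfs_arc_max             # immediate, until reboot
    echo "options zfs zfs_arc_max=17179869184" > /etc/modprobe.d/zfs.conf # persistent across reboots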

For hard drives I shuck 10TB WD Elements (which drop to about $160 every so often), and have some HGST and shucked Easystores mixed in. I also have some older 8TB drives, HGST and WD Elements, now in a backup array. Not one has failed yet. The big thing here is that these high-capacity drives are, for all practical purposes, coming from the same "faucet" as the enterprise stuff, not unlike what AMD is doing with their Ryzen chips. There might be binning or slight differences, but the core of these drives is the same, which in practice has yielded very similar reliability. I have zero qualms about 8-10TB disks right now, though I would be suspicious of 4TB and lower.

I know people try to squeeze IOPS out of spinning rust by RAIDing a bunch of lower-capacity, cheaper drives, but even that is still orders of magnitude worse than an appropriate SSD (one meant for sustained performance, so NOT a Samsung EVO).

For this use case, L2ARC is worth looking into.
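Keeping in mind the caveat from earlier in the thread (L2ARC consumes ARC RAM for its index, so max out memory first), adding one is simple; a sketch with hypothetical pool and device names:

    # Attach an SSD partition as L2ARC to an existing pool
    zpool add tank cache nvme0n1p1

    # Watch per-device activity, including the cache device, every 5 seconds
    zpool iostat -v tank 5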


I’ve run FreeNAS with 2-10 GB of RAM since 2012 as my home NAS without issues. I noticed no performance difference to speak of in a single/small number of users environment.

I’ve also run varying capacities in my work lab environment as a poor-man’s NFS VM datastore.

You do not need 1 GB per TB as above.

I’d suggest buying 8-16 GB for ZFS (FreeNAS will complain without 8 GB) and then work out how much you want for VMs and add that to your RAM capacity.

That said, more RAM will make ZFS go faster, but there's a point of diminishing returns. In an environment with a small number of users (e.g., home), you simply can't get as much benefit from caching, as you typically aren't hitting the same data all the time; and unless you're really lucky, you're running 1GbE at home anyway, and a pair of hard drives will saturate that pretty easily unless you're smashing them with a lot of random IO.

So sure… in an enterprise environment, where you have a lot of users hitting the box, RAM cache will help a lot. Especially if you have 10, 40 or 100 gig networking to the box. But in a small home lab… not so much, normally. Especially if you have only 1 GbE. Chances are, in a small home environment you will be network bandwidth limited.

So: start small with RAM. Test, then identify the bottleneck. Don't just spend a small fortune on ECC RAM because the old "1GB per TB!!!" rule of thumb exists; that rule is there for people running deduplication, which should be limited to extreme edge cases, if used at all these days.

The BIGGEST performance issues you may run into with ZFS for a VM store are (I'd recommend you google these):

  • making sure you set your pool to ashift=12 (i.e., 4K sectors) to match modern drives (otherwise, each write of a virtual 512-byte block on a 4K-native drive turns into a read/modify/write at the drive firmware level). Check this early, as changing the value means destroying and rebuilding the pool; see the sketch after this list.
  • making sure you turn off access-time tracking on the pool/datasets, so that each read does not generate a write to track access time. This one will murder VM performance.
  • picking a stupid VDEV configuration. Read up on VDEVs and the performance impact(s) of RAIDZ-X vs. mirrors, and understand how striping across VDEVs works at the pool level. Again, if you fuck this up, it is pool destroy-and-rebuild time.
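A sketch covering the first two points, with hypothetical pool and device names:

    # ashift is fixed per vdev at creation time; set it explicitly
    zpool create -o ashift=12 tank mirror sda sdb mirror sdc sdd

    # Verify the value the vdevs actually got
    zpool get ashift tank        # recent OpenZFS
    zdb -C tank | grep ashift    # older releases

    # Stop every read from generating an access-time write
    zfs set atime=off tank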

Hello and thank you everyone for your feedback.

So yeah, as I thought after posting my first reply, it seems I should be perfectly fine with 64GB, and if I can afford it I'll enjoy 128GB, but it's no longer a priority.

I knew about the issue regarding L2ARC, and I'm not using it; and yes, I have a specific pool for each workload.

Now, maybe the most important thing for me is the following…
Can anyone provide me with brands/part numbers of:

  • a motherboard
  • 128GB of ECC memory

that work properly together with TR1??

Or, for better compatibility, even if I have to pay more, shall I wait for TR3?


I built my own NAS in 2012 using Fedora Linux and Btrfs. I used two 2TB WD Red drives.

I had to replace both of them by 2016. I have since upgraded everything to bigger drives, and now I'm running four 6TB WD Reds and two 4TB Toshibas.

None of my 6TB Reds have died yet or had any errors.

One thing that can kill hard drives quickly is heat. If you lost 8 drives in 4 years, something is really wrong. Make sure you have airflow over those drives.
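If you want to rule heat out, smartmontools makes it easy to spot-check; a sketch, assuming drives at hypothetical /dev/sda through /dev/sdh (run as root; the SMART attribute name varies by vendor, Temperature_Celsius being the common one):

    for d in /dev/sd{a..h}; do
        printf '%s: ' "$d"
        smartctl -A "$d" | awk '/Temperature_Celsius/ {print $10 " C"}'
    done

As a rough rule of thumb, temperatures that sit well above 40C under load are a sign the airflow needs work.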
