Brass tacks, L1 forum… how important is the 1GB:1TB NAS ratio *really*?

I recently upgraded my CPU from an old i5-6500 I salvaged from cast-off office e-waste during the Windows 11 upgrade frenzy. Currently paired with that CPU is a single stick of 8GB 2133 DDR4 populating one of my two DIMM slots, and a 128GB SSD. The 1TB hard drive it had is being retired to an external enclosure for infrequent, I-really-just-need-4-copies-of-this-file backups.

And what am I to do with a retired computer but make a NAS, of course? I’ve got TrueNAS Core installed on the machine, but haven’t pulled the trigger on any hard drives yet. There are 3 hard drive bays I have access to, and 3 empty SATA ports on my motherboard. 3x6TB Western Digital Red refurbished drives are looking pretty appealing.

Running 18TB of storage (12TB usable) on a 1x8GB machine raises some red flags in my head, but I’ve never had a storage server before. I would like to run Tailscale or WireGuard on the NAS, but my chief concern is just hosting media on my local network. It’ll just be accessed from my desktop and phone.

My query: am I overthinking this? I’ve heard TrueNAS can be memory hungry. Is it worth the investment to go to 16GB, or even more? Would my money be better spent on a ZFS metadata special device, or a pair of 2.5GbE network cards so the NAS can connect directly to my new PC?

I’d probably go with 16GB, but don’t worry about the whole gigabyte-per-terabyte thing.

Generally speaking, if you’ve got more memory, ZFS will use more of it for the ARC, but unless you really go nuts with transcoding or something, 16GB will probably be fine.
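For what it’s worth, TrueNAS Core is FreeBSD underneath, so if you ever want to see how big the ARC has grown, or cap it to leave headroom for other services, here’s a minimal sketch (Linux/SCALE uses module parameters instead):

```
# Current ARC size vs. the cap it's allowed to grow to (FreeBSD / TrueNAS Core)
sysctl kstat.zfs.misc.arcstats.size   # bytes the ARC is using right now
sysctl vfs.zfs.arc_max                # upper limit; 0 means "auto"

# Cap the ARC at 4 GiB (value in bytes) if you want guaranteed headroom
sysctl vfs.zfs.arc_max=4294967296
```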

And stay away from those SMR drives!

6 Likes

Everyone, especially these days, will tell you that the 1GB per 1TB thing is not a strict rule. For your purposes… I would say get the 16GB of RAM and skip the ZFS special device. The reason ZFS is so “memory hungry” is that it’ll literally use as much memory as you throw at it. Also, more memory will provide a better user experience. It’s just my $0.02, so do your research and determine what is best for you.

3 Likes

People are probably just confused about how Linux and ZFS work, thinking that because memory is used, it’s “memory hungry”.
It would probably perform fine even with just 8GB. And you know what? You can always try it out, and if it’s really, unreasonably slow, get more RAM.
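To put numbers behind the “try it out” approach: OpenZFS ships an arcstat tool, so you can watch the hit rate under your real workload before spending anything. Rough sketch:

```
# Print ARC reads, hit/miss percentages, and ARC size every 5 seconds.
# Consistently high miss rates under normal use = more RAM would help.
# Hit rates in the high 90s = an upgrade would buy you very little.
arcstat 5
```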

1 Like

In addition to what has already been said, it is important to understand exactly why ZFS can be ‘memory hungry’.

Simply put, the more RAM you have, the more can be used for the ARC. You want as big an ARC as you can get because it lets your system cache more of the data read from disk in RAM. If data stored in memory is requested again, your NAS can serve it from RAM rather than having to read it from a disk, with its drastically lower throughput and drastically higher latency. The longer your host is up, the more the ARC algorithm can tune itself to keep the frequently accessed data cached in RAM. And in the case of HDDs, reads from memory instead of from disk mean less mechanical work for the drives, which adds up over time and can increase their lifespan because they simply don’t have to do as much.
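You can put a number on this on a running box. As a sketch, using the stock OpenZFS kstat counters as exposed on FreeBSD (which is what TrueNAS Core runs on):

```
# Lifetime ARC hit ratio since boot: hits / (hits + misses), as a percentage
hits=$(sysctl -n kstat.zfs.misc.arcstats.hits)
misses=$(sysctl -n kstat.zfs.misc.arcstats.misses)
echo "scale=2; 100 * $hits / ($hits + $misses)" | bc
```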

Personally, I populate my servers running ZFS with as much RAM as they can take because you always benefit from more ARC space. I always recommend others do the same, at least as far as they can afford to. It will work fine with less as long as you meet the OS’s minimum requirements, but if you can afford the RAM and the slight increase in power usage, why not realize the benefits of a bigger ARC?

1 Like

It’s tacks, brass tacks.

https://en.wiktionary.org/wiki/get_down_to_brass_tacks

1 Like

It depends on the purpose. If you’re just using it for storage, maybe add an L2ARC or a dedicated SLOG. You absolutely don’t need that amount of RAM, especially if you’re not running dedup.
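For anyone unfamiliar, both of those are just extra vdevs bolted onto an existing pool. A sketch, assuming a pool named tank and a spare SSD showing up as ada4 (placeholder names):

```
# Add the SSD as L2ARC (read cache); no redundancy needed, losing it is harmless
zpool add tank cache ada4

# Or add it as a dedicated SLOG (separate ZIL device); this only helps
# synchronous writes (NFS, databases), not bulk media streaming
zpool add tank log ada4

# Either can be removed later if it turns out not to help
zpool remove tank ada4
```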

ECC is nice to have

1 Like

huh, TIL!

1 GB per TB is ONLY FOR RUNNING INLINE DEDUPLICATION

It’s a rule of thumb so that you have enough RAM to hold the deduplication hash table for every unique block in the FS in RAM. Otherwise IO performance falls off a cliff, because every write needs to look up the deduplication hash table.

If you are not running deduplication, then you do not need anywhere near that amount of RAM. In fact, if you are not doing deduplication, these days ZFS isn’t really any more memory hungry than any modern FS that caches things in RAM. Well… slightly more, but 1-2 GB for ZFS without deduplication is plenty.
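Back-of-the-envelope for the OP’s build, using the commonly quoted ~320 bytes of RAM per unique block in the dedup table (my assumption, not a hard spec):

```
# 12 TB usable at the default 128K recordsize:
#   12e12 / 131072      ~= 92 million unique blocks
#   92e6 * 320 bytes    ~= 29 GB of RAM just to keep the DDT resident
# Smaller average block sizes make this dramatically worse.

# You can also ask ZFS to simulate dedup on an existing pool before enabling it:
zdb -S tank   # prints a simulated DDT histogram and the dedup ratio you'd get
```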

As always, if you have many network clients, more RAM = ZFS goes faster. But it isn’t a hard requirement.

4 Likes

TrueNAS itself requires 8 or 16GB of RAM for support purposes. If you have less, they’ll just ignore you. In reality, it shouldn’t use that much memory. I haven’t used it in ages (since the FreeNAS days), but it should be fine.

My NAS running FreeBSD has a 10TB (usable) HDD pool and a 2TB (usable) flash pool. It runs on a RockPro64 with 4GB of RAM. My backup server is an ODROID HC4 with 4GB of RAM and a 20TB HDD pool.

I wouldn’t run ZFS on 2GB of RAM; I’d keep at least 4GB. The size of the pool and the amount of RAM don’t need to scale together, unless you go for a high-performance NAS configuration (enterprise stuff). The 1GB per TB is for dedup. Without dedup, going with 32 or 64GB of RAM can be beneficial depending on the number of users (20+ accessing the same stuff), but again, you can run it with less even for many users; the performance will just be that of the actual storage medium.

No.

Maybe. But for a single user (arguably even more users, like 5-10), you’re going to be served well by default configs, as long as you configure your pool OK (mostly applicable to RAID-Z configs; if you go mirrors or striped mirrors, it doesn’t matter; see the sketch at the end of this post).

Nah, you can use gigabit and a switch anyway. But a 2.5GbE switch would be nice (like the MikroTik crs310-8g-2s-in I have, or the Zyxel XGS1210-12). The idea of a NAS is that it should be accessible network-wide (DAS sucks).
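On the pool layout point above, a sketch of the two usual shapes (placeholder disk names; three disks naturally fit RAID-Z1, while mirrors want pairs):

```
# RAID-Z1 across three 6TB disks: ~12TB usable, survives any one disk failure
zpool create tank raidz1 ada1 ada2 ada3

# Striped mirrors need an even disk count, e.g. four disks in two mirrored
# pairs: faster and easier to grow, but only half the raw capacity is usable
zpool create tank mirror ada1 ada2 mirror ada3 ada4
```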

2 Likes

It all boils down to use case. If it’s “just” going to serve as a NAS, you’ll likely be fine with a setup like a RockPro64, which tops out at 4GB depending on which board you get.

Looking at current stats on my “server” (FreeBSD 14):

ARC Size:                               98.84%  7.91    GiB
        Target Size: (Adaptive)         100.00% 8.00    GiB
        Min Size (Hard Limit):          49.65%  3.97    GiB
        Max Size (High Water):          2:1     8.00    GiB

ARC Efficiency:                                 261.90  m
        Cache Hit Ratio:                96.76%  253.43  m
        Cache Miss Ratio:               3.24%   8.48    m
        Actual Hit Ratio:               96.76%  253.43  m

This thing runs a bunch of services and also Poudriere builds, so it’s not your average box. Here are the current stats from top with no Poudriere jobs running:

Mem: 74M Active, 1683M Inact, 14G Wired, 104K Buf, 108G Free

In short, you’ll be fine with 8GB; 12GB+ is nice since you’ll likely be able to cache more data and possibly run a few more services comfortably (if wanted).

2 Likes

I’ll also add:

For most people, turning on deduplication is a mistake too. Compression? Fine, turn it on; there’s basically no real performance penalty on modern CPUs, even if the gains aren’t much in terms of capacity. Deduplication is normally very much a losing proposition unless you’ve got a HEAP of duplicated data due to, say (for example), a bunch of end users storing the same shit in every home folder shared out from the NAS.
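Compression is a one-liner to turn on and to sanity-check, e.g. on a pool named tank:

```
# Enable LZ4 compression pool-wide (only data written from now on is compressed)
zfs set compression=lz4 tank

# See what compression is actually buying you
zfs get compressratio tank
```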

I’ve run FreeNAS/TrueNAS for years with 2GB of RAM on an old HP N54L Turion-based machine.

I added 8GB to it and noticed basically no difference in performance for my single-user purposes. Except that with 10GB total I no longer got complaints from the machine that less than 8GB was “unsupported”, and I could run a few plugins on it.

I recently (1-2 years back) upgraded it to a Ryzen-based setup with 32GB of RAM and guess what: for my purposes, essentially no difference. Both boxes would saturate 1Gb Ethernet or Wi-Fi 6. I can just run a lot more VMs and stuff on it now.

Big ARC caches etc. work great if you have many users hitting the machine. If you only have a single user or a small number of users, chances are you’re mostly NOT hitting the same files all the time anyway, and the ARC is of limited use.

Remember: ZFS was built for the enterprise, to handle hundreds or thousands of concurrent users on a single box full of spinning disks, back when SSDs were so expensive that they were only used as cache in high-end file servers (these days you just fill the enterprise NAS/SAN with flash and be done with it).

The caching features, L2ARC, log devices, etc. are massive overkill for the typical home NAS box storing a bunch of archive-type data or backups, or even for streaming media.

If you’re running a VM workload on it, things are a bit different, but if all you’re using it for is a replacement for a home-user-style NAS, you really don’t need to deep-dive into system requirements and throw shitloads of resources at it.

3 Likes