Hey guys, I’m looking to set up a home server using TrueNAS Scale in the next 2 days, migrating from my current server. Here are the parts I have:
Still waiting on the RAM, and I don’t need the 10G networking at the moment. However, I’m unsure about my plan for a metadata cache. I was intending for one of the Optane drives to be a boot drive for TrueNAS, but it seems like they would be better used as a metadata cache at this point. The SN850 was intended as a general cache for the ZFS array. I’m not even sure that’s how this works? I might be getting things mixed up, and I think I may need an additional SSD for a boot drive.
2x Optane for metadata cache
1x SN850 for cache
1x boot drive?
I also wanted to have an SSD for fast storage external to the ZFS pool, which means now I’m completely screwed and need an add-on card. Does anyone see any mistakes above? I want to make sure I’m planning this out correctly, since ideally this is up and running by this weekend.
I have no idea what a metadata cache is.
There can be special vdevs that store only metadata and small blocks, but those are not caches (losing them would break the pool).
There can also be an L2ARC, which only contains data blocks evicted from ARC (RAM); metadata remains in RAM.
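To make the distinction concrete, here’s a minimal sketch of how each gets added. The pool name tank and the device paths are placeholders, not your actual setup:

```
# Special vdev: pool-critical, so mirror it. Losing it means losing the pool.
zpool add tank special mirror /dev/nvme0n1 /dev/nvme1n1

# L2ARC cache device: purely a cache, safe to lose, a single drive is fine.
zpool add tank cache /dev/nvme2n1
```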
I believe that’s what I’m referring to: there is a thread about it here, “ZFS Metadata Special Device: Z”. The way I understand it, a directory of the ZFS array is stored on the SSD to speed up browsing.
But that would take two of the three M.2 connectors on the board (assuming I put the Optanes in a mirror like suggested above), which doesn’t leave space for both a cache and a boot drive.
Beware that the special device is not a metadata cache.
It’s a pool-critical device: if it is lost, the whole pool is rendered useless.
So such a special vdev should be set up as at least a mirror.
Though unless you already know what kind of workload and performance you’ll have, it’s better to monitor with a default/minimum setup first and add those special devices later.
(You can still connect the drives, just don’t add them as vdevs yet.)
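For that monitoring phase, something like this is a reasonable starting point (again, the pool name tank is a placeholder):

```
# Overall ARC size and hit/miss behaviour
arc_summary

# Per-vdev IO while the pool is under your real workload;
# prints a fresh report every 5 seconds
zpool iostat -v tank 5
```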
Alright, since it seems you can do this at any time, I’ll set it up without the metadata layer. I’ll just use one of the Optanes I have as the boot drive for now and get more in the future, once I better understand how ZFS and that metadata layer work.
I think a mirror of Optane should be sufficient once I do decide to set it up, given their incredibly low failure rates.
You can also store small files on the special vdev. I personally have millions of them, and it speeds up the pool far more than the metadata does (which is mostly stored in ARC anyway). So a TB of special vdev has its value; mine has 420GB allocated as of yesterday, 70GB of it metadata.
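For anyone following along, routing small blocks to the special vdev is a per-dataset property. A sketch, with the pool/dataset name and the 64K cutoff purely as examples:

```
# Blocks at or below this size are written to the special vdev.
# Setting it >= the dataset's recordsize would send everything there.
zfs set special_small_blocks=64K tank/mydata

# Watch how much of the special vdev is allocated over time
zpool list -v tank
```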
You want to use the NVMe for L2ARC and the SATA SSDs for special. Special doesn’t need much bandwidth, as it’s mostly about fast access times, but for L2ARC you want bandwidth because it houses actual large user data.
But L2ARC might be rarely used (it depends on the actual L2ARC hit ratio, which largely depends on system setup and workload).
Metadata and small-block files might require more performance (IOPS and latency).
L2ARC is always in use unless you only access data that is already in ARC, so it should be the fastest device. Special and data vdevs only come into play if ARC+L2ARC don’t have the data stored.
Also, don’t get fooled by the hit-rate statistics from ZFS. The calculations are weird and misleading at best. An ARC hit rate of 99.6% meant nothing when I was reading 150GB of audio files yesterday: everything was pulled straight from L2ARC, and my special vdev and HDDs didn’t get a single IOP. The L2ARC hit rate stated 15% or 20% or so. It means nothing.
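If you want to see where reads are really being served from, watching the devices directly is more trustworthy than the hit-rate counters (tank is a placeholder pool name):

```
# Cache (L2ARC) devices get their own line here, so you can see whether
# a read workload is hitting L2ARC, the special vdev, or the HDDs.
zpool iostat -v tank 1

# arcstat prints interval statistics; L2ARC columns (e.g. l2read, l2hits)
# can be selected with -f, though field names vary by OpenZFS version.
arcstat 1
```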