How many disks for a TrueNAS system?

Would putting one of those fan modules in my case keep the noise down?

I’ll be replacing all of the fans with Noctua equivalents too.
I’m thinking the rack layout has (1U each):

                  |- shelf
                  |- switch
                  |- fan module
                  |- switch
                  |- fan module
                  |- storinator

I also have fan mounts on the top and bottom of the case.

Thinking about adding something like this. Might do an 8-fan model. I could use an Aquaero to set them to a constant 600 RPM to test how it impacts the other fan speeds.

If you don’t have a way to remove the hot air from the room/closet/batcave, then I doubt the fan modules would really do much.

It might take longer to reach equilibrium with the air being pushed around more, but you’ve got to get the heat out of there eventually to really make a difference.
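
For a rough sense of scale, here’s a back-of-envelope sketch using the common HVAC rule of thumb CFM ≈ 3.16 × watts / ΔT(°F) for air near sea level. The 500 W load and 20 °F allowable rise are made-up example numbers, not measurements from anyone’s rack:

```python
# Back-of-envelope: airflow needed to remove a given heat load.
# Rule of thumb for air near sea level: CFM ≈ 3.16 * watts / delta_T(°F).

def required_cfm(watts: float, delta_t_f: float) -> float:
    """Airflow in cubic feet per minute to carry away `watts` of heat
    with an intake-to-exhaust temperature rise of `delta_t_f` °F."""
    return 3.16 * watts / delta_t_f

# Hypothetical numbers: a 500 W rack and a 20 °F allowable rise.
print(f"{required_cfm(500, 20):.0f} CFM")  # -> 79 CFM
```

The point being: fan modules only shuffle that airflow around inside the rack; something still has to move roughly that much air out of the room.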

Well, I’m thinking I use top-of-the-line server cooling components and keep airflow going over everything. I can add a horizontal cooling unit if needed. I’m trying to quiet this down as much as possible through more efficient cooling, mainly by replacing fans and adding fans. Not sure how much it would all impact fan speeds.

I just don’t know about adding sound dampening. I could add some to my current cabinet. The sound-dampened server racks are all pretty expensive; they cost as much as an entire $7k to $10k enterprise server.

You don’t need 1 GB of RAM per TB of storage unless you’re doing de-duplication.

The ZFS RAM requirement isn’t a requirement for CACHE; it’s a requirement to store the de-duplication SHA-256 hashes for every block on the filesystem, and it’s a rough rule of thumb based on the premise that for X TB of storage, there are likely to be approximately Y GB of SHA-256 hashes for unique blocks on the filesystem. It’s a bit of a guesstimate and YMMV depending (I think?) on how unique your data set is (more unique blocks = more different hashes).
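
To put rough numbers on that rule of thumb, here’s a minimal sketch. The ~320 bytes per dedup-table entry and the 128 KiB average block size are common ballpark figures, not fixed ZFS constants, and it assumes every block is unique (the worst case):

```python
# Where the "Y GB of RAM per X TB" guesstimate comes from. Assumes every
# block is unique, ~320 bytes per DDT (dedup table) entry, and a 128 KiB
# average block size; all three are ballpark assumptions, not ZFS constants.

DDT_ENTRY_BYTES = 320          # rough in-core size of one dedup table entry
AVG_BLOCK_BYTES = 128 * 1024   # default recordsize; smaller blocks inflate this

def ddt_ram_gib(pool_tib: float) -> float:
    blocks = pool_tib * 2**40 / AVG_BLOCK_BYTES
    return blocks * DDT_ENTRY_BYTES / 2**30

for tib in (1, 10, 50):
    print(f"{tib:>3} TiB of unique 128K blocks -> ~{ddt_ram_gib(tib):.1f} GiB of DDT")
```

That works out to roughly 2.5 GiB of table per TiB at 128K blocks; drop the average block size and the requirement balloons fast.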

Essentially the same SHA-256 hashes that ZFS uses for data integrity are used for checking whether a block is unique: if the hash has already been seen, the block is considered a duplicate. Doing that check requires a hash lookup for every single write to the filesystem.
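
To make the mechanics concrete, here’s a toy model of hash-based in-line de-duplication. It’s illustrative only; real ZFS keeps the dedup table on disk, caches it in ARC, and can optionally verify byte-for-byte on hash collisions:

```python
# Toy model of in-line block de-duplication: hash every written block,
# and if the hash has been seen before, bump a refcount instead of
# storing the data again. Illustrative only; not how ZFS is implemented.
import hashlib

store = {}     # hash -> block data (stands in for on-disk blocks)
refcount = {}  # hash -> number of references

def write_block(data: bytes) -> str:
    h = hashlib.sha256(data).hexdigest()
    if h in store:                 # seen before: duplicate, no new storage
        refcount[h] += 1
    else:                          # unique: store it and start a refcount
        store[h] = data
        refcount[h] = 1
    return h                       # the "block pointer" is just the hash here

write_block(b"A" * 4096)
write_block(b"A" * 4096)   # duplicate: refcount becomes 2, nothing new stored
write_block(b"B" * 4096)
print(len(store), refcount)  # 2 unique blocks stored for 3 writes
```

Note that every single write path goes through a hash lookup; that lookup is exactly what has to stay in RAM to be fast.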

The reason ZFS needs so much RAM for de-duplication is because it is done live, in-line, unlike some other platforms where de-duplication is done via a scheduled job.

Without the DDT (the dedup table / hash table) in RAM, ZFS would need to go to disk to search the DDT, potentially for every block write, which would (and does) have obviously catastrophic consequences for performance. As in, you won’t just notice things being a bit slow: performance drops to unusable once the quantity of unique block hashes no longer fits in available RAM.

If you aren’t running de-duplication (and in the age of cheap storage, 99% of people definitely shouldn’t be), you don’t need that much RAM for ZFS.

You especially shouldn’t be doing de-duplication for a home setup, where you’re not doing things like storing data for thousands of users at the same business, with its high chance of heavily duplicated data.

Outside of de-duplication: more RAM will of course make ZFS go faster, but with diminishing returns. In a low-user-count environment (e.g., home), you’ll probably find it uncommon to keep hammering the same data (especially if it’s streaming media, for example), and if you do, that hot data set is probably quite small anyway.

I’ll have gains from deduplication since I follow an hourly, daily, weekly, and monthly backup regime per system. The only real use for cache would be speeding up restores. Saving snapshots is usually very quick, unless it’s the first time.

You will have gains, as in you haven’t turned it on and are guessing?

You can run a zfs command (I forget the exact one; see below) to determine just how much you would see in savings from de-duplication.

Do that first; most of the time it isn’t worth it.
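
For what it’s worth, the command being thought of is most likely `zdb -S <pool>`, which simulates de-duplication against the data already in a pool (read-only, nothing gets enabled) and prints a DDT histogram with an estimated dedup ratio at the end:

```sh
# Simulate dedup on an existing pool without enabling anything.
# "tank" is a placeholder pool name; substitute your own.
zdb -S tank
```

If the estimated ratio at the bottom isn’t well above 1.0x, dedup won’t pay for its RAM.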

Snapshots in ZFS are already effectively “de-duplicated” without de-dupe (i.e., without the de-duplication option being enabled): a snapshot just holds references to the original blocks, which stay shared until they’re modified. Snapshots are essentially instant because no data is copied at creation time (unlike on dumber platforms); newly modified blocks just get written to a new place.
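
A toy sketch of that copy-on-write idea (illustrative only, not ZFS internals): the snapshot is just a second set of references to the same blocks, and a later write swaps in a reference to a new block while the snapshot keeps pointing at the old one:

```python
# Toy copy-on-write snapshot: a snapshot is just a copy of the list of
# block references, so creating it copies no actual data. Overwriting a
# block in the live filesystem writes a NEW block and swaps the reference;
# the snapshot still points at the old one. Illustrative only.

blocks = {1: b"old data", 2: b"other data"}  # block id -> contents
live = [1, 2]                # the live filesystem's block references
snapshot = list(live)        # "snapshot": instant, no block data copied

blocks[3] = b"new data"      # copy-on-write: the modified block goes elsewhere
live[0] = 3                  # live fs now references the new block

print([blocks[b] for b in live])      # [b'new data', b'other data']
print([blocks[b] for b in snapshot])  # [b'old data', b'other data']
```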

Repeat for emphasis: if you’re using ZFS snapshots, you don’t need de-duplication to save space.

In-line de-duplication is something else, though.
