Advice on building a TrueNAS server with 16 hot-swap bays

Hi there

Looking for advice on rebuilding my TrueNAS server. I am switching to using it solely as a NAS, with no VM or container functionality.

I have a 16-bay case and a 9305-16i HBA on an ASRock Rack 570 board, with 10 GbE networking and two NVMe drives on board.

Really looking for what you would consider best practice with regard to vdev layout, RAID levels, etc. I would also be interested to know whether a special vdev would be useful.

I do lots of video work and also general development, so it's a combination of large and small files with a need for random as well as sequential reads and writes.

So really a general-purpose NAS only, with a good balance between speed and redundancy. Overall capacity is a medium priority.

Thanks for helping

I mean, it depends on your expectations, but with ordinary spinning disks and a need for good random read/write performance there is really only one option, and that is striped mirrors (RAID10) with a SLOG and a special vdev. You could also test RAIDZ2 with a special vdev and sync disabled; maybe that's enough for you. I assume you don't need heavy random and sequential IO at the same time, otherwise you'd have to create two pools anyway.
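
TrueNAS would normally set this up through the UI, but as a rough sketch of what that striped-mirror layout looks like underneath (pool and device names here are just placeholders):

```
# Sketch only -- "tank" and the sdX/nvmeX names are placeholders for your real devices.
# 8 mirrored pairs (RAID10-style) across the 16 bays:
zpool create tank \
  mirror sda sdb  mirror sdc sdd  mirror sde sdf  mirror sdg sdh \
  mirror sdi sdj  mirror sdk sdl  mirror sdm sdn  mirror sdo sdp

# Mirrored special vdev on the two onboard NVMe drives:
zpool add tank special mirror nvme0n1 nvme1n1

# A SLOG needs yet another device (or partitions carved off the NVMe drives):
# zpool add tank log nvme2n1
```

Note that with only two onboard NVMe drives you'd have to either partition them or choose between a SLOG and a special vdev.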

I have ten 12TB 7200rpm disks in RAIDZ1 with a 2x NVMe special device, an NVMe cache device and lots of RAM. Sync is disabled and compression is LZ4.
RAIDZ1 is a bit risky with ten 12TB disks, but I replicate every other day to a cold backup system, which is much safer than RAIDZ3 without a backup :wink:
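
For reference, those two settings are just per-dataset properties; something like this (the pool/dataset name is made up):

```
# Hypothetical pool/dataset name -- adjust to your own layout.
zfs set compression=lz4 tank/media
zfs set sync=disabled tank/media   # faster writes, but async data in flight is lost on a crash or power failure
```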

I went with 3 vdevs of 5 drives for my latest build.

[image: recommended drive counts for RAIDZ1-RAIDZ3]

Those are the recommended drive counts for each of RAIDZ1-3, so just split your drives into groups that match one of those layouts if you are OK with the capacity loss.
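
If you wanted to copy that sort of layout across your 16 bays, it could look roughly like this (sketch only; pool and device names are placeholders, and whether RAIDZ1 is enough protection for large disks is your call):

```
# Sketch: three 5-disk RAIDZ1 vdevs plus one hot spare in the 16th bay.
zpool create tank \
  raidz1 sda sdb sdc sdd sde \
  raidz1 sdf sdg sdh sdi sdj \
  raidz1 sdk sdl sdm sdn sdo \
  spare sdp
```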

Thanks both


I really, really like my special vdevs. My current setup:

  • 4x12TB HDDs (mirrored vdevs)
  • 2x1TB Samsung enterprise SATA SSDs (mirrored special vdev)

The special vdev holds all my metadata as well as some datasets where I’ve configured special_small_blocks so blocks/files below a certain size go straight to the SSDs. You can set that size on a per-dataset basis, which is great. My server has one big pool, but special vdevs let me use it for different use cases:

  • Small, high-speed datasets: set special_small_blocks equal to the recordsize (or volblocksize for zvols), and it’s like your pool is all SSD. My home server boots from a dataset like this and behaves just like an SSD pool.
  • All other datasets: set special_small_blocks as high as you can while staying below the recordsize/volblocksize, and you’ll get HDD performance and capacity, but very small files and metadata in that dataset will be accessed much faster (see the sketch after this list).
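
A minimal sketch of those two per-dataset settings (the dataset names and sizes here are just examples; check the dataset's actual recordsize first with `zfs get recordsize`):

```
# "All-SSD" dataset: send every block up to the (default 128K) recordsize to the special vdev.
zfs set special_small_blocks=128K tank/fast-dataset

# Mixed dataset: only metadata and small blocks (here <=64K) land on the SSDs,
# larger blocks stay on the HDDs.
zfs set special_small_blocks=64K tank/bulk-dataset
```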

For video work, having a special device would accelerate access to very small files and metadata, but your big files would perform about the same. It’s definitely nice to have, but if the “slow” part of your current NAS shows up when accessing large files in a big dataset, special vdevs won’t really fix that. Instead, fix that with things like:

  1. A more suitable pool layout. It’s very hard to give general recommendations that will apply to your needs specifically. I use mirrored vdevs for the convenience of being able to add/remove pairs as needed, and that tends to be speedy, especially when writing. However, you leave a lot of potential capacity on the table, and for large disk sizes RAIDZ configurations can be more reliable. If you have the luxury of time and spare HDDs, there is no substitute for setting up a test pool and benchmarking different setups with your own use case.

  2. Maxing out the RAM on your ZFS server. This accelerates the second time you access data: the initial load is limited by pool speed, but for a while afterwards the data will be cached in RAM (the ARC). If you can get enough RAM to fit a video project mostly or entirely in RAM, the performance should be fantastic.

  3. If you have diverse needs that can’t be met well with a single pool layout, revisit #1 but use different pools for your dissimilar needs. For example, I’m looking at splitting my current HDD pool into a large RAIDZ for high capacity, while keeping my backups on a mirrored vdev pool for better reliability.

  4. If you’ve already done the 3 items above and aren’t happy, consider an L2ARC. This doesn’t perform nearly as well as RAM, but if you already have the maximum possible RAM and the best pool layout(s) for your needs, consider one or more NVMe drives with the following properties (rough commands are sketched at the end of this post):

  • Enough capacity (between all drives) to fit all the data you’re working on, plus some margin for inefficiency
  • Fast enough write speed to react when you change what data you’re working on, while also being read from at high speed at the same time. For example, if you have a 1TB L2ARC but can only write to it at 1GB/s, the odds that the data on the L2ARC is “stale” are high.
  • Enough write endurance to cope with being rewritten continuously for your intended service life.
  • Fast enough read speed that it’s way faster to ask the L2ARC device(s) for the data instead of going straight to the pool. If you have “n” HDDs in your pool, aim for a sustained read speed far above the throughput of “n” HDDs.
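
As a rough worked example (assuming ~200 MB/s sustained per 7200 rpm HDD): a 16-disk pool can stream somewhere around 3 GB/s in aggregate for sequential reads, so a single L2ARC NVMe topping out near that wouldn’t help much there; where it really pays off is random reads, where each HDD only manages a few MB/s.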

L2ARC devices are generally striped (i.e. RAID0), so the speed and capacity of the individual drives simply add up. ZFS will safely ignore a failing L2ARC device after too many errors, so it’s not critical to have redundant L2ARC.
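
If you do go down that road, adding (and later removing) cache devices is low-risk since they hold no unique data; roughly like this (pool and device names are placeholders):

```
# Add two NVMe drives as striped L2ARC (cache) devices -- nvmeXn1 are placeholders.
zpool add tank cache nvme2n1 nvme3n1

# Cache devices can be removed again at any time without data loss:
zpool remove tank nvme2n1
```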
