ZFS pool config

So I have found myself as the new owner of 16 x 2 TB SATA SSDs. I’m planning on setting these up in my TrueNAS box via an LSI 9300-16i and am looking for suggestions on vdev structure.
8x2 striped mirrors - 16 TB usable - fastest, but the smallest usable space
4x4 RAIDZ1 vdevs - 24 TB usable - least failure tolerance (one drive per vdev)
2x8 RAIDZ2 vdevs - 24 TB usable - longest resilver times
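For a quick sanity check, the nominal numbers for those three layouts work out like this (a rough sketch that ignores ZFS metadata/slop overhead and base-2 vs. base-10 sizing; the function name is just for illustration):

```python
def usable_tb(drives: int, drive_tb: int, vdev_width: int, parity: int) -> int:
    """Nominal usable capacity: each vdev contributes (width - parity) data drives.

    A 2-way mirror counts as width=2, parity=1 for capacity purposes.
    Ignores ZFS overhead, so treat the result as an upper bound.
    """
    vdevs = drives // vdev_width
    return vdevs * (vdev_width - parity) * drive_tb

print(usable_tb(16, 2, 2, 1))  # 8x2 striped mirrors -> 16
print(usable_tb(16, 2, 4, 1))  # 4x4 RAIDZ1          -> 24
print(usable_tb(16, 2, 8, 2))  # 2x8 RAIDZ2          -> 24
```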

These are used Micron 1100 drives.

Personally I’m a fan of RAIDZ2 for most things, but it depends on what your use case is. If you need IOPS or better redundancy, go with mirrors; if you need the extra capacity, RAIDZ2. In either case, I wouldn’t recommend RAIDZ1 given they’re used drives.


75% storage efficiency and still 4 vdevs for writes and IOPS. I’m usually the mirror guy, but 4 vdevs with SATA SSDs will be fast, so a 3+1 config is totally fine.

SSDs aren’t HDDs. There is far less mechanical failure, so the best-practice pool configs we use for HDDs don’t really apply. I prefer 2+1 Z1 (3-wide) for a power-of-two number of data disks and the extra vdev you get.

Mirrors for max IOPS, Z1 to borrow the best of both worlds, and wide Z2/Z3 for “why is my neighbor’s HDD pool faster?”

Narrow RAIDZ vdevs resilver faster, and SSDs are pretty fast in general, so a resilver will be done in no time. And with two or more HDDs scheduled as a target for periodic zfs send replication as backup, everything is fine.


The answers to these questions will determine your path:

  • What are the consequences of pool failure/downtime?
  • Is this pool backed up regularly?
  • How much capacity do you NEED?
  • How many IOPS do you need? Are you throughput-limited elsewhere (e.g., gig networking, a low number of concurrent users)?

If you don’t need as much capacity as possible, RAIDZ1 is out, which leaves either mirrors or RAIDZ2.

I’m a fan of mirrors due to simplicity (and my relatively low storage capacity requirements) and speed (sometimes I use them for local VMs), but if you’re using this mostly for archive (and even if you’re not, but only have a small number of file-share users), RAIDZ2 will likely outrun your network.

Mirrors are also good because you can (I’ve done it many times) upgrade one vdev at a time. Yes, unbalanced, blah blah, but if funds are tight and you’re out of space and NEED more urgently, you can upgrade two drives and get out of the poo. Good for home use. Otherwise you’re looking at upgrading a full set of RAIDZ2 drives at a time, which is significantly more money…

Of course, DURING each mirror upgrade you run the gauntlet of the remaining drive in the mirror failing whilst resilvering to the new one, but… home user… I’ve gone through four sets of drive upgrades on my dual mirror so far without issues - YMMV. All my important stuff is synced elsewhere.

If you’re doing local VMs or local DBs on it (i.e., not over the network), mirrors for the win. But for a local file share with a small number of users (e.g., home, or a small 10-person office doing general office stuff) - even with spinning rust - you simply won’t need the IOPS that mirrors gain you.

2c.


Should have mentioned: this is running on 10 Gb networking. IOPS are not the highest priority, as I have a 1 TB Optane 905P I picked up on sale earlier this year. Primary use at the moment would be an iSCSI share to my gaming PC for game storage, Nextcloud, and general network storage for random VMs that I want on SSD but don’t want taking up space on the Optane. The option of upgrading a 2-disk vdev, as opposed to 4 or 8 disks, does sound more reasonable. Additionally, I’d be able to rearrange the 4 TB HDDs I already have into a 6-disk RAIDZ2 as an onsite backup target at no additional cost, on top of my offsite backups.


Games are random-read heavy for the most part. Some request large chunks of sequential reads, but that’s trivial; random reads are what slow things down. Writes aren’t really a factor.

You don’t want RAIDZ, let alone wide RAIDZ, in this case. We’re talking about a large zvol with a 4k-16k volblocksize. This is as hardcore as it gets, and RAIDZ doesn’t like 4k random IO. ARC can’t cache everything, so you rely on the disks to some degree, depending on ARC size.

Network latency will be the next bottleneck once the pool can handle the read IOPS. You won’t ever get the same performance as a local drive, but an optimized pool config and a proper volblocksize will get the most out of the disks.

RAIDZ also wastes more space on zvols (allocation padding), so you may end up with the same usable space as with mirrors, depending on how much of the pool is 4k-8k zvols. A 2 TB Steam library on a 24 TB pool isn’t enough reason to avoid RAIDZ on these grounds, but if half the pool is zvols, RAIDZ isn’t worth it.
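The padding effect is easy to quantify: RAIDZ stores one row of parity per (width − parity) data sectors and pads each allocation up to a multiple of (parity + 1) sectors. A rough sketch of that model, assuming ashift=12 (4 KiB sectors):

```python
import math

def raidz_sectors(data_sectors: int, width: int, parity: int) -> int:
    """Sectors allocated for one block on a RAIDZ vdev (simplified model).

    ZFS writes ceil(data / (width - parity)) rows of parity for a block,
    then pads the total allocation to a multiple of (parity + 1) sectors.
    """
    rows = math.ceil(data_sectors / (width - parity))
    total = data_sectors + rows * parity
    pad = parity + 1
    return math.ceil(total / pad) * pad

# 8K volblocksize = 2 sectors at ashift=12, on a 4-wide RAIDZ1:
alloc = raidz_sectors(2, 4, 1)  # 2 data + 1 parity, padded to 4 sectors
print(2 / alloc)                # 0.5 -> 50% efficiency, no better than mirrors

# 16K volblocksize = 4 sectors, same vdev:
alloc = raidz_sectors(4, 4, 1)  # 4 data + 2 parity = 6 sectors
print(4 / alloc)                # ~67%, still short of the nominal 75%
```

So the smaller the volblocksize relative to the stripe width, the more of RAIDZ’s nominal space advantage evaporates.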


I think the fact that the OP is using SSDs for this instead of rust does open up the options significantly.

But yeah, iSCSI over 10 GbE is a bit of a compromise - you’ll be limited to roughly 1.2 gigabytes/sec of throughput at best, which is well below the performance of even a single modern high-speed SSD.

Your iSCSI command rate will also be limited by the round-trip latency of the 10 GbE link.
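Back-of-envelope for that throughput ceiling (the ~5% framing figure for Ethernet/IP/TCP headers with standard 1500-byte frames is an approximation):

```python
# Rough payload ceiling for iSCSI over 10 GbE.
line_rate_bits = 10 * 10**9   # 10 Gb/s line rate
framing_overhead = 0.05       # approx. Ethernet + IP + TCP header cost, 1500 B frames
payload_bytes = line_rate_bits / 8 * (1 - framing_overhead)
print(round(payload_bytes / 10**9, 2))  # ~1.19 GB/s best case
```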

You may wish to consider using it as a Steam cache/backup location over SMB or NFS and storing your currently played games on a local disk for much better in-game performance, instead of running them directly off it via iSCSI.

Make sure to validate TRIM capability on the array.