TrueNAS storage expansion question

Question for you experts

I have a pool made of 6 x 4TB SATA/SAS disks, a 1TB NVMe L2ARC, and 2 x Intel DC S3700 800GB SATA SSDs for metadata. The 6 drives are arranged as 3 mirrors.

I’m starting to get up there in used space. I’m at 65% now, so I want to start planning the upgrade.

I have 2 x 4TB SAS disks spare, and then at least 20 x 8TB SAS disks.

I am hesitant to swap out the disks with the 8TB SAS disks, as it would leave me with a pile of 4TB disks I have zero use for. So I’m tempted to add another mirror of the leftover 4TB disks, and from then on use the 8TB disks as replacements.

If I add another mirror, will the data re-distribute itself across the pool? Is that something I need to be concerned about?

And let’s say I add a mirror of 8TB disks, making it 4 + 4 + 4 + 8. Will that cause a problem? Will some data be faster than other data?

Yes, it will, but it does this by putting more data on the mirror with the most free space every time it writes a stripe.

It won’t re-balance the existing data.

hypothetical example:

  • mirror 1: 4G, 50% full
  • mirror 2: 8G, new and empty

… new writes will go roughly 80% to mirror 2 and 20% to mirror 1, in proportion to free space (8G free vs. 2G free). Your existing stored data stays put.
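The split in that hypothetical can be sketched with quick shell arithmetic (the free-space numbers are the made-up ones from the example above, not from a real pool):

```shell
# Hypothetical free space per mirror, from the example above
free1=2   # mirror 1: 4G, 50% full -> 2G free
free2=8   # mirror 2: 8G, empty    -> 8G free
total=$((free1 + free2))

# ZFS biases new allocations toward the vdev with more free space,
# roughly in proportion to how much free space each vdev has
echo "mirror 1 gets ~$((100 * free1 / total))% of new writes"   # ~20%
echo "mirror 2 gets ~$((100 * free2 / total))% of new writes"   # ~80%
```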

New writes will be striped across 2 mirrors. The old data will remain un-striped.

ZFS will handle it (i.e., it won’t be a problem in terms of functionality), but you are correct in assuming the new data will be faster than the old data.
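For what it’s worth, adding a fourth mirror is a one-liner. A sketch only, with hypothetical pool and device names (check `zpool status` for your real ones):

```shell
# Dry-run first: -n prints the resulting pool layout without changing anything
zpool add -n tank mirror /dev/da6 /dev/da7

# If the layout looks right, add the new mirror vdev for real
zpool add tank mirror /dev/da6 /dev/da7
```

The `-n` dry run is worth the habit; `zpool add` takes effect immediately and mistakes with the wrong vdev type are painful to undo.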

I’ve run a similar scenario for years, upgrading a 4-bay NAS (2 mirrors) 2 drives at a time. I ran 2 unbalanced mirrors for about a decade without issue - for home use you’ll probably outrun gigabit ethernet with a single mirror most of the time anyway (at least for sequential transfers).

I have dual 10Gb ethernet, and I’m used to seeing that 1GB/s transfer rate!

Is there a way to tell the system to re-stripe everything?

Sounds like the best idea may be to slowly replace with 8TB disks and expand the pool all at once

No. ZFS never moves data by itself. It does, however, try to load-balance new writes as much as possible, so you will see proportionally more allocation on the new mirror. I’m not sure whether it will stripe normally across all vdevs until the 4TB vdevs hit 100% capacity and then allocate everything on the new vdev, or whether it’s a gradual process with the fourth vdev getting proportionally more data. As far as I know ZFS, I assume the latter.

I wouldn’t call it a problem, but some data will be faster. To what degree I cannot say, as I’ve never experienced this myself. Over time, as you modify or delete data, the uneven distribution will balance itself out. Or you can restore the pool from backup, which eliminates these “legacy” imbalances. ZFS doesn’t have a rebalance feature like BTRFS, though that is certainly on the community’s wishlist.

Asymmetrical vdevs are an intended feature, and since the last vdev has double the capacity, you get a 1-1-1-2 allocation ratio if ZFS aims for an equal percentage of capacity used on each vdev; that’s just physics and math. But this is usually fine, because the 8TB drives are probably a bit faster in general, especially at low capacity when writing to the outer edges of the platters.

I would just plug them in. I don’t think you will notice performance deficits. But feel free to post some iostat or zpool list if you feel like sharing disk utilization and capacity distribution.
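For reference, the per-vdev capacity and I/O numbers mentioned above can be pulled like this (the pool name `tank` is a placeholder):

```shell
# Per-vdev capacity and allocation: shows how full each mirror is,
# which makes any imbalance between old and new vdevs visible
zpool list -v tank

# Per-vdev I/O statistics, refreshing every 5 seconds: shows whether
# reads/writes are actually skewed toward one mirror
zpool iostat -v tank 5
```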

If you are not happy with how things work, you can always remove the new vdev (mirrors are just great) and ZFS will automatically redistribute its data onto the first 3 vdevs via an evacuation process.
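A sketch of that removal, assuming a hypothetical pool named `tank` and the fourth mirror showing up as `mirror-3` in `zpool status` (verify the real vdev name on your system first):

```shell
# Remove the fourth mirror; ZFS evacuates its data onto the remaining vdevs
zpool remove tank mirror-3

# Evacuation runs in the background; watch its progress here
zpool status tank
```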

edit: good call on increasing pool size at 65%. Things start slowing down and fragmentation can become an issue. Certainly better than waiting until 80-90% from a performance perspective.

Thanks for the information guys, big help

So I can remove an entire mirror? For some reason I thought I could only add mirrors, not remove them

If I can remove them, I can add these 4TB disks to get more space, and then as they fail and I swap them out for 8TB disks, I could eventually remove the fourth mirror.

If all vdevs are mirrors and share the same ashift, you can remove entire mirrors. This also applies to the special vdev. Removal is NOT possible with RAIDZ vdevs, which is probably why you remember it being impossible. RAIDZ has several disadvantages, device removal and attach/detach being among them.

Replacing old disks with new ones is done via replace, and autoexpand=on will take care of adding the new capacity to the pool once both sides of a mirror are, e.g., 8TB.
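That replace-and-grow workflow looks roughly like this. A sketch only, with hypothetical pool and device names:

```shell
# Let the pool grow automatically once both sides of a mirror are bigger
zpool set autoexpand=on tank

# Replace one side of a 4TB mirror with an 8TB disk
zpool replace tank /dev/da0 /dev/da8

# Wait for the resilver to finish before touching the other side
zpool status tank

# Then replace the second side of the same mirror
zpool replace tank /dev/da1 /dev/da9

# With autoexpand on, the extra capacity appears once both sides are 8TB
```

Replacing one side at a time keeps the mirror redundant throughout; never pull both disks of a mirror at once.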

Otherwise: yes. But keep in mind that you can’t remove a vdev if its data wouldn’t fit on the remaining vdevs (i.e., if removal would push the pool past 100% capacity).

This topic was automatically closed 273 days after the last reply. New replies are no longer allowed.