Help expanding a ZFS server

Hey everyone, hope you all had great holidays without too many system issues!
I was asked by my boss to increase the capacity of one of our client servers. They are running TrueNAS SCALE with a topology of 8x4TB IronWolfs in RAIDZ2 and a single SSD as a metadata special vdev.
They already bought 8x8TB IronWolfs and a couple of Crucial MX500 SSDs.
First of all, I would like to replace that single SSD with a mirrored pair of SSDs, because I’ve read that a lone special vdev is a single point of failure. Another option is to remove the SSD entirely, because from what I understand it’s not being utilized. This is the output of zpool iostat -v:
special        -      -      -      -      -      -
  sdh1     18.3G   446G      0     11    120   204K
What would be the best approach to solve all these issues?
Should I create a new pool with the new disks and two new SSDs as a metadata vdev, and then transfer the data between pools?
Is it worth it to have a metadata vdev? The client mostly stores big video files.
Another option I thought of is to add the new disks as a vdev to the existing pool and then transfer the data between vdevs. Is that even possible?

Thank you very much!

There are several users here on the forum who actually work with ZFS, and hopefully one of them will give you a correct answer soon-ish.

In the meantime, the special vdev is the really weak point in the setup.

I would even say it removes the safety of the RAIDZ2 all on its own.

Double check what I say for yourself, but…

Personally, I would attach (not add) another SSD to the existing special vdev, to make it a mirror.

Then look at how much the used capacity has grown over what period of time, to decide between adding the extra drives as an additional vdev or replacing the existing drives with larger ones. I would say replace, in this case.

As it’s an enterprise system, their priorities may be different, but the existing drives may be older and headed towards failure. In a personal setting we would generally hold on to existing older drives and add the extra drives as an additional vdev, but if this fix / solution is for an enterprise system that has to last another 5-10 years between major changes, then replacing, drive by drive, might be the smarter move.

All drives die, and RAIDZ2 should still be safe with 8x8TB, so they can get warranty replacements as and when the drives fail.
Not sure how long the warranty lasts on the older ones, and the cost of maintaining/sourcing the 4TB ones for a further X number of years might not be attractive to the customer.
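
A minimal sketch of the drive-by-drive route (pool and device names here are made up, so adjust them to your system):

zpool set autoexpand=on tank
zpool replace tank /dev/disk/by-id/ata-old-4tb-drive /dev/disk/by-id/ata-new-8tb-drive
zpool status tank

Wait for each resilver to complete before replacing the next drive; with autoexpand=on the extra capacity appears automatically once the last 4TB drive has been swapped out.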

In a personal setting, for anything over 8TB I would possibly look at RAIDZ3, but in this case the customer already has the new drives.

Another thing to consider is the current backup capacity that the customer has.
RAID is good for uptime, and can be used in a backup system, but it is not itself a backup.
The backup system needs to cover the critical data, so it should match whatever portion of the data the customer needs (they might not care about the video data being saved, but they might back up all other company info). Perhaps the customer could make use of their left-over 4TB drives in a backup / colder system?
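
If they do go the backup-box route, ZFS replication would be the natural fit. A rough sketch, with made-up pool, dataset and host names:

zfs snapshot tank/data@weekly-02
zfs send -i tank/data@weekly-01 tank/data@weekly-02 | ssh backupbox zfs recv backuppool/data

The very first run would be a full send (no -i); after that, each run only transfers the changes since the previous snapshot.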

Hi, thank you very much for your kind response. The drives are newish, as they were installed just last year, so the upgrade was sold as an additional 40TB.

Yeah, that single SSD as a metadata special vdev is scary, because it introduces a single point of failure.

If I connect another similar SSD, how would I go about setting up a mirror? Is it possible to do it in the TrueNAS SCALE UI, or do I need a shell command?

Thank you!

On the command line, the command should be

zpool attach poolname existingdeviceid newdeviceid

As far as I know, zpool attach has no dry-run flag (that’s zpool add -n), so double-check the device IDs before running it. A fuller example might be

zpool attach tank /dev/disk/by-id/wwn-0x5003446378deadbeef /dev/disk/by-id/ata-crucial_mx500_12348756432789

or whatever /dev/adX or /dev/sdX name the new drive gets when inserted.

It seems the UI button might be “extend”, but that might also stripe instead of mirror, so I would be very careful before pulling the trigger.
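
Whichever route you take, zpool status afterwards should show the special vdev as a two-way mirror, roughly like this (device names made up):

special
  mirror-1                  ONLINE
    sdh1                    ONLINE
    ata-crucial_mx500_xxxx  ONLINE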

First build the new pool and migrate the data with zfs send; then you can play with the old one.
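
For the migration itself, a minimal sketch with made-up pool names:

zfs snapshot -r oldpool@migrate
zfs send -R oldpool@migrate | zfs recv -F newpool

The -R flag carries the whole dataset tree, snapshots and properties along; -F lets the receive overwrite the freshly created, still-empty target.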

Okay, thank you everyone!
I’ll try it ASAP after the holidays and let you know!