ZFS Metadata Special Device: Z

For ZFS: Datacenter > Storage > enable thin provisioning for the ZFS pool you like (double-click the line).

I have a question regarding ZFS metadata special device redundancy.

The standard recommendation is to use at least a 2-way mirror, and to use SSDs for fast access.

However, my understanding is that an n-way mirror would not protect against bit rot or other silent data corruption, which raidz1/2/3 does, because checksums and parity let ZFS detect block corruption and rebuild the corrupted block from parity when corruption is detected.

Using plain mirrors for the special metadata device does not leverage this awesome data integrity feature of raidz, and the metadata special device is the most critical single point of failure of a zpool: loss of this vdev results in the loss of the whole pool.

Considering the above, shouldn't raidz1 (or better, raidz2 or raidz3) be the minimum configuration for a metadata special device, rather than a plain mirror?

Am I getting something wrong here?

ZFS mirrors are checksummed too. A 2-way mirror can detect single-disk bit rot via checksums and repair the bad copy from the good one; raidz is not required for self-healing.
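For what it's worth, a scrub exercises that self-healing on any redundant vdev, mirrors included. A minimal sketch with made-up pool and device names:

zpool create tank raidz2 /dev/sda /dev/sdb /dev/sdc /dev/sdd special mirror /dev/nvme0n1 /dev/nvme1n1
zpool scrub tank        # reads every copy, verifies checksums, repairs bad blocks from the good side
zpool status -v tank    # the CKSUM column counts corruption that was detected and fixed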

Thanks!

I have a question about storage configuration.
I use a special device with special_small_blocks=64K.
Currently, I’m using a ZVOL with a volblocksize of 16K.
Will all the data from the ZVOL be stored on the special device?
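If I read the zfsprops man page right, blocks smaller than or equal to special_small_blocks are stored on the special vdev, so a 16K volblocksize zvol under a 64K threshold would land there entirely. A quick way to check, with placeholder names:

zfs get volblocksize pool/zvol     # placeholder zvol name
zfs get special_small_blocks pool
zpool list -v pool                 # per-vdev usage; watch the special vdev fill up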

If I had 4 identical U.2 NVMe drives that support namespaces, does it make sense to create 2 namespaces on each: one for an HDD pool's metadata special device and one for an NVMe pool? For example, with a 3.84TB U.2 and a 120TB HDD pool (0.3% is 360GB), splitting each U.2 into 500GB (special device) and 3.34TB (NVMe pool in a ZFS RAID 10).

Are there any massive cons I'm missing, especially if I triple- or quad-mirror the special device? Or is it always best to keep the devices separate, which requires a lot more PCIe lanes?
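For what it's worth, the namespace split itself would be done with nvme-cli before building the pools. A rough sketch with assumed device paths, 512-byte LBAs, and a controller ID of 0; deleting namespaces destroys all data on them:

nvme id-ctrl /dev/nvme0 | grep -i cntlid   # find the drive's real controller ID
nvme delete-ns /dev/nvme0 -n 1             # drop the factory namespace
nvme create-ns /dev/nvme0 --nsze=976562500 --ncap=976562500 --flbas=0   # ~500GB at 512B LBAs
nvme attach-ns /dev/nvme0 --namespace-id=1 --controllers=0
# repeat create-ns/attach-ns for the ~3.34TB namespace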


Well guys, I messed up badly. Is there no way to save my pool?
Instead of adding the special mirror, I added a regular data mirror to my pool while troubleshooting the original command…

Worse, I don't think you can remove /dev/sdq, but also, it is not a mirror (yet), so the whole pool is kind of being held up by a single drive…

I would attach a second drive (or two, preferably) to sdq.
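Something like this, where /dev/sdr stands in for whatever new drive you add; attach turns the lone device into a mirror:

zpool attach MainPool /dev/sdq /dev/sdr   # /dev/sdr is a placeholder
zpool status MainPool                     # wait for the resilver to finish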

Sorry for the confusion: I added a mirror, but my screenshot shows my attempts at fixing it halfway. I detached one drive, which is why you see only sdq left.

zpool add MainPool -o ashift=12 mirror /dev/sdp /dev/sdq -f

was the command I used while trying to troubleshoot the syntax (yes, I know).

Sorry for the pain; I think you will have to rebuild the array and restore from backup to a new set.
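Before rebuilding, it might be worth trying top-level vdev removal; it only succeeds when the pool has no top-level raidz vdevs and the ashifts match, and it errors out harmlessly otherwise. A sketch against the pool from the command above:

zpool remove MainPool sdq   # use the vdev name shown by 'zpool status'
zpool status MainPool       # shows evacuation progress if the removal started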

What was the capacity filled?

Around 90 TB

Thanks. I meant percentage, roughly.

It would not have been ideal, but since the pool is hosed, I was wondering if you had enough backup space to back up and rebuild without the third top-level vdev.

I learned the hard way to have enough capacity to completely rebuild, and that includes physical space for drives to attach.
I was lucky to learn that very early, and no data was lost, but that was luck.
I still run bargain-basement chassis and used enterprise drives, because drives die, and I can accept that.

Around 60% filled; a little tough but manageable. I just ordered a few more drives to help make things easier.

How I use ZFS might be a bit different, and a bit silly… (I just use it on a desktop, and I don't even have that many drives: just an NVMe SSD, a SATA SSD, and a rusty spinny HDD.)

Now, before, I wasn't considering it whatsoever… but I decided to do a silly experiment: a 64 GB partition on the NVMe, used as a special vdev for the HDD, alongside setting special_small_blocks. I do know it carries higher risk, since an unmirrored special vdev takes the pool down with it. But again, what I couldn't bear losing, I have backups of. This is solely to make the HDD more comfy to use.
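A sketch of that kind of setup, with placeholder pool and partition names (the pool then depends on that one partition surviving):

zpool add tank special /dev/nvme0n1p4   # unmirrored: the pool now dies with this partition
zfs set special_small_blocks=64K tank   # small data blocks join the metadata on the NVMe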

And man, after a while of using it, the difference is astounding. Aside from playing games on it, it barely feels like an actual spinning-rust drive. And at such a low price… I just love how ZFS lets me get the most out of existing hardware :smiley:

Hello everybody,

if I have a pool with a SLOG (Optane P4800X with power-loss protection), is the SLOG also used as the special device?

My question is mostly: can I use consumer drives for the mirrored special device (I already have them and would prefer not to purchase new parts), considering that consumer drives lack power-loss protection?

I'd prefer not to lose any data in case of a power loss.
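For reference, log and special are separate vdev classes, so a SLOG is never reused as the special device; each has to be added explicitly. A sketch with placeholder device names:

zpool add tank log /dev/optane                      # SLOG: holds only the sync-write intent log
zpool add tank special mirror /dev/ssd1 /dev/ssd2   # metadata allocation class lives here
zpool status tank                                   # 'logs' and 'special' appear as separate sections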

I’ve finally started looking into building a ZFS-based NAS and man, I regret not buying some P1600X Optane drives for this when I saw Wendell’s video about the price drops.

I was a bit low on cash at the time and couldn’t afford to hoard a couple of 118GB drives for future use. Now they’re sold out at normal retailers and close to $200 a piece on ebay for new ones. :frowning:

I hope I can find a pair for a reasonable price at some point.

Does anyone here have experience using a pair of drives in a mirror for the Special Device on an ASM2812-based HBA card?