Add drive to ZFS pool

So I wanted to add a drive to my 3 disk raid Z array. I did a sudo zpool add -f poolz1 /dev/sdc and it seems to have worked…

Is it really that easy?? Not to mention that was after easily recovering it from having to reinstall the OS.

Is this witchcraft?

It's good that you asked. Unfortunately you cannot expand the width of a raidz vdev by adding a disk. What you likely did was add a single disk as another top-level vdev. This means any data written to that vdev has no error correction (only detection), and if that drive dies, so does the pool. zpool would normally warn you about this, but it looks like you used -f, which forces the action through if it's at all possible.
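For future reference, a minimal sketch of the safer way to try that same command (same pool and device names as above): zpool add has a -n dry-run flag that just prints the layout that would result, and without -f it will refuse a vdev whose redundancy doesn't match the rest of the pool.

# Dry run: show the configuration that would result, without touching the pool
zpool add -n poolz1 /dev/sdc

# Without -f, zpool refuses to mix a raidz vdev with a bare single-disk vdev
zpool add poolz1 /dev/sdc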

What is the output of

zpool status poolz1

Or

zpool list -v poolz1

If what I think happened did happen, then there's no way to truly reverse it. The real fix to the problem is to do what is recommended in all cases anyway (a rough command sketch follows the steps):

  1. Replicate your pool to your backup array

  2. Scrub your backup array (NOT OPTIONAL. Wait for it to finish) and double check everything’s there

  3. Destroy your main pool, and remake the vdevs how you want

  4. Replicate the data back to your main pool from the backup.

  5. Scrub the main pool (Also not optional) and check that everything’s there.
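Roughly, in commands, assuming a backup pool named backup and a recursive snapshot called @migrate (both are placeholder names); double-check the send/receive flags against the man pages for your own layout before trusting this with real data:

# 1. Snapshot everything recursively and replicate it to the backup pool
zfs snapshot -r poolz1@migrate
zfs send -R poolz1@migrate | zfs receive -F backup/poolz1

# 2. Scrub the backup and wait for it to finish, then verify the contents
zpool scrub backup
zpool status backup

# 3. Destroy the main pool and recreate the vdev layout you actually want
zpool destroy poolz1
zpool create poolz1 raidz /dev/disk/by-id/<disk1> /dev/disk/by-id/<disk2> /dev/disk/by-id/<disk3> /dev/disk/by-id/<disk4>

# 4. Replicate the data back to the rebuilt pool
zfs send -R backup/poolz1@migrate | zfs receive -F poolz1

# 5. Scrub the main pool and verify everything again
zpool scrub poolz1
zpool status poolz1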

A temporary alternative is to add another identical drive to the single drive to make a mirror. I can guide you through that tomorrow evening if you like.
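For reference, the rough shape of that mirror conversion (disk paths are placeholders): zpool attach pairs a new disk with an existing one and turns that single-disk vdev into a two-way mirror.

# attach (not add) converts the existing single-disk vdev into a mirror
zpool attach poolz1 /dev/disk/by-id/<existing-single-disk> /dev/disk/by-id/<new-identical-disk>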

This is a very common mistake.


Also, it's always a bad idea to use /dev/sdX names when adding devices. Those labels can and will change across reboots, causing issues. Instead, get the persistent identifier with this (ignoring the "-part" entries):

ls -l /dev/disk/by-id

And add whole vdevs like this (adding a disk to a mirror vdev uses attach, not add):

zpool add yourpool mirror /dev/disk/by-id/wwn-0x1... /dev/disk/by-id/wwn-0x2...

This is the easier problem to fix: the pool can be exported, then re-imported explicitly using the by-id paths:

zpool export poolz1
zpool import -d /dev/disk/by-id poolz1

Yet…

The devs have wanted this to be a thing for years, but I’m sure it’s not there (yet)

I also think you added a single-disk (stripe) vdev

But adding to a pool is a feature; ZFS is designed to grow, and a pool is designed for an operator who knows what they are doing to increase its size (by adding whole vdevs).

The generally accepted guidance is to match the existing vdevs, so the pool stays stable and performs consistently.

One could (but shouldn't) have a pool with a raidz2 vdev of four disks, then add a two-disk mirror, making the pool like a hunchback, then add a single disk (which would act like a raid0 vdev), then add all sorts of vdevs. Then, when ZFS goes to write a bit of data, it would spread it among the vdevs, using raidz2 on the first bit, mirroring the second bit, then putting the raw bit on the single-disk vdev.
Then, if there is a cable error, a power fluctuation, a drive read error, or even a fault on the single-disk vdev, the whole pool dies, and has to be rebuilt from scratch.
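To make that concrete, a sketch of how such a lopsided pool could be built (pool and disk names are placeholders); note that everything after the first command needs -f precisely because ZFS objects to the mismatched redundancy:

zpool create tank raidz2 <d1> <d2> <d3> <d4>   # four-disk raidz2 vdev
zpool add -f tank mirror <d5> <d6>             # two-disk mirror vdev, mismatched redundancy
zpool add -f tank <d7>                         # single-disk vdev with no redundancy at all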

The performance also gets weird with mixed vdevs…

So I mean to say, ZFS will very much give you a great big gun, let you point it at your own foot, and -f the trigger.


It’s gonna be years before it happens unless someone picks it up and runs with it. All I’ve seen so far is brainstorming on how to expand DRAID. RAIDZ expansion would basically ride on the coattails of that solution. Edit: See links below

It’s a popular feature, but very low on actual priorities. :\

Edit: Actually, it looks like, years after going radio silent after various announcements, they may finally have something close-ish to beta test.

Still gonna be a while though before it’s in-in.


But like you said, it really seems @TheF1sh added a single-drive vdev to the pool, making it wonky now, like:

zpool status:
tank
    raidz
        /dev/sdb
        /dev/sdc
        /dev/sdd
    stripe
        /dev/sde

Or whatever the single disk is listed as.

It’ll likely show up as

NAME
poolz1
    raidz1-0
        sda
        sdb
        sdd
    sdc

yeah… no. that's not right. you just destroyed all redundancy. you basically just put that drive in raid 0 with the existing vdev

I did the same thing back in … whenever years ago, and needed to recreate the pool

At least these days, ZFS has a zpool-remove (which I’ve had the privilege to not have to try).

Kinda:

When the primary pool storage includes a top-level raidz vdev, only hot spare, cache, and log devices can be removed.

Just posted a guide on a Debian NAS with encryption and Btrfs… it's a [WIP]… but Btrfs can add/remove disks with witchcraft… and is probably good enough for most DIY folks and their DIY data…

it’s probably good enough for more than DIY folks and their DIY data if they have another backup

Ok, I figured something was going to hang me up. You guys nailed it and it showed up just as @Log stated.

Luckily it’s just a test so there’s no data on it yet. I’ll start over on it. I really thought it was a feature of zfs to be able to expand it on demand.

Well, what's the best way to delete this zpool and get all the ZFS markers off the 3 drives so I can start over?


sadly, yes.

good job on testing things out first!

Not being able to add a drive aside, I can’t believe how easy ZFS is to get going with. I know getting into the details will take some time, but I’m really impressed. Going to try some speed testing against the raid 6 array. Not that it matters but I’m curious.

Good to hear it was just a test. You can delete the pool with

zpool destroy poolz1
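If the drives still show old ZFS labels after that (or a disk previously belonged to some other pool), zpool labelclear can wipe the on-disk markers per device; a rough sketch, with the device path as a placeholder:

# Clear leftover ZFS label data from a disk that is no longer part of any imported pool
zpool labelclear -f /dev/disk/by-id/<disk>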

If you’re on a recent version of ZFS, it may be possible to recover from adding a single vdev. It’s also wise to create a checkpoint prior to doing any kind of disk operation on the pool. You can do this by running zpool checkpoint <pool>. Keep in mind you can only have one checkpoint per pool.
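A rough sketch of the checkpoint workflow, using the pool name from this thread; rewinding means exporting and re-importing, and it throws away everything written after the checkpoint:

# Take a checkpoint before a risky operation (only one can exist per pool)
zpool checkpoint poolz1

# If things go wrong, rewind: export, then import with the rewind flag
zpool export poolz1
zpool import --rewind-to-checkpoint poolz1

# If all went well instead, discard the checkpoint to free the space it holds
zpool checkpoint -d poolz1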

To remove a mirror or single device:

zpool remove <pool> <vdev>

From the zpool(8) man page:

Removes the specified device from the pool. This command currently only supports removing hot spares, cache, log devices and mirrored top-level vdevs (mirror of leaf devices); but not raidz.

I’ve done this to remove a mirrored vdev from an over-provisioned pool.
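For reference, a minimal sketch of what that removal looks like, with a hypothetical pool and vdev name (check zpool status for the real vdev name, e.g. mirror-1):

# Evacuate and remove an entire top-level mirror vdev; its data is copied to the remaining vdevs
zpool remove tank mirror-1

# The evacuation progress shows up in the pool status while it runs
zpool status tank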

Forgot about checkpoints, never used them yet. I’ll have to test that one of these days.

I believe that mirror type vdevs can only be removed if there are no top level RAIDZ vdevs, AND all vdevs have the same ashift.
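If you want to verify the ashift condition, one common way (assuming zdb can read your pool's configuration) is to grep it out of the vdev tree:

# Print each top-level vdev's ashift from the pool configuration
zdb -C poolz1 | grep ashift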
