It’s good that you asked. Unfortunately, you cannot expand the width of a raidz vdev by adding a disk. What you likely did was add a single disk as another top-level vdev. That means any data written to that vdev has no error correction (only detection), and if that drive dies, so does the pool. zpool would normally warn you about this, but it looks like you used -f, which forces the action through if it can.
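In other words, the command was probably something along these lines (the device name here is just an example):
# this adds the disk as a brand-new, redundancy-free top-level vdev,
# rather than widening the existing raidz vdev
zpool add -f poolz1 /dev/sdh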
What is the output of
zpool status poolz1
Or
zpool list -v poolz1
If what I think happened did happen, then there’s no way to truly reverse it. The real fix is to do what is recommended in all such cases (a rough sketch of the commands follows the list):
Replicate your pool to your backup array
Scrub your backup array (NOT OPTIONAL. Wait for it to finish) and double check everything’s there
Destroy your main pool, and remake the vdevs how you want
Replicate the data back to your main pool from the backup.
Scrub the main pool (Also not optional) and check that everything’s there.
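A rough sketch of the replicate-and-scrub part, assuming the backup pool is called backup (names are illustrative; reverse the direction to copy the data back after you recreate the main pool):
# snapshot the whole pool recursively (snapshot name is arbitrary)
zfs snapshot -r poolz1@migrate
# replicate everything, with properties, to the backup pool
zfs send -R poolz1@migrate | zfs receive -Fu backup/poolz1
# scrub the backup and wait for it to finish before touching the main pool
zpool scrub backup
zpool status backup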
A temporary alternative is to add another identical drive to the single drive to make a mirror. I can guide you through that tomorrow evening if you like.
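Roughly, that looks like this (device paths are placeholders; use your real by-id names):
# attach a second disk to the lone disk, turning that vdev into a mirror
# (first device = the existing single disk, second device = the new disk)
zpool attach poolz1 /dev/disk/by-id/ata-EXISTING_DISK /dev/disk/by-id/ata-NEW_DISK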
This is a very common mistake.
Also, it’s always a bad idea to use /dev/sdX names when adding devices. Those labels can and will change across reboots, causing issues. Instead, get the persistent identifier with this (ignoring the “-part” entries):
ls -l /dev/disk/by-id
And add whole vdevs like this (adding a disk to a mirror vdev uses attach, not add)
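For example, something like this, where the disk names are placeholders for whatever shows up under /dev/disk/by-id on your system:
# add a whole new raidz2 vdev (six disks here, purely as an example)
zpool add poolz1 raidz2 \
  /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2 /dev/disk/by-id/ata-DISK3 \
  /dev/disk/by-id/ata-DISK4 /dev/disk/by-id/ata-DISK5 /dev/disk/by-id/ata-DISK6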
The devs have wanted this to be a thing for years, but I’m sure it’s not there (yet)
I also think you added a single disk as its own vdev (effectively raid0, no redundancy)
But adding to a pool is a feature: ZFS is designed to grow, and a pool is designed to let an operator who knows what they are doing increase its size (by adding whole vdevs).
The generally accepted guidance is to match the existing vdevs, so the pool stays stable and performs consistently.
One could (but shouldn’t) have a pool with a raidz2 vdev of four disks, then add a two-disk mirror, making the pool like a hunchback, then add a single disk (which would act like a raid0 vdev), then add all sorts of other vdevs. Then, when ZFS goes to write data, it shares it among the vdevs: raidz2 on the first bit, mirroring on the second bit, and the raw, unprotected bit on the single-disk vdev.
Then, if there is a cable error, a power fluctuation, a drive read error, or any other fault on the single-disk vdev, the whole pool is gone and has to be rebuilt from scratch.
The performance also gets weird with mixed vdevs…
So I mean to say, ZFS will very much give you a great big gun, let you point it at your own foot, and -f the trigger.
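To make that concrete, here’s roughly what that foot-gun looks like (pool and disk names are made up; don’t actually run this):
# start with a four-disk raidz2 vdev
zpool create tank raidz2 disk1 disk2 disk3 disk4
# bolt on a mismatched two-disk mirror vdev (zpool complains, -f overrides)
zpool add -f tank mirror disk5 disk6
# bolt on a single bare disk as its own vdev; the whole pool now depends
# on this one unprotected drive
zpool add -f tank disk7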
It’s gonna be years before it happens unless someone picks it up and runs with it. All I’ve seen so far is brainstorming on how to expand DRAID. RAIDZ expansion would basically ride on the coattails of that solution. Edit: See links below
It’s a popular feature, but very low on actual priorities. :\
Edit: Actually, it looks like, years after going radio silent following various announcements, they may finally have something close-ish to a beta test
just posted a guide on debian nas with encryption and btrfs… it’s a [wip] … but btrfs can add/remove disks with witchcraft … and is probably good enough for most DIY folks and their DIY data …
it’s probably good enough for more than DIY folks and their DIY data if they have another backup
Ok, I figured something was going to hang me up. You guys nailed it and it showed up just as @Log stated.
Luckily it’s just a test so there’s no data on it yet. I’ll start over on it. I really thought it was a feature of zfs to be able to expand it on demand.
Not being able to add a drive aside, I can’t believe how easy ZFS is to get going with. I know getting into the details will take some time, but I’m really impressed. Going to try some speed testing against the raid 6 array. Not that it matters but I’m curious.
If you’re on a recent version of ZFS, it may be possible to recover from adding a single vdev. It’s also wise to create a checkpoint prior to doing any kind of disk operation on the pool. You can do this by running zpool checkpoint <pool>. Keep in mind you can only have one checkpoint per pool.
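For example (using poolz1; rewinding requires an export/import cycle):
# create a checkpoint before doing anything risky
zpool checkpoint poolz1
# if things go wrong, rewind the whole pool back to the checkpoint
zpool export poolz1
zpool import --rewind-to-checkpoint poolz1
# if everything went fine, discard the checkpoint to release the held space
zpool checkpoint -d poolz1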
To remove a mirror or single device:
zpool remove <pool> <vdev>
From the zpool(8) man page:
Removes the specified device from the pool. This command currently only supports removing hot spares, cache, log devices and mirrored top-level vdevs (mirror of leaf devices); but not raidz.
I’ve done this to remove a mirrored vdev from an over-provisioned pool.
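In the OP’s case, that would look something like this, assuming the ZFS version supports top-level device removal (the device name is whatever zpool status shows for the accidentally added vdev; this one is a placeholder):
# remove the accidentally added single-disk top-level vdev;
# ZFS migrates its data onto the remaining vdevs first
zpool remove poolz1 ata-EXAMPLE_DISK
# watch the removal/evacuation progress
zpool status -v poolz1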