
ZFS - RAID pool degrades if I plug in another drive, or unplug a drive that isn't even in the pool

I’m committing the cardinal sin of asking the same question in two different places. However, after posting on AskUbuntu, I thought I might have better luck here.


In an effort to test what impact adding a ZFS log device would have on a ZFS array, I decided to create a zpool and perform some benchmarks before plugging in an SSD to act as the ZIL.

Unfortunately, whenever I plug in the SSD after having created the zpool, or unplug the SSD after having created the pool (anything that causes drive letters to change after the pool has been created) and then reboot, the pool becomes degraded, as shown by running sudo zpool status:

  pool: zpool1
 state: DEGRADED
status: One or more devices could not be used because the label is missing or
	invalid.  Sufficient replicas exist for the pool to continue
	functioning in a degraded state.
action: Replace the device using 'zpool replace'.
  scan: none requested

config:

	NAME                     STATE     READ WRITE CKSUM
	zpool1                   DEGRADED     0     0     0
	  mirror-0               DEGRADED     0     0     0
	    sda                  ONLINE       0     0     0
	    1875547483567261808  UNAVAIL      0     0     0  was /dev/sdc1

I suspect the problem stems from the fact that I created the pool using the drive letters like so:

sudo zpool create -f zpool1 mirror /dev/sdb /dev/sdc


Luckily for me, this is just a test, so there is no risk of losing data. But should this happen in a real-world scenario, what is the best way to recover from this issue? Obviously the drive still exists and is ready to go.

Is there a better way to create the zpool, without using the drive letters like /dev/sda, to avoid this problem in the future? I notice that the Ubuntu documentation creates a zpool in the same manner that I did.

Extra Info

  • OS: Ubuntu Server 16.04, kernel 4.10
  • Installed ZFS via the zfsutils-linux package

You can use the disk ID (/dev/disk/by-id/), as this won’t change on reboot. I'm not sure how you can change it once the pool is set up, short of removing and then re-adding each disk one at a time. I would have thought the drive letters wouldn’t matter, as once it’s formatted it should be able to work itself out, but apparently not.
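Creating the pool with by-id paths from the start could look something like this (a sketch only; the two ata-* names below are hypothetical placeholders, not real device IDs):

```shell
# List the persistent IDs for your drives first:
ls -l /dev/disk/by-id/

# Create the mirror using by-id paths instead of sdX letters.
# Substitute your actual entries for these made-up examples.
sudo zpool create -f zpool1 mirror \
  /dev/disk/by-id/ata-EXAMPLE_MODEL_SERIAL1 \
  /dev/disk/by-id/ata-EXAMPLE_MODEL_SERIAL2
```

Because the by-id symlinks are derived from the drive's model and serial number, they stay stable no matter what order the kernel enumerates the disks in.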


Awesome. I’ll test whether that works now.

Just finished testing and it worked both ways (plugging in a drive after pool creation and unplugging a drive after having created the pool).

Any ideas how to resolve the label issue for other people who run into this problem once they are already in this situation?

Might be able to do it when importing the pool; otherwise the only way I can think of is to remove a disk, add it back using the disk ID, and let the array rebuild before repeating the process for each remaining disk. Not sure if there’s a better way.
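The remove-and-re-add route described above might look like this for a two-disk mirror (a hedged sketch: the pool name matches the thread, but the device names are examples and a resilver is needed between steps):

```shell
# Detach the sdX-named half of the mirror, then attach the same
# physical disk back under its persistent by-id name and let it
# resilver. Repeat for each remaining disk.
sudo zpool detach zpool1 /dev/sdc
sudo zpool attach zpool1 sda /dev/disk/by-id/ata-EXAMPLE_MODEL_SERIAL2

# Wait for the resilver to finish before touching the next disk:
sudo zpool status zpool1
```

This is slower than an export/import, since each attach triggers a full resilver, but it keeps the pool online throughout.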

I just managed to get it to work by doing as you say:

sudo zpool export zpool1
sudo zpool import -d /dev/disk/by-id zpool1

However, I need to test whether this causes issues for data written between the reboot and the export/import. I haven’t actually tried writing anything since the pool degraded, and maybe it won’t even let me (fingers crossed).


Just tested by writing some text files both before and after performing the export/import. zpool status shows no data errors and my text files all look fine.

If you want to post these two parts as an answer, I’ll mark it as correct to give you the credit. If you don’t want to, or just CBA, let me know and I’ll post the answer.

i.e. use /dev/disk/by-id to prevent this issue in the future, and use export/import either to convert an existing pool or to fix the issue once it arises.



So I did what this said, as I had the same issue.

However, I need some help with it.

In my /dev/disk/by-id/ I have nice names with the manufacturer, model number and serial number, but after doing what you did, it decided to switch to the wwn-* number, which I have no way of associating with a physical disk.

How do I get it to use the ata-HGST name so I can tell which disk it is talking about?
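For mapping the names: both the wwn-* and ata-* symlinks in /dev/disk/by-id point at the same underlying block device, so listing the directory shows which wwn belongs to which disk. To make the import pick the ata-* names, one possible approach (hedged and untested here; the directory path is arbitrary) is to import from a directory that contains only those symlinks:

```shell
# Both name styles resolve to the same device, e.g.
#   ata-HGST_...  -> ../../sdb
#   wwn-0x...     -> ../../sdb
ls -l /dev/disk/by-id/

# Build a directory holding only the ata-* links (recreated as
# absolute symlinks, since the originals are relative), then import
# from it so ZFS cannot choose the wwn-* names.
mkdir -p /tmp/byid-ata
for l in /dev/disk/by-id/ata-*; do
  ln -sf "$(readlink -f "$l")" "/tmp/byid-ata/$(basename "$l")"
done
sudo zpool export zpool1
sudo zpool import -d /tmp/byid-ata zpool1
```

After the import, zpool status should display the ata-* names instead of the wwn-* ones.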