Replacing a special vdev in a raidz pool

Hi there
I have a raidz3 pool with a mirrored special metadata vdev for better performance.
Right now this is running in a VM: the server runs Ubuntu 20.04 inside VMware.
The raidz3 disks are attached via a SAS HBA that is passed through directly to the VM.
The special vdev, however, is presented via VMDKs placed on two different NVMe disks.
I would like to reconfigure this to run TrueNAS Scale directly on the server instead of VMware. And this would work great, if it were not for the special metadata devices, which are kinda “locking” me to the current setup :slight_smile:
As far as I can read, and from what I have tested with TrueNAS, you are not able to remove a special vdev from a pool which contains a raidz vdev…
(Is there a known workaround for this, or a fix coming in the future?)
Another way I could see would be to present an NVMe disk directly to the VM and then replace each VMDK-based mirror member with the NVMe disk… I would think this should be possible?
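Roughly something like this is what I have in mind (the pool name and device names below are only placeholders, not my actual ones):

    zpool status pool                           # identify the two current special mirror members
    zpool replace pool vmdk-special-0 nvme0n1   # swap the first VMDK member for the passed-through NVMe disk
    zpool replace pool vmdk-special-1 nvme1n1   # then the second one, once the first resilver has finished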

Just to be sure, the special metadata device is not a device you want to lose?
I guess it holds the “inodes” of the filesystem, but what would happen if it got lost… is there a way to rebuild it from the pool?

(I know… backup is your friend in this case…) :slight_smile:

Any input is appreciated

/Beardmann

If you lose a vdev, you lose your pool. There is no ZFS mechanism to remove a top-level vdev from a pool, except for a pool consisting entirely of mirrors. The only way to “do” this is to destroy the pool and restore from backup.
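To illustrate (pool and vdev names here are just placeholders), device removal simply refuses to run as soon as the pool contains a raidz vdev:

    zpool remove pool mirror-1   # attempt to remove the special mirror (a top-level vdev)
    # On a pool that also contains raidz vdevs this is rejected; top-level vdev removal
    # only works for pools built entirely from mirrors and/or single disks.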

Not sure what you mean by that. What is a VMDK-based mirror?

You can replace/detach as usual for any mirror vdev.
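For example, something along these lines (names are placeholders):

    zpool attach pool vmdk-special-0 nvme0n1   # add the new disk as an extra member of the special mirror
    zpool status pool                          # wait for the resilver to finish
    zpool detach pool vmdk-special-0           # then drop the old VMDK-backed member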

Just a note about the VMDK “mirror”:
My current special vdev consists of two virtual disks presented by ESXi; they are VMDK files on two separate NVMe disks that the ESXi server controls.
As I mentioned, I would like to move towards a “bare-metal” TrueNAS, so I have to get rid of the ESXi-presented disks. The plan is to add two other NVMe disks to the ESXi host, but present them directly to the VM that has the zpool… and then replace the special vdev mirror members with zpool replace…

Anyway, it makes sense in my head, and I think I will give it a try :slight_smile:


Creating a temporary file on a hard drive, then switching each vmdk to it and back to bare/basic drives is the way I would go too…

Or, more verbosely (see the rough command sketch after this list), and it looks like you were already expecting it:
Create a holding (temporary) file
Attach the temp file to one side of the special mirror
Remove (detach) that special mirror VMDK
Attach the drive that contained the removed VMDK to the remaining VMDK
Remove (detach) that VMDK, and use its drive to replace the temporary file.
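A rough sketch of those steps, assuming the pool is called “pool” and with made-up device and file names (wait for each resilver to finish before the next detach/replace):

    truncate -s 600G /mnt/spare-hdd/special-temp.img                   # holding file, at least as big as the special disks
    zpool attach pool vmdk-special-0 /mnt/spare-hdd/special-temp.img   # mirror it in alongside the VMDKs
    zpool detach pool vmdk-special-0                                   # free the NVMe drive behind the first VMDK
    zpool attach pool vmdk-special-1 nvme0n1                           # attach that drive, now presented bare, to the mirror
    zpool detach pool vmdk-special-1                                   # free the second drive
    zpool replace pool /mnt/spare-hdd/special-temp.img nvme1n1         # and swap the temp file for it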

That would also be a way to do it, but I prefer to do it onto physical disks… so I added two 1.2 TB 10K SAS disks for my system’s “special vdev” and attached one of them to the mirror (zpool attach…). It is now resilvering… at about 15-20 MB/s (with about 200 IOPS), which I just do not understand… first of all, the two other disks in the mirror are both NVMe disks.
The pool has close to no load… The disk I added is a spinning-rust disk, yet it is a high-performance enterprise disk (Seagate ST1200MM0018).
Also, the resilvering progress estimate is way off: it tells me 99.97% has been done with 00:00:02 to go, and it has been telling me this for several hours… my own estimate is that it will take 12-15 hours (for a mirror that is 600 GB) :wink:

I guess this is yet another example where performance can be improved…

Now I’m just hoping that the resilver will only copy the data actually in use on the mirror… zpool list -v shows that only 100 GB of the 600 GB is allocated right now… yet I doubt it…
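For anyone following along, this is what I keep an eye on while it runs (pool name is a placeholder):

    zpool status pool     # resilver speed, percent done and the (optimistic) time estimate
    zpool list -v pool    # SIZE vs. ALLOC per vdev, i.e. how much data actually sits on the special mirror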


Well… I managed to mirror the VMDK-based special vdev members over to physical disks.
So far so good…
I then shut down the VM, moved the HBA over to another physical server where TrueNAS was already installed…
I fixed the /etc/zfs/vdev_id.conf and I was able to see my pool with “zpool import”.
Yet, when I tried to import it with “zpool import -f -d /dev/disks/by-vdev pool”, it failed with an “I/O error” and told me I had to restore my pool…
Very strange…

Just to verify that the pool was OK, I reversed the operation, and it mounted OK on the Virtual Ubuntu Machine…

I must admit that I forgot two things in this process:

  1. I didn’t do an export of the pool on the source host (but an “import -f” should take care of that? — see the commands after this list).
  2. On the first try, I forgot that my special vdev disks were on another path which was not connected… so it failed. I then attached the other path, and it was able to see the devices OK, but it still complains and will not import the pool…
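What I plan to do on the next attempt, roughly (pool name is a placeholder; I’m assuming the standard /dev/disk/by-vdev path that vdev_id.conf creates):

    zpool export pool                        # on the Ubuntu VM, before moving the HBA over
    # ...move the HBA plus both special vdev paths to the TrueNAS box...
    zpool import -d /dev/disk/by-vdev        # list importable pools and check that every vdev is visible
    zpool import -d /dev/disk/by-vdev pool   # then import; no -f should be needed after a clean export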

The pool I am trying to import was created on Ubuntu 20.04 with OpenZFS… if I do a “zpool get version pool” I just get a “-”, which I think is normal?
But can that be the reason?
zpool version returns:
zfs-0.8.3-1ubuntu12.13
zfs-kmod-0.8.3-1ubuntu12.13
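In case it matters, this is how I looked at it (pool name is a placeholder):

    zpool get version pool              # "-" just means the pool uses feature flags rather than a legacy version number
    zpool get all pool | grep feature@  # shows the individual feature flags and whether they are enabled/active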

And sorry that I do not have the specific error message; I did it via the web console, which is gone now that the server is offline again…

Really annoying issue… does anyone have any ideas before I give it another try with some more documentation… :slight_smile: