ZFS Migration Hell

This week I started my data migration from my old TrueNAS server (4 drives, RAID 10) to my new server (6 drives, RAID-Z2). Here’s the scope of my problem:

  • I don’t have a motherboard that supports 10 drives (data transfer must be done over Ethernet)
  • Jails and config must be retained. There’s a lot of scripting that I don’t want to set back up.

I first tried the Replication task. It copied most of my files, but not all of them. I wiped the new server and tried again, but by that time the replication task was broken.

So for anyone who is going through “cannot unmount” or “broken pipe” hell, this is what I did:

  1. Wipe the new pool and make sure no snapshots are left on it.
  2. Destroy all snapshots on the old server.
  3. Create a new snapshot covering everything.
  4. zfs send using the new snapshot (rough sketch below).
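
For reference, the commands behind steps 2–4 look roughly like this. Pool, snapshot, and host names (oldpool, newpool, migrate, newserver) are placeholders, and the destroy loop is not reversible, so double-check which machine you run it on:

# old server: remove every existing snapshot (destructive!)
zfs list -H -t snapshot -o name -r oldpool | xargs -n1 zfs destroy

# old server: take one fresh recursive snapshot of everything
zfs snapshot -r oldpool@migrate

# old server: send it all to the freshly wiped pool on the new box
zfs send -R oldpool@migrate | ssh root@newserver zfs recv -F newpool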

On to my current problem: I transferred from zpool “Dataset1” to “DatasetN”. Changing the name was suggested to me in some guide. I then imported the config from the old server and ran zpool import DatasetN Dataset1. All the files transferred and show up in the directory. However, the TrueNAS WebUI Storage section only shows one filesystem. I’m going to spend the next day working this out and will update here if I find a solution. Open to suggestions.
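
For anyone following along, the rename happens at import time; the pool has to be exported first, then re-imported under the old name:

zpool export DatasetN
zpool import DatasetN Dataset1   # imports the pool currently named DatasetN under the name Dataset1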

I can’t help with TrueNAS-specific questions. Next time, try Syncoid, which is part of the Sanoid package. On distributions like Ubuntu it is already available in the official repositories. Copying a dataset over SSH should be a matter of:

syncoid -r sourcepool/sourcedataset user@remote:targetpool/targetdataset

It is even resumable. The only caveat might be that you need to give the receiving user on the remote host privileges to run zfs commands without being asked for a password. So you either send to root@remotehost or you put a NOPASSWD line for the receiving user in /etc/sudoers on the remote host.
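
A sketch of that sudoers entry, assuming the receiving user is called backup (adjust the user name and the path to the zfs binary for your system):

backup ALL=(root) NOPASSWD: /sbin/zfs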

I thought JRS coded it so that you can send over SSH, and if you’ve copied your keys across, then a login isn’t required?
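
(Copying the keys across being just something like this, assuming the usual OpenSSH setup:)

ssh-copy-id user@remotehost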

Permissions to write/mount on the other hand… I’m not so sure.

Also not sure Mr S made it compatible with BSD? [edit, he has some instructions for BSD compatibility]

But I really like sanoid/syncoid between 'buntu boxes.

Wasn’t there a tool called “xfer” or similar used on BSD for this?

But either way, I would suggest separating data from the OS, for easier transfers / separation / backing up.
None of which helps OP :frowning:

It’s BSD-based; I did not think about that.

Syncoid runs a long command with many pipes. It needs to run sudo zfs receive as the last step on the remote host and can’t prompt for a password.
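
In principle you could also delegate the needed permissions to the receiving user with zfs allow instead of passwordless sudo, something like the line below, but I haven’t tested whether Syncoid is happy with that:

zfs allow someuser create,mount,receive targetpool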

So my current plan is to rename the directories, create the volumes in the dataset, and move the directories’ contents into the volumes. Hopefully, after reloading the config, the jails will just run on startup. Unfortunately, moving files across volumes requires rewriting the files completely, so I’ll be waiting a few hours as the size is currently around 4 TB.
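
Roughly what I mean, with made-up names (jailstuff standing in for one of the real directories on the Dataset1 pool):

# move the plain directory out of the way
mv /mnt/Dataset1/jailstuff /mnt/Dataset1/jailstuff.old

# create a real dataset where the directory used to be
zfs create Dataset1/jailstuff

# move the contents in (this is the part that rewrites every file; dot-files need an extra pass)
mv /mnt/Dataset1/jailstuff.old/* /mnt/Dataset1/jailstuff/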

If that doesn’t work, I’ll be resending the files using syncoid as @anon86748826 suggested. Thanks for the help so far.

Hello,
This type of transfer is something I’ve done many times over the years both in my lab and IRL.

The key here is that the ZFS send/receive this relies on is snapshot-based. Configure your snapshots correctly, selecting each dataset individually, then use the replication feature to move those snapshots from one system to the other.

If you need to do the same with iSCSI, you will need to retain your current configuration exactly, including the WWN of your extent.

If you’re comfortable with the command line, taking a snapshot and sending it over SSH would probably be your best option.

zfs snap pool/dataset@tobemoved
zfs send pool/dataset@tobemoved | ssh user@host zfs recv pool/dataset

If you have nested datasets, you need to take the snapshot recursively:

zfs snap -r pool/dataset@tobemoved

And to send recursively:

zfs send -R pool/dataset@tobemoved | ssh user@host zfs recv pool/dataset

Have you checked whether Syncoid also runs on BSD, or are you using Linux to copy the datasets? I forgot that TrueNAS is BSD-based. I have only used Syncoid on Linux so far!

It’s compatible according to the Git repo. I would just need to change the Perl mapping.

The mv stopped after 1 TB last night. Awesome. I’m resuming it and have attached a notification for when the mv command ends.
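
For the notification I’m just chaining a command after the mv, something along these lines (paths shortened, and it assumes local mail delivery is set up):

mv /mnt/Dataset1/jailstuff.old/* /mnt/Dataset1/jailstuff/ ; echo "mv finished with exit status $?" | mail -s "mv done" root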

Files transferred! I had to copy the last few directories by hand as the old drives started erroring out and locking up ZFS send.

Took longer than expected thanks to:

  • System crashes (New and Old Server)
  • Power Outages
  • Dead Network Switch

The jails did come back and boot up. However, the services inside them are not starting up. At this point I’m just making new jails and transferring the configs between them.


For those looking to do a total TrueNAS migration in the future, here are the steps I would take:

  1. Export the TrueNAS config of the old server.
  2. Install the same TrueNAS version on the new server.
  3. Create the zpool on the new server with the same name.
  4. Optional: delete all snapshots. (If you don’t need the file history, it can be a lot of extra data to move.)
  5. Create a new snapshot recursively over all datasets.
  6. Boot Ubuntu on the old server and mount the zpool.
    (TrueNAS cannot get its grubby hands off of the zpool no matter what I tried. It even remounted an exported pool halfway through a zfs send, locking it up.)
  7. zfs send the datasets (rough sketch after this list).
  8. Upload the config to the new server.
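
The sketch for steps 6 and 7, from the Ubuntu live environment on the old server. tank, migrate, and newserver are example names; per step 3, both pools carry the same name:

# import the old pool read-only so nothing can touch it mid-transfer
zpool import -o readonly=on tank

# send everything recursively from the snapshot taken in step 5
zfs send -R tank@migrate | ssh root@newserver zfs recv -F tank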

Takeaways:

  • I’m still a firm believer in the ZFS file system, but I’m about done with TrueNAS and BSD. I cannot trust the TrueNAS web GUI, and I’m failing to see the benefit of jails over Docker.
  • Make more datasets. Compartmentalizing storage at the ZFS level just makes life so much easier.
  • For TrueNAS, never store data directly on the root dataset. TrueNAS will store some important files there, and it can make migration annoying when they keep remounting.

Anyways, the old HDDs are blown out. They were heavily used to begin with, and then I had to restart the stupid transfers more times than I can count.

Thanks for the support everyone!
